Day 80 of 90 Days of DevOps Challenge: AWS Simple Storage Service

Vaishnavi D

Yesterday, on Day 79, I connected the dots with VPC Peering: understanding how AWS enables two VPCs to communicate privately without ever touching the public internet. It was like building secret tunnels between two castles: secure, hidden, and direct. I now understand the power of carefully designed private network connections.

Today, I’m shifting gears from networking to storage, specifically Amazon S3 (Simple Storage Service). If AWS were a city, S3 would be its warehouse district: vast, organized, and able to store just about anything you can imagine.

What is AWS S3?

Amazon S3 (Simple Storage Service) is AWS’s object storage service that lets you store and retrieve unlimited amounts of data from anywhere in the world. Unlike your computer’s file system, which organizes files in folders, S3 stores data as objects inside buckets.

  • Buckets → Top-level containers for your data, like the name of your warehouse.

  • Objects → The actual data (file content) plus metadata (file name, size, type).

  • Keys → The unique identifier (name) for each object within a bucket.
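In code terms, a bucket behaves like a flat map from keys to objects; the "folders" you see in the S3 console are just key prefixes. A minimal pure-Python sketch of this model (illustrative only, not an AWS API):

```python
# Illustrative model: S3 stores objects in a flat namespace per bucket.
# "Folders" are a console convenience built from key prefixes.

bucket = {}  # maps key -> object (data + metadata)

def put_object(key, body, content_type="application/octet-stream"):
    """Store an object: the data itself plus a little metadata."""
    bucket[key] = {
        "body": body,
        "metadata": {"content_type": content_type, "size": len(body)},
    }

put_object("images/logo.png", b"\x89PNG...", content_type="image/png")
put_object("logs/2024/app.log", b"started\n", content_type="text/plain")

# There is no real "images/" directory; listing by prefix simulates folders.
image_keys = [k for k in bucket if k.startswith("images/")]
print(image_keys)  # ['images/logo.png']
```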

S3 is built for 11 nines of durability (99.999999999%), meaning data loss is vanishingly rare. AWS automatically replicates your data across multiple Availability Zones, keeping it durable, highly available, and always within reach.

It offers unlimited storage, strong security controls, and the ability to keep multiple versions of files, protecting you from accidental deletions or overwrites.

How AWS S3 Works

  1. Create a Bucket → Give it a globally unique name and choose a region.

  2. Upload Objects → Store files along with their metadata.

  3. Control Access → Use IAM policies, bucket policies, or Access Control Lists (ACLs) to decide who can see or modify data.

  4. Access Data → Via AWS Console, CLI, SDKs, or direct HTTP links.
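Step 3 above is often implemented as a bucket policy, a JSON document attached to the bucket. Here's a sketch of a minimal policy granting public read access to objects (the bucket name `my-example-bucket` is a placeholder; real bucket names are globally unique):

```python
import json

# Sketch of a bucket policy allowing anyone to GET objects.
# "my-example-bucket" is a placeholder bucket name.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-example-bucket/*",
        }
    ],
}

print(json.dumps(policy, indent=2))
```

In practice you would attach this document to the bucket via the console, CLI, or an SDK; the structure (Version, Statement, Effect, Principal, Action, Resource) is the same everywhere.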

Under the hood, AWS automatically stores multiple copies of your data across different Availability Zones in your selected region, ensuring durability and availability.

Why is AWS S3 Used?

  • Backup & Restore → Ideal for disaster recovery and archiving, ensuring your critical data is safe and can be restored anytime.

  • Static Website Hosting → Host HTML, CSS, and JavaScript files directly from an S3 bucket without needing a web server.

  • Application Data Storage → Perfect for storing user uploads, log files, media assets, and other application data.

  • Big Data Analytics → Store massive datasets and seamlessly feed them into AWS analytics services like Athena, EMR, or Redshift.

  • Compliance & Long-term Retention → Retain important records for years to meet legal or regulatory requirements.

S3 Storage Classes

S3 Standard

  • High durability & availability (99.999999999% durability, 99.99% availability).

  • Low latency and high throughput performance.

  • For frequently accessed data.

  • High cost compared to other storage classes.

  • Example: Website images, application assets, active datasets.

S3 Intelligent-Tiering

  • High durability & availability (same as S3 Standard).

  • Moves data automatically between frequent and infrequent access tiers based on usage patterns.

  • No retrieval charges; instead, you pay a small monthly monitoring and automation fee per object.

  • Ideal for data with unpredictable access patterns.

  • Example: User-generated content in an app where you can’t predict which files will be accessed again.

S3 Standard-IA (Infrequent Access)

  • High durability across multiple Availability Zones.

  • Cheaper than Standard for storage, but retrieval charges apply.

  • 99.9% availability.

  • Best for data that is rarely accessed but needs to be available instantly when required.

  • Example: Backup snapshots, disaster recovery files.

S3 One Zone-IA

  • Stored in a single Availability Zone (less resilient than Standard-IA).

  • Lower cost than Standard-IA.

  • 99.5% availability.

  • Use only for easily reproducible or non-critical data.

  • Example: Temporary files, secondary copies of logs.

S3 Glacier

  • Designed for archival storage.

  • Retrieval time: minutes to hours, depending on the retrieval option (Expedited, Standard, Bulk).

  • Very low storage cost, higher retrieval cost.

  • Ideal for rarely accessed compliance data.

  • Example: Financial records, old project data.

S3 Glacier Deep Archive

  • Cheapest storage option in S3.

  • Retrieval time: up to 12 hours.

  • Meant for long-term archival (7–10 years).

  • Ideal for data you must keep for legal/compliance reasons, but rarely access.

  • Example: Medical records, historical archives, regulatory compliance logs.
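One way to internalize these trade-offs is as a decision rule: how often is the data read, and how fast must a read be? An illustrative (and deliberately simplified, unofficial) heuristic, using the storage-class names the S3 API expects:

```python
def suggest_storage_class(reads_per_month, max_retrieval_hours, predictable=True):
    """Rough, illustrative mapping from access pattern to an S3 storage class."""
    if not predictable:
        return "INTELLIGENT_TIERING"  # let S3 move it between tiers for you
    if reads_per_month >= 1:
        return "STANDARD"             # frequent access, lowest latency
    if max_retrieval_hours == 0:
        return "STANDARD_IA"          # rare access, but must be instant
    if max_retrieval_hours < 12:
        return "GLACIER"              # minutes-to-hours retrieval is fine
    return "DEEP_ARCHIVE"             # cheapest; retrieval can take ~12 hours

print(suggest_storage_class(100, 0))                    # STANDARD
print(suggest_storage_class(0, 0))                      # STANDARD_IA
print(suggest_storage_class(0, 48))                     # DEEP_ARCHIVE
print(suggest_storage_class(0, 2, predictable=False))   # INTELLIGENT_TIERING
```

Real decisions also factor in object size, minimum storage durations, and retrieval fees, but the shape of the reasoning is the same.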

Power Features of S3

  • Versioning → Keep track of every version of a file so you can restore previous copies after accidental changes or deletions.

  • Lifecycle Policies → Automate data management by moving objects between storage classes or deleting them after a set time.

  • Cross-Region Replication (CRR) → Automatically copy data to another AWS region for disaster recovery or compliance.

  • Event Notifications → Trigger actions (like running a Lambda function or sending an SNS alert) whenever objects are created, modified, or deleted.
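A lifecycle policy, for instance, is just a rule set attached to the bucket. Here's a sketch of one rule that ages log objects through cheaper tiers and then deletes them (the `logs/` prefix and day counts are illustrative choices):

```python
import json

# Illustrative lifecycle configuration: move "logs/" objects to cheaper
# storage classes as they age, then delete them after a year.
lifecycle = {
    "Rules": [
        {
            "ID": "age-out-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},
        }
    ]
}

print(json.dumps(lifecycle, indent=2))
```

Once applied to a bucket, S3 enforces the transitions automatically; no cron jobs or manual cleanup required.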

S3 Pricing Optimization Tips

While S3 is cost-effective, costs can sneak up if you’re not intentional:

  • Choose the right storage class for your data’s access pattern.

  • Use lifecycle rules to automatically move older data to cheaper tiers like Glacier.

  • Enable S3 Intelligent-Tiering for unpredictable workloads.

  • Delete unused objects and old versions to save space and money.

Real-World Examples of S3

S3 is everywhere in the AWS ecosystem, from personal projects to enterprise workloads:

  • Log Storage & Analysis → Store application logs in S3 and analyze them with AWS Athena or feed them into CloudWatch.

  • Static Website Hosting → Host a React or HTML/CSS site directly from an S3 bucket, optionally paired with CloudFront for global CDN delivery.

  • Database Backups → Archive RDS snapshots into S3 Glacier for long-term compliance storage.
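For the static-website case, hosting boils down to two bucket settings: public read access (a bucket policy like the one sketched earlier) and a website configuration telling S3 which object serves the home page and which serves errors. The website configuration is another small JSON document (the document names shown are the common defaults):

```python
import json

# Website configuration used by S3 static website hosting:
# which object serves "/" and which serves error responses.
website_config = {
    "IndexDocument": {"Suffix": "index.html"},
    "ErrorDocument": {"Key": "error.html"},
}

print(json.dumps(website_config, indent=2))
```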

Final thoughts

Amazon S3 may appear to be “just a storage service” at first glance, but its capabilities extend far beyond simply holding files. It underpins a wide range of use cases from hosting static websites and storing application data to enabling complex data pipelines and supporting disaster recovery strategies.

What makes S3 so powerful is how seamlessly it integrates with the rest of the AWS ecosystem. A single file upload can trigger automated workflows, power analytics pipelines, or replicate data across the globe without manual intervention.

By understanding its storage classes, security controls, and lifecycle management features, you can leverage S3 not only as a storage solution but as a strategic component in building scalable, resilient, and cost-effective cloud architectures.
