Understanding AWS S3 Pricing, Hidden Costs & Cost-Saving Strategies


Amazon S3 (Simple Storage Service) is often the first AWS service developers interact with, and for good reason. It's highly scalable, highly durable, and easy to use. Whether you're storing website assets, application logs, backups, or massive data lakes, S3 is usually the answer.
But here’s the catch: while storing data in S3 might seem inexpensive at first glance, the bills can quickly balloon if you’re not paying attention. What seems like “just a few cents per GB” can quietly transform into a multi-thousand-dollar monthly bill, especially in data-heavy or high-traffic environments.
In this blog, I want to break down the full AWS S3 pricing model, highlight the commonly missed cost traps, and most importantly, provide practical tips to help you reduce your S3 bill without compromising performance or reliability.
Core AWS S3 Pricing Components
At first glance, AWS S3 seems straightforward: you pay to store data. But in reality, S3 pricing is made up of multiple components, and depending on how your application uses S3, these can add up in unexpected ways.
Let’s break it down one by one:
2.1. Storage Costs
This is the most obvious cost — the price per GB of data stored per month.
Storage Classes and Their Costs:
S3 Standard: Best for frequently accessed data.
S3 Standard-IA (Infrequent Access): Cheaper, but with retrieval charges.
S3 One Zone-IA: Lower cost, less durability (no multi-AZ redundancy).
S3 Glacier & Glacier Deep Archive: Meant for archival data with retrieval delays.
Each class has different pricing per GB, and AWS charges based on the average storage used per month.
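The per-class math is just average GB-months stored times a per-GB rate. As a rough sketch, with illustrative prices (approximate us-east-1 list prices at the time of writing; always check the current AWS pricing page):

```python
# Illustrative per-GB monthly prices (approximate us-east-1 list prices;
# verify against the current AWS pricing page before relying on these).
PRICE_PER_GB = {
    "STANDARD": 0.023,
    "STANDARD_IA": 0.0125,
    "ONEZONE_IA": 0.01,
    "GLACIER": 0.004,
    "DEEP_ARCHIVE": 0.00099,
}

def monthly_storage_cost(avg_gb_stored: float, storage_class: str) -> float:
    """AWS bills on average GB stored per month: GB * per-GB rate."""
    return avg_gb_stored * PRICE_PER_GB[storage_class]

# 1 TB (1,000 GB) held for a month in each class:
for cls in PRICE_PER_GB:
    print(f"{cls:>12}: ${monthly_storage_cost(1000, cls):.2f}")
```

The spread is striking: the same terabyte costs roughly 20x more in Standard than in Deep Archive, which is why class selection matters so much.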
2.2. Request & Data Retrieval Costs
AWS charges for the number of API operations (like GET, PUT, and DELETE) you perform.
PUT/COPY/POST/LIST: Charged per 1,000 requests.
GET/SELECT/Other Reads: Also charged per 1,000 requests, with Standard-IA and Glacier costing more.
For example, 1 million GET requests from S3-IA will cost significantly more than from S3 Standard.
Retrieving data from Glacier and Deep Archive adds retrieval fees depending on speed (Expedited, Standard, Bulk).
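Because requests are billed in blocks of 1,000, the math is easy to sketch. The GET rates below are assumptions for illustration (roughly $0.0004 per 1,000 for Standard and $0.001 per 1,000 for Standard-IA; check the pricing page for current figures):

```python
def request_cost(n_requests: int, price_per_1k: float) -> float:
    """S3 bills API requests per 1,000 calls."""
    return (n_requests / 1000) * price_per_1k

# Illustrative GET rates (assumed; verify on the AWS pricing page):
standard = request_cost(1_000_000, 0.0004)    # 1M GETs from S3 Standard
infrequent = request_cost(1_000_000, 0.001)   # 1M GETs from Standard-IA
print(f"Standard: ${standard:.2f}, Standard-IA: ${infrequent:.2f}")
```

Note that IA also adds a per-GB retrieval fee on top of the per-request charge, so the real gap for read-heavy workloads is even wider.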
2.3. Data Transfer Costs
Inbound (uploads to S3): Free.
Outbound (downloads from S3):
Free for the first 100 GB/month (AWS's free data transfer out allowance, aggregated across services and regions).
Charged beyond that based on destination — within AWS, same region, different region, internet, etc.
Cross-region replication and VPC endpoint access may also incur data transfer charges.
2.4. Lifecycle Management & Intelligent Tiering
Lifecycle Transitions: When you automatically move data between tiers (e.g., Standard → IA → Glacier), AWS charges per 1,000 transitions.
S3 Intelligent-Tiering: Automatically moves objects between access tiers, but:
Monitoring and automation add ~$0.0025 per 1,000 objects per month.
Objects smaller than 128 KB aren't auto-tiered; they stay in the Frequent Access tier (under current pricing they are excluded from the monitoring fee, but they never benefit from the cheaper tiers either).
2.5. Additional Charges
Other costs often missed:
Versioning: Stores multiple copies of the same object — each version billed.
S3 Inventory & Analytics: Useful, but creates new files (charged as storage + analytics requests).
Object Lock: Used for compliance; it requires versioning, so every protected object carries version storage overhead.
Replication: Cross-region replication involves PUT and storage charges in the destination bucket.
Multipart Uploads: If incomplete, they sit and accumulate cost.
S3 Object Lambda: Extra compute = extra costs.
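Forgotten multipart uploads are one of the easiest charges to surface. The helper below summarizes a page of results shaped like the `list_multipart_uploads` API response; the boto3 call itself is shown in a comment, and the bucket name is made up:

```python
from datetime import datetime, timezone

def summarize_incomplete_uploads(uploads, now=None):
    """Given 'Uploads' entries from list_multipart_uploads, report how
    many exist and how old (in days) the oldest one is."""
    now = now or datetime.now(timezone.utc)
    if not uploads:
        return {"count": 0, "oldest_days": 0}
    oldest = min(u["Initiated"] for u in uploads)
    return {"count": len(uploads), "oldest_days": (now - oldest).days}

# With boto3 (assumed credentials and bucket):
#   page = boto3.client("s3").list_multipart_uploads(Bucket="my-bucket")
#   print(summarize_incomplete_uploads(page.get("Uploads", [])))

sample = [{"Key": "big.bin", "Initiated": datetime(2024, 1, 1, tzinfo=timezone.utc)}]
print(summarize_incomplete_uploads(sample, now=datetime(2024, 3, 1, tzinfo=timezone.utc)))
```

A two-month-old incomplete upload is almost always safe to abort; the parts are invisible in normal listings but billed as storage the whole time.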
Let's walk through a scenario showing how an S3 bill adds up:
Imagine a moderately busy SaaS analytics startup called "MandalAI", storing customer data and logs on S3. Here's their setup:
Storage Usage (Monthly Averages)
| Bucket Name | Use Case | Storage Class | Data Volume | Cost/GB/Month | Est. Cost |
| --- | --- | --- | --- | --- | --- |
| customer-data | Frequently accessed data | S3 Standard | 1 TB | $0.023 | $23.00 |
| logs-prod | App logs (infrequently read) | S3 Standard-IA | 2 TB | $0.0125 | $25.00 |
| backups | Weekly snapshots | Glacier | 3 TB | $0.004 | $12.00 |

Subtotal Storage Cost: $60.00
Request Usage (Monthly Averages)
| Request Type | Source Bucket | # Requests | Cost per 1K | Est. Cost |
| --- | --- | --- | --- | --- |
| PUT/COPY/POST | customer-data | 5 million | $0.005 | $25.00 |
| GET | customer-data | 10 million | $0.0004 | $4.00 |
| GET | logs-prod | 2 million | $0.01 | $20.00 |
| Lifecycle Transitions | logs-prod | 2 million | $0.01 | $20.00 |

Subtotal Request & Transition Cost: $69.00
Data Transfer Usage (Monthly Averages)
| Type | Volume | Rate/GB | Est. Cost |
| --- | --- | --- | --- |
| Outbound to the internet (logs) | 300 GB | $0.09 | $27.00 |
| Cross-region replication (100 MB daily) | 3 GB | $0.02 + PUT requests | ~$6.00 |

Subtotal Data Transfer Cost: $33.00
Miscellaneous Charges
| Feature | Notes | Est. Monthly Cost |
| --- | --- | --- |
| S3 Inventory on 2 buckets | 1 report/day, 500K objects each | $5.00 |
| Versioning on customer-data | 10% object duplication (100 GB) | $2.30 |
| Multipart uploads (left uncleaned) | 10 GB of unused parts | $0.23 |

Subtotal Miscellaneous: $7.53
Estimated Monthly S3 Bill for MandalAI:
| Category | Estimated Cost |
| --- | --- |
| Storage | $60.00 |
| Requests + Lifecycle | $69.00 |
| Data Transfer | $33.00 |
| Miscellaneous | $7.53 |
| Total | $169.53 |
This example shows how an S3 bill can sneak past $150–200/month even with modest usage, mainly because of frequent requests, data transfer, lifecycle transitions, and overlooked features like incomplete multipart uploads and versioning.
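The numbers in these tables can be reproduced with a few lines of arithmetic. Here the replication line item is taken directly as the example's ~$6.00, since it bundles transfer and PUT charges:

```python
# Storage: average GB stored * per-GB rate (1 TB = 1,000 GB here)
storage = 1000 * 0.023 + 2000 * 0.0125 + 3000 * 0.004   # $60.00

# Requests: billed per 1,000 calls
requests = (
    (5_000_000 / 1000) * 0.005     # PUT/COPY/POST on customer-data
    + (10_000_000 / 1000) * 0.0004  # GET on customer-data
    + (2_000_000 / 1000) * 0.01     # GET on logs-prod (IA)
    + (2_000_000 / 1000) * 0.01     # lifecycle transitions
)                                   # $69.00

transfer = 300 * 0.09 + 6.00        # $33.00 (replication taken as ~$6)
misc = 5.00 + 100 * 0.023 + 10 * 0.023  # inventory + versions + multipart

total = storage + requests + transfer + misc
print(f"${total:.2f}")
```

Notice that requests, not storage, are the single biggest line item; that's the part most teams never estimate up front.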
Why is my S3 bill so high?
Even seasoned AWS users often get blindsided by their S3 bill. It's not that the pricing is hidden; it's that some usage patterns silently accumulate costs over time.
Let’s look at the most common traps:
Over-Reliance on the Standard Storage Class
By default, everything you upload goes to S3 Standard, the most expensive class for active storage. That’s fine for high-access workloads, but not for logs, backups, or archived data.
Many teams forget to set lifecycle policies, leaving cold data to rot in expensive storage.
Orphaned Data
Dev teams often leave behind:
Old environment backups
Test uploads
Retired dataset versions
They may be forgotten, but S3 doesn't forget, and you're still billed.
Excessive Object Versions
Versioning is powerful for data protection, but without proper management:
Every version is stored and billed.
Objects with hundreds of versions (like logs or config files) quickly bloat costs.
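A quick way to spot version bloat is to tally versions per key from a `list_object_versions` response page. A minimal sketch, with the boto3 call shown as a comment and the bucket name made up:

```python
from collections import Counter

def versions_per_key(versions):
    """Count versions per object key from 'Versions' entries,
    as returned by S3's list_object_versions."""
    return Counter(v["Key"] for v in versions)

# With boto3 (assumed bucket):
#   page = boto3.client("s3").list_object_versions(Bucket="my-bucket")
#   counts = versions_per_key(page.get("Versions", []))
#   bloated = {k: n for k, n in counts.items() if n > 10}

sample = [{"Key": "app.log"}, {"Key": "app.log"}, {"Key": "config.yml"}]
print(versions_per_key(sample).most_common(1))  # [('app.log', 2)]
```

Run this periodically and you'll quickly find the handful of frequently rewritten keys that account for most of the version storage.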
Frequent GETs and PUTs on Tiny Objects
Microservices and apps making lots of small API calls can generate:
Millions of GET/PUT requests monthly
A significant request bill, even if object sizes are tiny
High-frequency dashboards, batch processes, or monitoring tools are common culprits.
Multipart Uploads Left Incomplete
AWS charges for the parts already uploaded in incomplete multipart uploads, and if you don't explicitly clean them up, they persist indefinitely.
These typically show up when:
Uploads fail
Automation/scripts break mid-transfer
Nobody monitors the lifecycle
Misconfigured Cross-Region Replication
Cross-region replication is amazing for resilience, but:
You pay for every replicated object (PUT, storage, and transfer)
It can double your storage and request bill if used incorrectly or needlessly (e.g., replicating logs or temp data)
S3 Inventory & Analytics Gone Wild
These features are helpful, but:
Generate their own report objects
Incur storage + request + analytics costs
Can balloon when used on buckets with millions of objects without filters
Trigger-Happy Event Notifications
S3 can notify Lambda, SQS, or SNS on object events. But:
High-frequency events (e.g., real-time log ingestion) lead to excessive compute invocations
Each one may trigger downstream workflows with additional costs
Each of these issues seems small, but together, they create a cost snowball effect. Now that we’ve diagnosed the problems, let’s move on to fixing them.
Cost-Optimization Strategies & Best Practices
Now that we’ve seen how S3 bills can quietly snowball, let’s look at what you can proactively do to reduce and control costs, without compromising on performance or durability.
Use the Right Storage Class for the Right Data
Always ask: How often do I access this data? Based on that, choose the correct class:
| Use Case | Recommended Class |
| --- | --- |
| Frequently accessed (hot) | S3 Standard |
| Occasionally accessed | S3 Standard-IA |
| App logs or backups | S3 One Zone-IA / Glacier |
| Archival & compliance data | Glacier Deep Archive |
Tip: Automate transitions using Lifecycle rules — e.g., move to IA after 30 days, then Glacier after 90.
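That 30-day/90-day tip maps directly onto a lifecycle configuration. Below is the rule as a plain dict (the rule ID and bucket name are made up; the boto3 call that applies it is shown in a comment):

```python
lifecycle_config = {
    "Rules": [
        {
            "ID": "tier-down-cold-data",   # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},      # empty prefix = whole bucket
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
        }
    ]
}

# Applying it with boto3 (assumed bucket):
#   boto3.client("s3").put_bucket_lifecycle_configuration(
#       Bucket="logs-prod", LifecycleConfiguration=lifecycle_config)
print(lifecycle_config["Rules"][0]["ID"])
```

One rule like this, set once, quietly moves cold data down the price ladder forever after.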
Set Up Intelligent Lifecycle Policies
Use Lifecycle Rules to:
Transition data between classes based on age or last access
Delete expired or incomplete multipart uploads
Clean up noncurrent object versions
Pro Tip: Schedule these cleanups monthly — don’t wait for end-of-year purges.
Control Object Versioning
If you must enable versioning:
Set lifecycle expiration for noncurrent versions
Monitor buckets for high version count objects
Example: Keep only the last 3 versions of an object and delete older ones after 30 days.
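That retention example, plus cleanup of abandoned multipart uploads, can also be expressed as a lifecycle rule. `NewerNoncurrentVersions` keeps the three most recent noncurrent versions; the names below are illustrative:

```python
cleanup_rules = {
    "Rules": [
        {
            "ID": "retain-3-versions",     # hypothetical rule name
            "Status": "Enabled",
            "Filter": {"Prefix": ""},
            "NoncurrentVersionExpiration": {
                "NoncurrentDays": 30,          # delete noncurrent versions after 30 days...
                "NewerNoncurrentVersions": 3,  # ...but always keep the newest 3
            },
            # Abort multipart uploads still incomplete after a week
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }
    ]
}

# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="customer-data", LifecycleConfiguration=cleanup_rules)
print(cleanup_rules["Rules"][0]["ID"])
```

Bundling the multipart-abort setting into the same rule means one policy covers both of the silent-cost traps discussed earlier.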
Batch API Calls When Possible
Instead of hammering S3 with individual GET/PUT requests:
Use multipart uploads for large objects instead of many small PUTs
Use S3 Select to read only the bytes you need from large objects, and S3 Inventory for bulk object listings instead of repeated LIST calls
Minimize request-based automations (e.g., real-time polling of object metadata)
Combine files when writing small logs — 1 big object costs less in requests than 1000 tiny ones.
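The "one big object beats a thousand tiny ones" point is easy to quantify on the request side (PUT rate assumed at $0.005 per 1,000 requests for illustration):

```python
PUT_PER_1K = 0.005  # assumed S3 Standard PUT price per 1,000 requests

def put_cost(n_objects: int) -> float:
    """Request cost of writing n_objects, at PUT_PER_1K per 1,000 calls."""
    return (n_objects / 1000) * PUT_PER_1K

tiny = put_cost(1000)    # 1,000 one-line log files
combined = put_cost(1)   # the same data batched into one object
print(f"1,000 tiny PUTs: ${tiny:.4f} vs 1 combined PUT: ${combined:.6f}")
```

The per-write difference looks negligible until you multiply by millions of writes per day, at which point batching becomes one of the cheapest optimizations available.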
Audit and Clean Up Stale or Unused Data
Set up a recurring task to:
Use S3 Inventory to audit old files
Delete test data or unreferenced uploads
Flag buckets growing too fast
Combine with tagging (Environment:prod, Owner:data-team, etc.) for targeted cleanups.
Monitor with AWS Cost Explorer & CloudWatch
Use:
Cost Explorer’s S3 View to detect unusual request patterns
S3 Storage Lens for insights into object age, transitions, and bucket growth
CloudWatch Alarms on S3 API usage or budget thresholds
Bonus: Set alerts for sudden request spikes or cross-region replication traffic.
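For bucket growth specifically, S3 publishes a daily `BucketSizeBytes` metric in the `AWS/S3` CloudWatch namespace. The helper below builds the query parameters (the bucket name passed in is hypothetical; the boto3 call is shown in a comment):

```python
from datetime import datetime, timedelta, timezone

def bucket_size_query(bucket: str, days: int = 14) -> dict:
    """Parameters for CloudWatch get_metric_statistics on S3 bucket size."""
    end = datetime.now(timezone.utc)
    return {
        "Namespace": "AWS/S3",
        "MetricName": "BucketSizeBytes",
        "Dimensions": [
            {"Name": "BucketName", "Value": bucket},
            {"Name": "StorageType", "Value": "StandardStorage"},
        ],
        "StartTime": end - timedelta(days=days),
        "EndTime": end,
        "Period": 86400,            # the metric is reported once per day
        "Statistics": ["Average"],
    }

# With boto3 (assumed credentials):
#   cw = boto3.client("cloudwatch")
#   resp = cw.get_metric_statistics(**bucket_size_query("customer-data"))
print(bucket_size_query("customer-data")["MetricName"])
```

Plotting two weeks of these datapoints makes runaway buckets obvious long before the bill arrives.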
Review Cross-Region Replication
Only replicate what matters:
Critical production data
Compliance or DR-required buckets
Exclude logs, temp files, or ephemeral content from replication rules.
Automate Cleanup with CloudWatch + Lambda
Set up rules that auto-clean:
Stale incomplete multipart uploads
Old object versions
Misconfigured buckets (e.g., without lifecycle rules)
Example: Use a Lambda script triggered weekly to check for buckets without transition policies.
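A minimal sketch of such a Lambda, assuming its execution role can call `ListBuckets` and `GetBucketLifecycleConfiguration` (boto3 is imported inside the handler, where the Lambda runtime provides it):

```python
import json

def is_missing_lifecycle(error_code: str) -> bool:
    # S3 raises NoSuchLifecycleConfiguration for buckets with no rules at all
    return error_code == "NoSuchLifecycleConfiguration"

def lambda_handler(event, context):
    import boto3  # available by default in the AWS Lambda Python runtime
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        try:
            s3.get_bucket_lifecycle_configuration(Bucket=bucket["Name"])
        except s3.exceptions.ClientError as err:
            if is_missing_lifecycle(err.response["Error"]["Code"]):
                flagged.append(bucket["Name"])
    # Hook this up to SNS/Slack, or just log it for a weekly review
    return {"statusCode": 200, "body": json.dumps({"no_lifecycle": flagged})}
```

Trigger it from a weekly EventBridge schedule and every new bucket without a transition policy gets surfaced before its cold data starts piling up in Standard.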
These strategies won’t just reduce costs, they’ll make your S3 usage sustainable and future-proof.
Written by

Raju Mandal
A digital entrepreneur, actively working as a data platform consultant. A seasoned data engineer and architect with experience in the Fintech and Telecom industries, a passion for data monetization, and a penchant for navigating the intricate realms of multi-cloud data solutions.