Amazon S3 Explained: From Secure Storage to Hosting a Website

Md Sharjil Alam
6 min read

The Foundation of Cloud Storage

Every application, every business, and every developer faces a universal problem: “Where do I store my data?” For over a decade, Amazon Web Services has provided a powerful answer: Amazon S3.

S3 is more than just a place to put files; it’s the backbone of countless applications, from startups to global enterprises. In this article, we’ll dive deep into what S3 is, how it achieves its legendary durability, how to secure it, and best of all, we’ll walk through a hands-on lab to host your very own static website on S3.

📚 Table of Contents

  • What is Amazon S3 (Simple Storage Service)?

  • Core S3 Concepts: Buckets, Objects, and the “11 Nines”

  • Securing Your S3 Data: The Golden Rules

  • Hands-On Lab: Hosting a Static Website on S3

  • Conclusion

📦 What is Amazon S3 (Simple Storage Service)?

Amazon S3 is an object storage service that offers industry-leading scalability, data availability, security, and performance. Think of it as an infinitely large, virtual filing cabinet in the cloud where you can store and retrieve any amount of data, from anywhere on the web.

S3 is built on four key principles:

  • 💪 Scalable: Store as little as a single file or as much as petabytes of data. You never need to predict your storage needs.

  • 📈 Highly Available: Your data is stored redundantly across multiple physical locations, protecting it from failure.

  • 🔒 Secure: S3 provides robust security features to control exactly who can access your data.

  • 💵 Cost-Effective: You only pay for what you use, with different storage classes to optimize costs.

What can you store in S3?
Anything. Common use cases include:

  • Website and application assets (images, videos, scripts)

  • Backups and disaster recovery archives

  • Big data analytics and data lakes

  • Software delivery and distribution
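To make the "store and retrieve from anywhere" idea concrete, here is a minimal sketch using boto3, the AWS SDK for Python. The bucket and file names are placeholders, and the bucket is assumed to already exist:

# Minimal boto3 sketch: store a file in S3, then retrieve it
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # placeholder; assumed to already exist

# Store: upload a local file as an object under the key "backups/report.pdf"
s3.upload_file("report.pdf", bucket, "backups/report.pdf")

# Retrieve: download the same object back to a local copy
s3.download_file(bucket, "backups/report.pdf", "report-copy.pdf")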

🧩 Core S3 Concepts

To understand S3, you need to know about three things: Buckets, Objects, and its incredible durability.

  • 1. Buckets: A bucket is a container for your data. Think of it as a top-level folder. Every bucket you create must have a globally unique name across all of AWS (no two users can have a bucket named my-test-bucket).

  • 2. Objects: An object is the actual file you are storing (e.g., a photo, a document, a video). Each object consists of the file data itself and metadata (information about the file, like its last modified date and size). A single object can be up to 5 TB in size.

  • 3. The Magic of “11 Nines” of Durability: You’ll often hear that S3 offers 99.999999999% (11 nines) of durability. What does this mean in the real world? It means that if you store 10,000,000 objects in S3, you can on average expect to lose a single object once every 10,000 years. This incredible reliability is achieved by automatically and redundantly storing your data across multiple Availability Zones.
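To see buckets, objects, and metadata working together, here is a small hedged boto3 snippet (the bucket name and key are placeholders) that uploads an object with custom metadata and then reads the metadata back without downloading the file:

import boto3

s3 = boto3.client("s3")

# Upload an object along with some user-defined metadata
with open("cat.jpg", "rb") as f:
    s3.put_object(
        Bucket="my-example-bucket",      # placeholder bucket name
        Key="photos/cat.jpg",
        Body=f,
        Metadata={"camera": "pixel-8"},  # custom key-value metadata
    )

# Fetch only the metadata, not the object data itself
head = s3.head_object(Bucket="my-example-bucket", Key="photos/cat.jpg")
print(head["ContentLength"], head["LastModified"], head["Metadata"])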

🔐 Securing Your S3 Data: The Golden Rules

Security is paramount, and S3 gives you granular control. Here’s how it works.

Rule #1: Everything is Private by Default.
When you create a new bucket and upload an object, no one outside your AWS account can access it. This is a critical safety feature. To allow access, you must explicitly grant permissions.

Rule #2: “Block Public Access” is Your Master Switch.
Every S3 bucket has a “Block Public Access” setting that is ENABLED by default. This acts as a powerful, account-level and bucket-level safeguard to prevent accidental public exposure. To make a bucket public (like for a website), you must consciously disable this feature.

Rule #3: Use Policies to Grant Granular Access.
You grant permissions using policies, which are JSON documents. There are two main types:

  • IAM Policies: Attached to an IAM user, group, or role. They define what S3 actions that identity is allowed to perform (e.g., “Allow developer-jane to read from company-logs-bucket”).

  • Bucket Policies: Attached directly to an S3 bucket. They define who can access the bucket itself (e.g., “Allow everyone on the internet to read objects from my-public-website-bucket”).
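For example, an IAM policy implementing the “Allow developer-jane to read from company-logs-bucket” idea might look like the sketch below. It would be attached to the developer-jane IAM user, and the bucket name is illustrative:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowReadCompanyLogs",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::company-logs-bucket",
                "arn:aws:s3:::company-logs-bucket/*"
            ]
        }
    ]
}

Note that s3:ListBucket applies to the bucket itself, while s3:GetObject applies to the objects inside it, which is why both ARNs appear.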

🛠️ Hands-On Lab: Hosting a Static Website on S3

Let’s put this all into practice by building and hosting a simple static website.

Step 1: Create Two HTML Files

On your local computer, create two simple files: index.html and error.html.

<!-- index.html -->
<!DOCTYPE html>
<html>
<head>
  <title>My AWS S3 Website</title>
</head>
<body>
  <h1>Welcome!</h1>
  <p>This website is hosted on Amazon S3. Deployed by Sharjil!</p>
</body>
</html>

<!-- error.html -->
<!DOCTYPE html>
<html>
<head>
  <title>Oops! Page Not Found</title>
</head>
<body>
  <h1>404 - Page Not Found</h1>
  <p>Sorry, we couldn't find the page you were looking for.</p>
</body>
</html>

Step 2: Create and Configure Your S3 Bucket

  1. In the AWS Console, navigate to S3.

  2. Click “Create bucket”.

  3. Bucket name: Enter a globally unique name (e.g., sharjils-static-site- followed by random numbers).

  4. AWS Region: Choose a region close to you.

  5. Block Public Access settings: Uncheck the box for “Block all public access”. You must acknowledge that you are making the bucket public; this is necessary for hosting a website.
    🔒 A Quick Note on Security Best Practices: Why are we doing this? In the real world, you should almost always leave “Block all public access” turned ON. It is your most important safeguard against accidental data leaks.
    We are only disabling it here because our specific goal is to host a public website, which by definition must be accessible to everyone on the internet. For any other use case, such as storing private application data, backups, or logs, you would keep this feature enabled and grant access using IAM roles or private access points.

  6. Click “Create bucket”.

[Screenshot: the bucket creation screen with “Block all public access” unchecked]
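If you prefer to script this step, a hedged boto3 equivalent follows. The bucket name and region are placeholders, and disabling Block Public Access is, as noted above, the deliberate exception for website hosting only:

import boto3

region = "ap-south-1"                    # placeholder region
bucket = "sharjils-static-site-12345"    # placeholder; must be globally unique

s3 = boto3.client("s3", region_name=region)

# Create the bucket (regions other than us-east-1 need a LocationConstraint)
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": region},
)

# Consciously disable Block Public Access -- only because this bucket
# will host a public website
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)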

Step 3: Upload Your Website Files

  1. Click on your newly created bucket.

  2. Click the “Upload” button.

  3. Click “Add files” and select the index.html and error.html files you created.

  4. Click “Upload”.
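The console detects content types automatically, but if you ever script this upload, set ContentType yourself so browsers render the pages instead of downloading them. A minimal sketch, with the bucket name as a placeholder:

import boto3

s3 = boto3.client("s3")
bucket = "sharjils-static-site-12345"  # placeholder bucket name

# Upload both pages, telling S3 they are HTML so browsers render them
for page in ("index.html", "error.html"):
    s3.upload_file(page, bucket, page, ExtraArgs={"ContentType": "text/html"})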

Step 4: Attach a Bucket Policy to Make Objects Public

  1. Inside your bucket, go to the “Permissions” tab.

  2. Scroll down to Bucket policy and click “Edit”.

  3. Paste the following JSON policy. Remember to replace YOUR_BUCKET_NAME with your actual bucket name.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
        }
    ]
}

  4. Click “Save changes”.

This policy allows anyone on the internet (Principal: “*”) to read (Action: “s3:GetObject”) any object (Resource: …/*) in your bucket.

[Screenshot: the bucket policy editor with the JSON policy applied]
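The same policy can also be attached programmatically; here is a hedged boto3 sketch (replace the placeholder bucket name with yours):

import json
import boto3

s3 = boto3.client("s3")
bucket = "sharjils-static-site-12345"  # placeholder bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}

# Bucket policies are passed to the API as JSON strings
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))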

Step 5: Enable Static Website Hosting

  1. Inside your bucket, go to the “Properties” tab.

  2. Scroll all the way down to “Static website hosting” and click “Edit”.

  3. Select “Enable”.

  4. In the Index document field, enter index.html.

  5. In the Error document field, enter error.html.

  6. Click “Save changes”.
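As with the earlier steps, this can be scripted too; a minimal boto3 sketch with a placeholder bucket name:

import boto3

s3 = boto3.client("s3")

# Enable website hosting with the index and error documents from Step 1
s3.put_bucket_website(
    Bucket="sharjils-static-site-12345",  # placeholder bucket name
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)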

Step 6: Visit Your New Website!

Go back to the “Static website hosting” section on the Properties tab. AWS now provides you with a Bucket website endpoint URL. Copy this URL, paste it into your browser, and you should see your live website!
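The endpoint usually follows the pattern http://<bucket-name>.s3-website-<region>.amazonaws.com (a few regions use a dot instead of a dash before the region, and website endpoints are HTTP only). Assuming that URL shape, here is a quick Python check that the site is live:

from urllib.request import urlopen

# Assumed endpoint shape -- copy the exact URL from the S3 console instead
url = "http://sharjils-static-site-12345.s3-website-ap-south-1.amazonaws.com"

with urlopen(url) as resp:
    print(resp.status)      # expect 200
    print(resp.read(80))    # the first bytes of index.html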

📌 Conclusion

Amazon S3 is far more than simple storage. It’s a versatile, secure, and incredibly durable service that can power everything from your personal projects to massive enterprise applications.

Today, you learned how to:
✅ Create and manage S3 buckets and objects.
✅ Understand the core security principles like Block Public Access and bucket policies.
✅ Host a fully functional static website available to the entire world.

This is a fundamental skill for any cloud practitioner, and you’ve just mastered it.

🔗 Let’s Connect!

🏷️ Tags:

AWS, S3, CloudStorage, StaticHosting, DevOps, TechBlog, SharjilLearnsCloud

