2. AWS S3 Basic Features

Versioning

In this lesson, we dive into Amazon S3 versioning—a powerful feature that helps you recover from accidental deletes or overwrites. By default, new S3 buckets have versioning disabled, which means:

  • Deleting an object (e.g., file1.txt) removes it permanently.

  • Uploading a new object with the same key (e.g., file1.txt) overwrites the existing object, making any previous data unrecoverable.

Enabling versioning lets you retain, retrieve, and restore every version of an object stored in your bucket.

The image illustrates the concept of versioning with a bucket icon and three folder icons, each with a circular arrow, suggesting updates or changes.

Bucket Versioning States

You can configure versioning at the bucket level. An S3 bucket exists in one of three states:

State | Description
Unversioned | Versioning is disabled (default). New uploads overwrite existing objects without version IDs.
Enabled | All new and updated objects receive unique version IDs.
Suspended | Existing versions stay intact; new uploads behave like an unversioned bucket (null version ID).

Once you enable versioning, you can never fully turn it off—only suspend it. Suspending does not delete prior versions; it simply stops assigning new version IDs.

Enabling Versioning and Managing Object Version IDs

When versioning is Enabled:

  1. The first upload of file1.txt might get version ID 1.

  2. Re-uploading the same key creates version ID 2, preserving version 1.

  3. A third upload assigns version ID 3, and so on.

The most recent upload is the current or latest version. A GET request without versionId returns this version.
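
A GET request that does include a versionId returns that specific version. A minimal AWS CLI sketch (the version ID here is a placeholder—real S3 version IDs are long opaque strings, not simple integers):

aws s3api get-object \
  --bucket my-bucket \
  --key file1.txt \
  --version-id "<version-1-id>" \
  file1-v1.txt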

The image explains how versioning works, showing a file with multiple version IDs and a table listing the file's name, type, version ID, and last modified date.

Enabling Versioning via Console and CLI

Console:

  1. Open the S3 console.

  2. Select your bucket → Properties → Bucket Versioning → Enable → Save.

CLI:

aws s3api put-bucket-versioning \
  --bucket my-bucket \
  --versioning-configuration Status=Enabled
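
You can confirm the change with the corresponding read call; a bucket that has never had versioning enabled returns an empty response, while an enabled bucket returns a Status of Enabled:

aws s3api get-bucket-versioning --bucket my-bucket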

Note

A GET or LIST operation on an unversioned bucket always shows VersionId: null.

Delete Markers

With versioning enabled, deleting an object without specifying a version ID does not remove its data. Instead, S3 inserts a delete marker, which becomes the current version and hides previous versions.

The image illustrates the concept of deleting file versions, showing a "Delete Marker" and two versions of a file named "file1.txt" with different version IDs.

  • To undelete, remove the delete marker; the next latest version immediately becomes current.

  • To remove a specific version (e.g., version 2 of file1.txt), delete that version ID directly—other versions remain intact.
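
Both operations map to the same CLI call with different version IDs, as in this sketch (the IDs are placeholders):

# Undelete: permanently remove the delete marker itself
aws s3api delete-object \
  --bucket my-bucket \
  --key file1.txt \
  --version-id "<delete-marker-version-id>"

# Permanently remove one specific version, leaving the others intact
aws s3api delete-object \
  --bucket my-bucket \
  --key file1.txt \
  --version-id "<version-2-id>"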

Pricing Considerations

Every version of an object counts towards your storage usage. You pay for the sum of all versions:

Version | Size
Version 1 of file1.txt | 10 GB
Version 2 of file1.txt | 15 GB
Total billable | 25 GB

The image illustrates versioning prices, showing two versions of a file named "file1.txt" with different sizes, totaling 25 GB.

Warning

Enabling versioning can significantly increase your storage costs. Implement Lifecycle rules to expire or transition older versions to cheaper storage classes.

Suspending Versioning

When you suspend versioning on a bucket:

  • Existing object versions remain stored.

  • New uploads receive a null version ID and overwrite objects as in an unversioned bucket.

S3 never purges prior versions automatically. To remove old versions, you must delete them manually or configure a Lifecycle policy.
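
Suspending uses the same API call as enabling, with a different status—for example:

aws s3api put-bucket-versioning \
  --bucket my-bucket \
  --versioning-configuration Status=Suspended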

MFA Delete

Multi-Factor Authentication (MFA) Delete adds a security layer for versioning-related operations:

  • Changing the bucket’s versioning state (Enabled/Suspended) requires MFA.

  • Permanently deleting object versions also requires MFA.

MFA Delete cannot be enabled from the console; you must configure it via the AWS CLI (or the S3 API).
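
As a sketch, enabling it looks like the call below. The --mfa argument is the MFA device serial (ARN) followed by a current code; the account ID and device name shown are placeholders:

aws s3api put-bucket-versioning \
  --bucket my-bucket \
  --versioning-configuration Status=Enabled,MFADelete=Enabled \
  --mfa "arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456"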

The image explains Multi-Factor Authentication (MFA) Delete, highlighting that MFA is required to change the versioning state of a bucket and can only be enabled using CLI.

Demo Versioning

In this tutorial, you’ll explore how Amazon S3’s versioning feature affects object uploads, overwrites, and deletions. We’ll walk through three key states—versioning disabled, enabled, and suspended—and demonstrate how you can recover or permanently remove object versions.

Versioning Disabled

With versioning disabled, any object you delete is permanently removed and cannot be recovered. Overwrites simply replace the existing object.

  1. Create a new S3 bucket (e.g., kk-versioning-demo—bucket names must be lowercase and globally unique), leaving Bucket Versioning turned off and all other settings at their defaults.

The image shows an AWS S3 console screen with settings for blocking public access and bucket versioning options. It includes a notification about upcoming permission changes related to public access settings.

  2. Locally create a file file1.txt with:

     this is version 1
    

The image shows an Amazon S3 bucket interface with a Visual Studio Code window open, displaying a text file containing the text "this is version 1".

  3. Upload file1.txt to your bucket (all defaults).

The image shows the AWS S3 Management Console with an upload interface for adding files to a bucket named "kk-versioning-demo." A file named "file1.txt" is ready to be uploaded.

  4. Open file1.txt in the console to confirm it shows:

     this is version 1
    
  5. Permanent Delete when Disabled
    Select file1.txt → Delete → type permanently delete to confirm.

Warning

Deleting objects in a bucket with versioning disabled removes them forever—there is no undelete or version history.

The image shows an AWS S3 interface for deleting objects, specifically a file named "file1.txt" with details like type, last modified date, and size. There's a prompt to confirm permanent deletion by typing "permanently delete."

  6. Re-upload the same file1.txt (version 1) to restore it.

The image shows an Amazon S3 console interface displaying details of a file named "file1.txt" within a bucket. It includes information such as the file's size, type, last modified date, and S3 URI, with a note indicating that bucket versioning is disabled.

  7. Overwrite when Disabled
    Edit file1.txt to:

     this is version 2
    

    Upload using the same key. Version 1 is lost permanently because versioning is disabled.


Enabling Versioning

Enable versioning to retain every object change with a unique Version ID. You can recover or permanently delete specific versions.

  1. In the bucket Properties, click Edit under Bucket Versioning, select Enable, and Save.

The image shows an Amazon S3 bucket properties page for "kk-versioning-demo," displaying details about bucket versioning, tags, and default encryption settings. The bucket versioning is currently disabled, and there are no tags associated with the resource.

  2. Upload file1.txt with the original content:

     this is version 1
    
  3. In the Objects view, check Show versions to reveal version history. Each version entry displays a unique Version ID.

The image shows an Amazon S3 bucket interface with a file named "file1.txt" listed, displaying details like version ID, last modified date, size, and storage class.

  4. Confirm version 1 content:

     this is version 1
    

Adding More Versions

  • Version 2: Update locally to this is version 2 and upload again.

  • Version 3: Change to this is version 3 and upload once more.

Each upload creates a new version entry. You can open each one to verify content and timestamps.

The image shows an Amazon S3 console displaying details of a file named "file1.txt" including its properties, such as size, type, and last modified date. It also includes information about bucket properties and management configurations.


Delete with Versioning Enabled

Deleting an object now places a delete marker rather than removing prior versions.

  1. Select file1.txtDelete → type delete (no “permanently delete” prompt).

The image shows an Amazon S3 interface for deleting objects, specifically a file named "file1.txt." It includes options to confirm deletion by typing "delete" in a text input field.

  2. The object disappears, but Show versions reveals:

    • A new Delete marker

    • All three prior versions

The image shows an AWS S3 interface indicating that an object has been successfully deleted, with no objects failing to delete.

  3. Restoring: Remove the delete marker by selecting it and choosing Delete → type permanently delete.

The image shows an Amazon S3 interface for deleting objects, specifically a file named "file1.txt," with a prompt to confirm permanent deletion by typing "permanently delete."

Permanently Deleting Specific Versions

You can delete individual versions without affecting others. Select a version (e.g., version 2) → Delete → type permanently delete. Only that version is removed.
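
Before removing anything, you can list every version and delete marker for a key from the CLI—a quick sketch using the demo bucket name:

aws s3api list-object-versions \
  --bucket kk-versioning-demo \
  --prefix file1.txt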


Suspending Versioning

Once turned on, you can only suspend versioning, not disable it. Suspended state retains old versions but assigns null as the Version ID for new uploads.

  1. In Properties → Bucket Versioning, click Suspend and Save.

The image shows an Amazon S3 interface for editing bucket versioning settings, with options to suspend or enable versioning and a warning about the impact of changes. There is also a section for multi-factor authentication (MFA) delete, which is currently disabled.

  2. Existing versions remain accessible. New uploads use a null Version ID.

The image shows an Amazon S3 bucket interface with a list of text files, their version IDs, modification dates, sizes, and storage classes.

  3. Upload version 4 (this is version 4) and version 5 (this is version 5). Both appear with null Version IDs.

The image shows an Amazon S3 bucket interface with a list of objects and a Visual Studio Code window displaying a text file with the content "this is version 5."

New Objects under Suspension

  1. Create file2.txt with this is version 1 and upload—it gets null ID.

  2. Update to this is version 2—the previous null version is replaced.

The image shows an Amazon S3 Management Console screen where a file named "file2.txt" is being prepared for upload to a bucket named "kk-versioning-demo." The file is 17.0 bytes in size and is of type "text/plain."


MFA Delete

In the Bucket Versioning settings, you’ll see MFA Delete. Once enabled (via the CLI or an SDK), multi-factor authentication is required to change the bucket’s versioning state or permanently delete object versions. It cannot be turned on in the console.

The image shows an AWS S3 interface for editing bucket versioning settings, with options to suspend or enable versioning and a section for multi-factor authentication (MFA) delete. There are buttons to cancel or save changes.


Versioning States Overview

Versioning State | New Upload Behavior | Recoverability
Disabled | Overwrites existing objects | No history, permanent deletes
Enabled | New versions with unique IDs | All versions retained; delete markers available
Suspended | null Version ID on uploads | Existing versions kept; new uploads overwrite

Cleanup

To tear down:

  1. In Objects, enable Show versions.

  2. Select all versions and markers → Delete → type permanently delete.

  3. Delete the bucket.

The image shows an Amazon S3 interface for deleting objects, listing files with details like version ID, type, last modified date, and size. There's a prompt to confirm deletion by typing "permanently delete."

Lifecycle Policies

Amazon S3 lifecycle policies help you optimize storage costs by automatically transitioning objects between storage classes or expiring them after a specified time. Define your rules once, and S3 handles the rest—no manual cleanup required.

How Lifecycle Policies Work

When you upload an object (for example, file1.txt) using S3 Standard, its access pattern may change over time. You can configure a lifecycle policy such as:

  • After 30 days: transition to S3 Standard-IA (Infrequent Access).

  • After 90 days: archive to S3 Glacier Deep Archive.

  • After 365 days: delete the object.

Lifecycle policies can target:

  • An entire bucket.

  • A subset of objects defined by prefix or tag.

  • Specific versions (if you have versioning enabled).

Note

Lifecycle rules only move objects “downhill,” from a higher-cost class to a lower-cost class.
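
Expressed in the JSON format the S3 API accepts, the 30/90/365-day policy above might look like this sketch (STANDARD_IA and DEEP_ARCHIVE are the API names for those storage classes):

{
  "Rules": [
    {
      "ID": "tier-then-expire",
      "Filter": { "Prefix": "" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "DEEP_ARCHIVE" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}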

The image is a flowchart illustrating AWS S3 lifecycle policies, showing transitions between different storage classes like S3 Standard, S3 Intelligent-Tiering, and S3 Glacier Deep Archive.

Storage Class Transition Rules

Not every storage class can transition directly to every other. The following table summarizes permitted transitions:

Source Class | Allowed Transitions
S3 Standard | Standard-IA, Intelligent-Tiering, One Zone-IA, Glacier Instant Retrieval, Glacier Flexible Retrieval, Glacier Deep Archive
S3 Intelligent-Tiering | Glacier Instant Retrieval, Glacier Flexible Retrieval, Glacier Deep Archive
S3 One Zone-IA | Glacier Flexible Retrieval, Glacier Deep Archive

Additional Constraints

When defining lifecycle rules, observe these key constraints:

  • Minimum object size
    Objects must be ≥ 128 KB to transition from Standard or Standard-IA to Intelligent-Tiering or Glacier Instant Retrieval.

  • Minimum storage duration

    • Standard → Standard-IA or One Zone-IA: 30 days in the source class.

    • After moving to Standard-IA or One Zone-IA, wait another 30 days before transitioning to any Glacier class.

Warning

Violating minimum size or duration requirements will cause your lifecycle rule to skip transitions. Always verify object metadata before applying a rule.

For a full list of constraints and examples, see the official AWS documentation.

The image is a flowchart showing the transition of AWS S3 storage classes from "S3 Standard" to various other classes like "S3 Standard-IA," "S3 One Zone-IA," and different "S3 Glacier" options, with a 30-day transition period.

Demo Lifecycle Policies

In this walkthrough, you’ll learn how to automate object transitions and expirations in an Amazon S3 bucket using lifecycle policies. We’ll cover:

  • Creating a demo bucket

  • Uploading sample objects

  • Defining lifecycle rules to transition and expire objects across storage classes

Why Use Lifecycle Policies

Lifecycle policies help optimize storage costs by automatically moving objects to lower-cost classes (e.g., Standard-IA, Glacier) or deleting them when they’re no longer needed.

Lifecycle Storage Classes Overview

Storage Class | Description | Typical Use Case
S3 Standard | Frequent access, low latency | Active datasets
S3 Standard-IA | Infrequent access, lower cost | Backups and long-term storage
S3 Glacier Instant Retrieval | Millisecond retrieval from Glacier | Archives with occasional retrieval
S3 Glacier Deep Archive | Lowest cost, hours-long retrieval time | Compliance archives, long-term retention

1. Create a Demo Bucket and Upload Objects

  1. Open the AWS Management Console and navigate to S3.

  2. Click Create bucket, accept all defaults, and finish the wizard.

  3. In your new bucket, click Upload, then drag and drop a few test files and folders. Any sample data will do.

The image shows an AWS S3 upload interface on the left and a Windows File Explorer window on the right, displaying a list of files and folders.

  4. Open Properties for the bucket and confirm that the Storage class of your objects is Standard.

2. Configure Lifecycle Rules

Navigate to the Management tab of your bucket and click Create lifecycle rule.

The image shows an AWS S3 Management Console screen where lifecycle rules for objects are being configured, including options for filtering by prefix and object size, and setting lifecycle rule actions.

You can define multiple rules to target different prefixes (logs/, media/) or object sizes.

2.1 Rule 1: lifecycle-logs

  1. Rule name: lifecycle-logs

  2. Under Scope, select Limit the scope to specific prefixes or tags and enter:

    • Prefix: logs/
  3. (Optional) Specify Minimum size or Maximum size filters.

Current Version Transitions

  • After 30 days: transition to S3 Standard-IA

  • After 60 days: transition to S3 Glacier Instant Retrieval

The image shows an AWS S3 Management Console screen where lifecycle rules for transitioning object storage classes are being configured. It includes options for moving current versions of objects and setting transition actions after a specified number of days.

Non-Current Version Transitions

  • After 30 days: transition to S3 Standard-IA

  • After 90 days: transition to S3 Glacier Deep Archive

The image shows an AWS S3 management console screen where a lifecycle rule is being configured, including transitions to different storage classes and expiration actions for current and noncurrent object versions.

Click Create to save lifecycle-logs.
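
If you prefer to script this rule rather than click through the console, the equivalent API call is put-bucket-lifecycle-configuration. A sketch, assuming the rule is saved locally as lifecycle-logs.json and your bucket name is substituted (GLACIER_IR is the API name for Glacier Instant Retrieval):

{
  "Rules": [
    {
      "ID": "lifecycle-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 60, "StorageClass": "GLACIER_IR" }
      ],
      "NoncurrentVersionTransitions": [
        { "NoncurrentDays": 30, "StorageClass": "STANDARD_IA" },
        { "NoncurrentDays": 90, "StorageClass": "DEEP_ARCHIVE" }
      ]
    }
  ]
}

aws s3api put-bucket-lifecycle-configuration \
  --bucket <your-bucket> \
  --lifecycle-configuration file://lifecycle-logs.json

Note that this call replaces the bucket’s entire lifecycle configuration, so keep all rules (here, lifecycle-logs and lifecycle-media) in a single document.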

2.2 Rule 2: lifecycle-media

  1. Rule name: lifecycle-media

  2. Scope → Prefix: media/

  3. Current version transitions:

    • After 60 days: transition to S3 Standard-IA
  4. Non-current version actions:

    • After 30 days: transition to S3 Standard-IA

    • After 365 days: Expire non-current versions

Click Create to save lifecycle-media.


3. Review Your Lifecycle Configuration

Once both rules are enabled, the Lifecycle configuration page displays all active rules:

The image shows the Amazon S3 Management Console with a "Lifecycle configuration" page, displaying two lifecycle rules named "lifecycle-logs" and "lifecycle-media," both enabled.

Propagation Delay

It can take up to 24 hours for lifecycle policies to appear in the billing report and start transitions.

Static Website

In this guide, you'll learn how to serve a fully static website—HTML, CSS, JavaScript, and media—directly from an Amazon S3 bucket. We’ll cover how static hosting works, pricing considerations, and steps to configure a custom domain.

Note

Static website hosting on Amazon S3 supports only static files. If your site requires server-side processing, consider integrating Amazon EC2, Amazon ECS, or AWS Lambda.

Understanding Static Websites

When a user enters a URL, their browser sends an HTTP GET request to a web server, which responds with files that the browser renders. Common file types include:

File Type | Purpose
HTML | Defines the page structure and content.
CSS | Styles layout, fonts, colors, and spacing.
JavaScript | Adds interactivity and dynamic behavior on the client side.
Images & Media | Provides visual and audio assets for the page.

By uploading these files as objects in an S3 bucket and enabling static website hosting, you can serve them directly over HTTP.

The image is an infographic about static hosting, explaining that it allows access to website files through HTTP and provides a URL via S3. It notes that it's used only for static websites and requires a specific format for domain customization.

How Static Hosting Works

  1. Upload your site assets (HTML, CSS, JS, images) to an S3 bucket.

  2. In the bucket Properties, enable Static website hosting.

  3. Specify your index.html (and optional error.html) documents.

  4. Use the assigned S3 website endpoint to deliver your content.
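
If you’d rather script steps 2–3, the high-level CLI exposes the same settings in one command—a sketch, with the bucket name as a placeholder:

aws s3 website s3://my-bucket/ \
  --index-document index.html \
  --error-document error.html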

Pricing Overview

Static hosting on S3 involves standard storage and data transfer fees, plus a small request charge. For example, on S3 Standard, GET requests cost $0.0004 per 1,000 requests.

Cost Component | Pricing (S3 Standard)
Storage | Pay per GB stored per month
Data Transfer (Out) | Pay per GB transferred out
HTTP GET Requests | $0.0004 per 1,000 requests

The image shows a pricing table for different types of data requests and retrievals, including S3 Standard and S3 Intelligent-Tiering, with associated costs per 1,000 requests and per GB.

Factor in both storage and request fees when estimating your total hosting cost.

Accessing Your Static Site

After enabling static website hosting, S3 assigns an endpoint in this format:

http://bucketname.s3-website-<region-name>.amazonaws.com

Point your users to this URL to serve your static site directly from S3.

Using a Custom Domain

To replace the default S3 URL with your own domain, configure DNS using Amazon Route 53 or another provider. Your bucket name must exactly match the domain you wish to use. For example, to serve http://bestcars.com, name the bucket bestcars.com and, in Route 53, create an Alias record (or CNAME) that maps bestcars.com to your S3 website endpoint.

Warning

Your S3 bucket name must match your custom domain (example.com). If they differ, DNS routing will fail and your site will be inaccessible.

The image illustrates a process involving a custom domain name, showing a user, a Route 53 icon, a URL (http://bestcars.com), and a server.

Demo Static Website

In this tutorial, you’ll learn how to deploy a simple static website using Amazon S3. We’ll cover:

  • Project file structure

  • Configuring S3 for static hosting

  • Setting public access and custom error pages

  • Testing your site endpoint

This guide is ideal for developers and DevOps engineers looking to serve HTML, CSS, and images directly from S3.


Project Structure

Your local directory (static-demo/) contains:

  • index.html – Main gallery page

  • index.css – Layout and styling rules

  • 404.html – Custom error page

  • images/ – JPEG photos used in the gallery

The image shows an AWS Management Console with a Windows File Explorer window open, displaying a folder named "static-demo" containing HTML files and an "images" folder.


index.html

This HTML file defines the page structure and references your CSS and images:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta http-equiv="X-UA-Compatible" content="IE=edge" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <link rel="stylesheet" href="index.css" />
  <title>Food Gallery</title>
</head>
<body>
  <div class="container">
    <div class="images">
      <!-- Ten food images displayed in a grid -->
      <div class="image"><img src="images/food1.jpg" alt="Food 1" /></div>
      <div class="image"><img src="images/food2.jpg" alt="Food 2" /></div>
      <div class="image"><img src="images/food3.jpg" alt="Food 3" /></div>
      <div class="image"><img src="images/food4.jpg" alt="Food 4" /></div>
      <div class="image"><img src="images/food5.jpg" alt="Food 5" /></div>
      <div class="image"><img src="images/food6.jpg" alt="Food 6" /></div>
      <div class="image"><img src="images/food7.jpg" alt="Food 7" /></div>
      <div class="image"><img src="images/food8.jpg" alt="Food 8" /></div>
      <div class="image"><img src="images/food9.jpg" alt="Food 9" /></div>
      <div class="image"><img src="images/food10.jpg" alt="Food 10" /></div>
    </div>
  </div>
</body>
</html>

Your accompanying index.css defines a responsive grid layout, background colors, and typography. Place all JPEGs in the images/ folder.


404.html

When users request a missing resource, S3 serves this custom error page:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8" />
  <meta http-equiv="X-UA-Compatible" content="IE=edge" />
  <meta name="viewport" content="width=device-width, initial-scale=1.0" />
  <link rel="stylesheet" href="index.css" />
  <title>Page Not Found</title>
</head>
<body>
  <div class="container">
    <h1 class="head-404">404</h1>
    <h2 class="text-404">Page not found</h2>
  </div>
</body>
</html>

Note

You can preview both pages locally by opening index.html in your browser. The CSS grid will display your food gallery.


1. Create an S3 Bucket

  1. Open the Amazon S3 console and click Create bucket.

  2. Enter a unique bucket name (e.g., kk-static-demo) and select your region.

  3. Keep default settings and click Create bucket.

The image shows the AWS S3 console interface for creating a new bucket, with fields for bucket name, region selection, and object ownership settings.

Once created, locate your bucket in the list:

The image shows the Amazon S3 Management Console with a notification of a successfully created bucket named "kk-static-demo." The console displays the bucket's details, including its creation date.


2. Upload Website Files

  1. Click your bucket name to open it.

  2. Drag & drop index.html, index.css, 404.html, and the images/ folder into the console.

  3. Choose Upload and confirm.

The image shows an AWS S3 Management Console screen with a successful upload status for 13 files totaling 2.5 MB. The files listed include images and an HTML file, all marked as succeeded.


3. Enable Static Website Hosting

  1. In your bucket, navigate to PropertiesStatic website hostingEdit.

  2. Select Enable.

  3. For Index document, enter index.html.

  4. For Error document, enter 404.html.

  5. Save changes.

The image shows an Amazon Web Services (AWS) S3 bucket configuration page for setting up a static website. It includes options for enabling website hosting, specifying index and error documents, and adding redirection rules.

Review the bucket’s static hosting properties:

The image shows an Amazon S3 bucket properties page for "kk-static-demo," displaying details like bucket versioning, tags, and default encryption settings. The bucket is located in the US East (N. Virginia) region.

You should see a Bucket website endpoint listed:

The image shows an Amazon S3 console page with settings for a bucket, including options for transfer acceleration, object lock, requester pays, and static website hosting. Static website hosting is enabled, and a bucket website endpoint is provided.

At this point, clicking the endpoint yields Access Denied since the bucket is private.


4. Configure Public Access

4.1 Disable Block Public Access

  1. Go to PermissionsBlock public access (bucket settings)Edit.

  2. Uncheck Block all public access and confirm.

The image shows an AWS S3 bucket permissions settings page, where public access is blocked. The "Block all public access" option is turned on, and there is a note about the bucket policy.

4.2 Add a Bucket Policy

Still under Permissions, select Bucket policy and paste the following JSON (replace the ARN with your bucket name):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPublic",
      "Principal": "*",
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::kk-static-demo/*"]
    }
  ]
}

Save the policy. Alternatively, use the Add a resource UI:

The image shows a web interface for adding a resource in AWS, specifically for an S3 service, with a dropdown menu for selecting the resource type.
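
The same permission changes can also be scripted—a sketch, assuming the policy above is saved locally as policy.json:

# Remove the bucket-level public access block
aws s3api delete-public-access-block --bucket kk-static-demo

# Attach the public-read bucket policy
aws s3api put-bucket-policy \
  --bucket kk-static-demo \
  --policy file://policy.json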

Your bucket will now appear publicly accessible:

The image shows an Amazon S3 bucket permissions page, indicating that the bucket "kk-static-demo" is publicly accessible, with options to block public access and a section displaying the bucket policy in JSON format.

Warning

Making your bucket public exposes all objects. Ensure only intended files are uploaded.


5. Test Your Static Website

Return to PropertiesStatic website hosting and click the endpoint link. You should see the food gallery home page.

You do not need to append /index.html—the index document is served automatically.

The image shows an Amazon S3 bucket interface with a list of objects, including HTML and CSS files, and a folder named "images." The bucket is labeled as "publicly accessible."

Resource | URL Pattern
Home page | http://kk-static-demo.s3-website-<region>.amazonaws.com/
Explicit index | http://kk-static-demo.s3-website-<region>.amazonaws.com/index.html
Specific image | http://kk-static-demo.s3-website-<region>.amazonaws.com/images/food1.jpg
Missing resource (404 page) | http://kk-static-demo.s3-website-<region>.amazonaws.com/does-not-exist
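
You can spot-check both the index and the error document from a terminal—a quick sketch, substituting your region:

curl -I http://kk-static-demo.s3-website-<region>.amazonaws.com/                 # expect 200, serves index.html
curl -I http://kk-static-demo.s3-website-<region>.amazonaws.com/does-not-exist   # expect 404, serves 404.html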
