How to Upload a File to Amazon S3 Using REST API

Achanandhi M
8 min read

Amazon S3 has become the de facto standard for object storage thanks to its low price and its design for high durability (99.999999999%, or "eleven nines"). There is a lot to say about Amazon S3, but in this blog let's look at how to upload a file to S3 using the REST API. Most of you have probably tried the SDK approach with boto3; today we'll cover the different ways to upload a file using the REST API, and we'll walk through a demo as well.

Why Use REST API Instead of SDK?

Anyone reading this blog probably has a question in mind: why use the REST API instead of the SDKs? SDKs are convenient, but there are valid cases for using REST:

  • You’re working with lightweight clients (IoT devices, embedded systems).

  • You need to integrate with systems that only support HTTP.

  • You want fine-grained control over request signing and headers.

  • You want to avoid distributing AWS credentials to clients.

Now that we've seen why the REST API is useful, let's look at the common patterns for building an API to upload files to Amazon S3.

There are three popular ways to upload to S3 using REST API calls:

  1. Presigned URLs with API Gateway

  2. API Gateway as a Proxy

  3. CloudFront with Lambda@Edge

  1. Presigned URLs with API Gateway

Using presigned URLs with API Gateway is a simple and secure way to let clients upload files directly to Amazon S3 without exposing your AWS credentials. Instead of sending the file through your backend, the client first calls an API Gateway endpoint (backed by a Lambda function) to request a presigned URL.

This URL is generated by the AWS SDK with a short expiration time and is tied to a specific file name and bucket. The client can then upload the file directly to S3 using that URL, reducing backend load and keeping uploads fast. Since the presigned URL includes all the necessary authentication details, no extra AWS configuration is needed on the client side, and access automatically expires after the set time.

Let’s see a demo of this approach. For this demo, I am using SAM (Serverless Application Model).

Create an app.py file.

import json
import boto3
import os
from botocore.exceptions import ClientError
from botocore.client import Config 

# Virtual-hosted-style addressing keeps the presigned URL on the bucket's
# regional endpoint; adjust the region to match your deployment
s3_client = boto3.client(
    's3',
    region_name='ap-south-1',
    config=Config(s3={'addressing_style': 'virtual'}),
    endpoint_url='https://s3.ap-south-1.amazonaws.com'
)
bucket_name = os.environ['BUCKET_NAME']

def lambda_handler(event, context):
    try:
        body = json.loads(event["body"])
        file_name = body.get("fileName")

        if not file_name:
            return {
                "statusCode": 400,
                "body": json.dumps({"error": "Missing fileName"})
            }

        # Generate presigned URL for PUT request
        url = s3_client.generate_presigned_url(
            'put_object',
            Params={'Bucket': bucket_name, 'Key': file_name},
            ExpiresIn=900  # URL expires in 15 minutes
        )

        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"uploadUrl": url})
        }

    except ClientError as e:
        return {
            "statusCode": 500,
            "body": json.dumps({"error": str(e)})
        }

What happens here:

  • Input: Client sends { "fileName": "my-upload.txt" }

  • Processing: Lambda asks S3 for a presigned URL valid for 900 seconds (15 minutes).

  • Output: Lambda returns { "uploadUrl": "<presigned_url_here>" }

Then create a template.yaml file

The template.yaml file is the blueprint of your AWS SAM (Serverless Application Model) project. It defines all the resources your application needs, like Lambda functions, API Gateway endpoints, and S3 buckets, in a single place. Instead of manually creating each resource in the AWS console, you describe them in this file, and SAM automatically provisions them for you. This makes your setup consistent, repeatable, and easy to share or deploy in different environments.

AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: SAM template to generate S3 presigned URLs for file upload

Globals:
  Function:
    Timeout: 10
    Runtime: python3.9
    MemorySize: 128

Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: !Sub "${AWS::StackName}-uploads"
      CorsConfiguration:
        CorsRules:
          - AllowedHeaders: ["*"]
            AllowedMethods: ["PUT", "GET"]
            AllowedOrigins: ["*"]
            MaxAge: 3000

  UploadRequestFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler
      Runtime: python3.9
      Environment:
        Variables:
          BUCKET_NAME: !Ref S3Bucket
      Policies:
        - S3WritePolicy:
            BucketName: !Ref S3Bucket
      Events:
        ApiEndpoint:
          Type: Api
          Properties:
            Path: /get-presigned-url
            Method: post

Outputs:
  ApiUrl:
    Description: API Gateway endpoint
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/get-presigned-url"
  BucketName:
    Description: The S3 bucket for uploads
    Value: !Ref S3Bucket

First build:

sam build --template-file template.yaml

Then deploy:

sam deploy --guided

Note: Before trying this, configure your AWS credentials with the command below:

aws configure

Output of SAM build and deploy


Wait a few seconds; behind the scenes, SAM generates a CloudFormation stack that creates the resources.


Once the resources are created, at the end, in the output, you will get the API Gateway endpoint. Copy it; we will use this endpoint to upload the objects to S3.

Let’s assume your API Gateway endpoint is:

https://abc123.execute-api.ap-south-1.amazonaws.com/Prod/get-presigned-url

Step 1 : Get a presigned URL

curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"fileName": "my-upload.txt"}' \
  https://abc123.execute-api.ap-south-1.amazonaws.com/Prod/get-presigned-url

Output example:

{
  "uploadUrl": "https://your-bucket.s3.ap-south-1.amazonaws.com/my-upload.txt?...AWS query params..."
}

Step 2: Upload your file to S3

curl -X PUT \
  -T ./local-file.txt \
  "https://your-bucket.s3.ap-south-1.amazonaws.com/my-upload.txt?...AWS query params..."

If successful:

# No output, just HTTP 200

Output of the S3 Object process:


Once you’re done, go to the S3 dashboard and see the result.


Note: In the demo, I used ap-south-1 as the region; change it if you like. Also, the demo above covers sending one particular file. To send a new file, you have to repeat both steps: first get a signed URL for that file, then send the upload request with the file. A presigned URL is tied to the key it was generated for, so reusing it for a different file won't work (it didn't in my case).
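To put the two curl steps together in one place, here is a small client sketch using only the Python standard library. The API_URL below is the placeholder endpoint from this demo; substitute the ApiUrl from your own stack's outputs.

```python
import json
import urllib.request

# Placeholder endpoint from this demo; replace with your own stack's ApiUrl
API_URL = "https://abc123.execute-api.ap-south-1.amazonaws.com/Prod/get-presigned-url"

def get_presigned_url(api_url, file_name):
    """Step 1: ask the Lambda for a presigned PUT URL for this key."""
    payload = json.dumps({"fileName": file_name}).encode()
    req = urllib.request.Request(
        api_url,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["uploadUrl"]

def upload_file(presigned_url, local_path):
    """Step 2: PUT the file bytes directly to S3 via the presigned URL."""
    with open(local_path, "rb") as f:
        data = f.read()
    req = urllib.request.Request(presigned_url, data=data, method="PUT")
    with urllib.request.urlopen(req) as resp:
        return resp.status  # 200 on success
```

Usage mirrors the curl flow: `url = get_presigned_url(API_URL, "my-upload.txt")`, then `upload_file(url, "./local-file.txt")`.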

How to delete the resources:

Also, please don't forget to delete the resources you have created. It's always good practice to remove resources once you no longer need them.

aws cloudformation delete-stack --stack-name <your-stack-name>

If you don’t know the stack name, go to the CloudFormation dashboard, get the stack name, and delete it. All the resources will be deleted.

That was just one approach; let's look at the other two as well.

  2. API Gateway as a Proxy

Using API Gateway as a proxy means letting it pass requests directly to another service like AWS Lambda, an S3 bucket, or even an external API without having to manually configure each endpoint. Instead of defining every route and method in detail, API Gateway forwards the entire request to your backend, which then decides how to handle it. This approach makes APIs more flexible, easier to maintain, and reduces the amount of configuration needed in API Gateway itself.
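In this pattern, the client sends the file bytes to API Gateway itself and the integration writes them to S3. As a rough sketch (the /upload/{key} resource, the base URL, and the PUT-to-PutObject integration are assumptions for illustration, not part of the demo above), a client call could look like:

```python
import urllib.request

# Hypothetical setup: an API Gateway resource /upload/{key} whose PUT method
# is integrated with the S3 PutObject action; the base URL is a placeholder
API_BASE = "https://abc123.execute-api.ap-south-1.amazonaws.com/Prod"

def upload_via_proxy(key, local_path):
    """Send the file body through API Gateway; the integration writes it to S3."""
    with open(local_path, "rb") as f:
        body = f.read()
    req = urllib.request.Request(
        f"{API_BASE}/upload/{key}",
        data=body,
        headers={"Content-Type": "application/octet-stream"},
        method="PUT",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```

Keep in mind that API Gateway caps request payloads at 10 MB, so this pattern suits small files; for anything larger, presigned URLs are the better fit.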

  3. CloudFront with Lambda@Edge

CloudFront with Lambda@Edge lets you run custom code closer to your users at AWS’s edge locations without managing servers. This means you can modify requests and responses on the fly, personalize content, handle authentication, or rewrite URLs before they even reach your backend. Since the code runs at the edge, it reduces latency and delivers a faster, more tailored experience to users across the globe.
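As a hedged sketch of what such edge code might look like (the /upload/ and /incoming/ paths are made up for illustration), a viewer-request handler that rewrites upload URIs before they reach the S3 origin could be:

```python
# Hypothetical Lambda@Edge viewer-request handler; the /upload/ and
# /incoming/ paths are illustrative, not from the demo above
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    # Rewrite /upload/<name> to a different prefix before CloudFront
    # forwards the request to the S3 origin
    if request["uri"].startswith("/upload/"):
        name = request["uri"].removeprefix("/upload/")
        request["uri"] = f"/incoming/{name}"
    return request
```

The same hook point can be used for authentication checks or header manipulation; whatever the handler returns is what CloudFront sends on to the origin.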

How to Test Your APIs Without Writing Any Code


In this blog, we saw how to send objects to S3 using the REST API. But how do you test your APIs in general? How do you check whether they are working correctly? We manually write test cases and verify them with assertions, right?

That's fine for small projects, but what if there are many APIs? How do you test them all? And what about the edge cases?

Why worry when Keploy is here for API testing? Keploy gives you a platform to create API test cases without writing any code or interacting with any SDKs. You heard that right: the Keploy API Testing platform generates test cases that work for your application, including edge-case scenarios.

Curious about how it works? All you have to provide is:

  1. cURL commands or a Postman collection

  2. An OpenAPI schema

  3. Your application URL (localhost also works)

Keploy will automatically create your API test cases and verify the test cases by running them against your application. In the end, you’ll have test cases that work for your application, and you can also run API testing in your CI/CD pipeline.

So, why wait? Go to app.keploy.io to create your test cases. Trust me, you will definitely like it.

Conclusion

Working with AWS services like S3, API Gateway, Lambda, and CloudFront opens up a lot of possibilities for building secure, fast, and scalable applications. Whether it’s using pre-signed URLs for direct uploads, API Gateway as a proxy, or running code at the edge with Lambda@Edge, each approach has its own advantages. The key is to choose the right tool for your specific use case so you can keep your architecture simple, cost-effective, and easy to maintain.

FAQs

1. What is a pre-signed URL in AWS S3?

It’s a temporary link that allows you to upload or download files from S3 without sharing your AWS credentials.

2. Why should I use API Gateway with Lambda?

It lets you create Serverless APIs without worrying about managing servers, scaling, or infrastructure.

3. Can I use pre-signed URLs for large files?

Yes, but for very large files you may want to use multipart uploads to avoid timeouts.

4. What’s the benefit of CloudFront with Lambda@Edge?

It lets you customize content and behavior at edge locations, reducing latency for users around the world.

5. Do I always need a template.yaml in AWS projects?

If you're using AWS SAM or CloudFormation, yes: it defines all your resources and makes deployment easier.
