I Built a Serverless File Upload System with AWS – Here’s What Broke (and How I Fixed It)

Ever try to upload a file to AWS from your browser and get slapped with a cryptic CORS policy error? Welcome to the club. It's where dreams of clean serverless architectures go to die, at least temporarily, until you figure out what AWS actually wants from you.
I recently built a file upload system using nothing but HTML on the frontend and a mix of AWS Lambda, API Gateway, and S3 on the backend. No EC2, no Express.js, no backend servers. Just clean, scalable, pay-as-you-go cloud. In theory.
In reality? I learned more from what broke than what worked.
Let’s walk through the journey, from the first bucket to the final “upload successful” ping from S3.
The Vision: Upload from Browser to S3, No Servers Allowed
Here was the goal: allow users to upload files, whether images, PDFs, or whatever else, from a static HTML page directly into an S3 bucket. Behind the scenes, the file would pass through an AWS API Gateway endpoint, get processed by a Lambda function, and land in my bucket. Neat and clean.
Think of it like a mailroom. The user hands a package (the file) to the receptionist (API Gateway), who gives it to the back office (Lambda), who then puts it on a shelf (S3). Sounds simple, right?
Building the Pipeline
First came the S3 bucket. I created a bucket called file-storage-annas in us-east-1. Nothing fancy with permissions yet, just the destination for every upload.
The frontend was a basic HTML page. One file input, one “Upload” button, and some JavaScript to do the magic. It read the file as Base64, posted it to the API Gateway endpoint, and added headers like Content-Type and a custom x-file-name.
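The upload script itself was only a few lines. Here's roughly what it looked like (a minimal sketch: the endpoint URL is a placeholder, the element IDs are invented for the example, and error handling is trimmed):

```javascript
// Placeholder endpoint URL; the real one comes from the API Gateway console.
const API_URL = 'https://your-api-id.execute-api.us-east-1.amazonaws.com/upload';

document.getElementById('uploadBtn').addEventListener('click', () => {
  const file = document.getElementById('fileInput').files[0];
  if (!file) return;

  const reader = new FileReader();
  reader.onload = async () => {
    // reader.result is a data URL ("data:image/png;base64,...."), so keep only the Base64 part.
    const base64 = reader.result.split(',')[1];

    const res = await fetch(API_URL, {
      method: 'POST',
      headers: {
        'Content-Type': file.type,   // the file's MIME type
        'x-file-name': file.name,    // custom header the Lambda reads
      },
      body: base64,
    });
    console.log('Upload status:', res.status);
  };
  reader.readAsDataURL(file);
});
```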
Next was the Lambda function. This was the engine. It handled the CORS preflight (OPTIONS), extracted the x-file-name and file content, decoded the Base64, and called putObject using the AWS SDK to drop the file into S3.
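Here's a stripped-down sketch of that handler, assuming the HTTP API's default payload format and the v2 aws-sdk I ended up bundling (more on that bundling below):

```javascript
// index.js: CommonJS, AWS SDK v2 (bundled into the deployment zip)
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

const CORS_HEADERS = {
  'Access-Control-Allow-Origin': 'http://127.0.0.1:5500',
  'Access-Control-Allow-Methods': 'POST,OPTIONS',
  'Access-Control-Allow-Headers': 'Content-Type,x-file-name',
};

exports.handler = async (event) => {
  // Answer the CORS preflight without touching S3.
  if (event.requestContext?.http?.method === 'OPTIONS') {
    return { statusCode: 204, headers: CORS_HEADERS };
  }

  // HTTP APIs lowercase header names before they reach Lambda.
  const fileName = event.headers?.['x-file-name'];
  if (!fileName || !event.body) {
    return { statusCode: 400, headers: CORS_HEADERS, body: 'Missing filename or file content' };
  }

  // The client sends Base64, so decode it back into raw bytes before storing.
  const buffer = Buffer.from(event.body, 'base64');

  await s3.putObject({
    Bucket: 'file-storage-annas',
    Key: fileName,
    Body: buffer,
    ContentType: event.headers?.['content-type'],
  }).promise();

  return { statusCode: 200, headers: CORS_HEADERS, body: 'Upload successful' };
};
```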
For the API Gateway, I used an HTTP API instead of a REST API. It's simpler and cheaper, with fewer chances to mess things up. I set up POST and OPTIONS routes on /upload: POST went to Lambda, OPTIONS handled CORS. The global CORS settings were configured to allow my local frontend origin at http://127.0.0.1:5500, along with the methods and headers the upload needed. Everything looked great.
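For the record, those global CORS settings boiled down to something like this (written here as a plain object; the field names follow the apigatewayv2 CORS configuration):

```javascript
// Global CORS configuration on the HTTP API (a sketch of my settings)
const corsConfiguration = {
  AllowOrigins: ['http://127.0.0.1:5500'],       // my local frontend
  AllowMethods: ['POST', 'OPTIONS'],
  AllowHeaders: ['content-type', 'x-file-name'],
};
```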
Then I hit deploy.
The Chaos: What Went Wrong
CORS meltdown.
Tried uploading a file and instantly got a red error in the browser: "No 'Access-Control-Allow-Origin' header is present on the requested resource."
Apparently, API Gateway acts like your CORS settings don’t exist unless you re-deploy. And even then, sometimes the OPTIONS route just ignores them entirely.
What fixed it? Deleting and recreating the OPTIONS route. Then verifying the global CORS config one more time. And yes, re-deploying after every single change. Finally, the browser calmed down.
Then Lambda broke.
I started getting an error saying require is not defined. Everything worked fine locally, but Lambda was freaking out in the cloud. The issue? I had named my handler file index.mjs, which tells Node to treat it as an ES module, but my code was using CommonJS-style require statements. Lambda wasn't having it.
The fix was hilariously simple. I renamed index.mjs to index.js and everything worked. require was happy again.
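If you haven't run into this before, the difference boils down to which import style Node expects based on the file extension:

```javascript
// What my code was doing (CommonJS). This only works when Node treats the file as
// CommonJS, i.e. index.js without "type": "module" in package.json.
const AWS = require('aws-sdk');

// What an index.mjs file would have needed instead (ES module syntax):
// import AWS from 'aws-sdk';
```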
But of course, the next error hit almost immediately. Cannot find module 'aws-sdk'.
Apparently, the Node.js 18.x runtime on Lambda no longer includes the v2 aws-sdk package by default; it only ships AWS SDK v3. Classic AWS move.
To fix that, I created a local folder, ran npm install aws-sdk, zipped it all up with index.js, and uploaded it to Lambda manually. Not elegant, but solid.
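The other way out, which I didn't take here, is to use the SDK the Node 18 runtime does include: AWS SDK for JavaScript v3. A minimal sketch of the same S3 write with v3, no bundling required:

```javascript
// AWS SDK v3 ships with the Node.js 18.x Lambda runtime, so nothing needs packaging.
const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const s3 = new S3Client({});

async function saveToS3(fileName, buffer, contentType) {
  await s3.send(new PutObjectCommand({
    Bucket: 'file-storage-annas',
    Key: fileName,
    Body: buffer,
    ContentType: contentType,
  }));
}
```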
Finally, I was able to call the Lambda function. But now I was getting 400 Bad Request errors saying Missing filename or file content.
At this point, everything looked perfect on the client side. The x-file-name header was there. The file content was being sent. What now?
Here’s where things got interesting.
HTTP APIs in API Gateway don't use explicit binary media type settings the way REST APIs do. Instead, they decide how to handle a payload based on its Content-Type header. If the client doesn't send the correct MIME type, the file data simply doesn't make it through intact.
Once I made sure the client was sending proper types like image/png or application/pdf, Lambda started receiving the data as expected. Files were uploaded. S3 was happy.
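In code, the client-side fix amounts to taking the MIME type from the File object itself instead of hard-coding something generic (buildHeaders is just an illustrative helper name, not something from the actual page):

```javascript
// Build the request headers from the selected File object.
// file.type is the browser-detected MIME type, e.g. "image/png" or "application/pdf".
function buildHeaders(file) {
  return {
    'Content-Type': file.type || 'application/octet-stream',
    'x-file-name': file.name,
  };
}
```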
That first successful upload felt like magic.
The Real MVP? CloudWatch Logs
CloudWatch saved me. Every single time Lambda didn’t behave or API Gateway didn’t forward something correctly, CloudWatch told me what really happened. Every error, every missing header, every typo, all laid bare in the logs. If you’re not using CloudWatch, you’re working blind.
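One cheap way to get even more out of those logs is to dump the raw incoming event at the top of the handler; everything console.log prints from Lambda ends up in CloudWatch:

```javascript
exports.handler = async (event) => {
  // Log the full event so CloudWatch shows exactly what API Gateway forwarded:
  // headers, body, isBase64Encoded, all of it.
  console.log('Incoming event:', JSON.stringify(event, null, 2));
  // ... rest of the handler
};
```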
So, Was It Worth It?
Absolutely.
I now have a working, fully serverless file upload system. It scales automatically. It’s fast. And I pay nothing when it’s idle.
Looking ahead, I’m already thinking about the next steps. Adding user authentication with Cognito so uploads aren’t open to the world. Building a UI to list and download uploaded files using the S3 API. Maybe even wrapping the whole thing in a React frontend and hosting it on Netlify or S3 static hosting. Secure file access with signed URLs could also be on the list.
Final Thoughts: Serverless Is Easy Until It’s Not
Serverless gives you so much power, but that power comes with a learning curve. You don’t manage servers, but you manage a whole lot of wiring. When something breaks, it’s often subtle and buried in a config panel or a log line you didn’t know existed.
But once you fight through the rough parts and get it working, it feels incredible. Clean, scalable, and surprisingly satisfying.