Demystifying the Cloud Resume Challenge


A few weeks ago, my friend LilSardarX, who works as an AVP for Citi Group (and is also the one who motivated me to write my first tech blog here), casually suggested:
“Hey, you should try the Cloud Resume Challenge. It’s a fun way to get hands-on with cloud tech.”
At that point, I was just graduating, looking for projects that would actually teach me cloud skills, not just leave me copying tutorials. So I decided to give it a try.
And let me tell you—it was worth it.
🌩️ What is the Cloud Resume Challenge?
It’s a challenge designed by Forrest Brazeal that encourages you to build and deploy your resume fully on the cloud using AWS services, CI/CD, and Infrastructure as Code. It’s beginner-friendly but pushes you to learn practical cloud engineering skills. It’s available for three cloud platforms: AWS, Azure, and GCP.
Here is the link: The Cloud Resume Challenge.
🛠️ What I Built
A personal resume website hosted on AWS.
Integrated a live visitor view counter using AWS Lambda + DynamoDB.
Automated updates with GitHub Actions (CI/CD).
Distributed globally using AWS CloudFront for fast loading.
All of this was hosted under my custom domain, making it feel professional.
🚧 My Challenges
✅ DNS and SSL: Setting up Route 53 and SSL for my domain was tricky at first, but now I know how DNS and HTTPS certificates work practically.
✅ Lambda + DynamoDB: Debugging why my visitor count wasn’t updating taught me about IAM roles and how different AWS services talk to each other.
✅ CI/CD: Automating deployments via GitHub Actions was the most satisfying part. One commit, and my site updates instantly!
💡 What I Learned
✅ Hands-on AWS Skills: S3, Lambda, API Gateway, DynamoDB, CloudFront, Route 53.
✅ CI/CD Principles: Automation matters.
✅ Debugging Cloud Architectures: Logs are your best friends.
✅ Cloud is fun when you build real things.
🖥️ How It Works (in Simple Words)
1️⃣ My resume files live in an S3 bucket.
2️⃣ CloudFront distributes them globally for speed.
3️⃣ Route 53 connects my custom domain to the website.
4️⃣ A Lambda function increments the view count whenever someone visits, storing it in DynamoDB.
5️⃣ GitHub Actions automates deployments whenever I push changes.
🛠️ Project Architecture
🖥️ How It Works (In Detail)
1️⃣ My resume files live in an S3 bucket.
Think of AWS S3 (Simple Storage Service) as a big online folder where you store your website files (HTML, CSS, images). I uploaded all the files for my resume website to an S3 bucket, enabling static website hosting so it can serve the files to the public.
This means whenever someone visits my resume, it fetches the files directly from S3.
Here, S3 acts as the source bucket, essentially the source folder for the whole site.
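To make that concrete, here is a minimal boto3 sketch of how the static-website setup and upload could be scripted. The bucket name and file names are placeholders, and this is an illustration rather than the exact way I set mine up (I used the console and CI for this part).

import boto3

s3 = boto3.client("s3")
BUCKET = "my-resume-bucket"  # placeholder -- use your own bucket name

# Enable static website hosting on the bucket
s3.put_bucket_website(
    Bucket=BUCKET,
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)

# Upload the resume files with the correct content types
s3.upload_file("index.html", BUCKET, "index.html",
               ExtraArgs={"ContentType": "text/html"})
s3.upload_file("styles.css", BUCKET, "styles.css",
               ExtraArgs={"ContentType": "text/css"})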
2️⃣ CloudFront distributes them globally for speed.
To make my resume load fast anywhere in the world, I set up AWS CloudFront, which is a Content Delivery Network (CDN).
It keeps cached copies of my resume files in multiple locations globally, reducing the load time significantly for visitors.
So, if someone from Europe visits, CloudFront serves the files from the nearest edge location instead of fetching them from the US every time.
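If you're curious whether CloudFront is really serving from an edge cache, the X-Cache response header it adds tells you. A quick Python check (any CloudFront-backed URL works; mine is used here as an example):

import urllib.request

url = "https://akshaythakare.link"

with urllib.request.urlopen(url) as resp:
    # CloudFront adds an X-Cache header: "Hit from cloudfront" means the file
    # came from an edge cache, "Miss from cloudfront" means it went back to S3.
    print(resp.headers.get("X-Cache"))
    print(resp.headers.get("X-Amz-Cf-Pop"))  # which edge location answered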
3️⃣ Route 53 connects my custom domain to the website.
To make my website accessible under a custom domain (here akshaythakare.link), I used AWS Route 53, which is Amazon’s DNS service.
I connected my domain to the S3 + CloudFront setup by updating the DNS records in Route 53.
This allows visitors to access my resume using a clean, professional URL instead of a long AWS-generated link.
You can register your own custom domain directly through Route 53. I got mine (akshaythakare.link) for just $3 per year!
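For illustration, the alias record that ties the domain to CloudFront could also be created with boto3. The hosted zone ID and distribution domain below are placeholders; Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS uses for all CloudFront alias targets. This is a sketch of the DNS step, not the exact commands I ran.

import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "ZXXXXXXXXXXXXX"            # placeholder -- your hosted zone ID
CLOUDFRONT_DOMAIN = "dxxxxxxxxxxxxx.cloudfront.net"  # placeholder -- your distribution domain

# Point the apex domain at the CloudFront distribution with an alias A record
route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "akshaythakare.link",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",  # fixed zone ID for CloudFront aliases
                    "DNSName": CLOUDFRONT_DOMAIN,
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)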
4️⃣ A Lambda function increments the view count whenever someone visits, storing it in DynamoDB.
I wanted to track how many people visit my resume, so I created a serverless AWS Lambda function that triggers when someone loads the website.
This function updates a view count stored in a DynamoDB table (a NoSQL database), effectively acting as a live visitor counter.
It was also a great learning experience to handle permissions (attaching the AmazonDynamoDBFullAccess policy to the Lambda's IAM role) so the function could write to DynamoDB securely.
import json
import boto3

# Initialize the DynamoDB resource
dynamodb = boto3.resource('dynamodb')

# Reference the DynamoDB table where we store view counts
table = dynamodb.Table('cloud-resume-test')

def lambda_handler(event, context):
    # Retrieve the current item (with 'id' = '1') from the table
    response = table.get_item(Key={
        'id': '1'
    })

    # Get the current view count from the retrieved item
    # (DynamoDB returns numbers as Decimal, so cast to int)
    views = int(response['Item']['views'])

    # Increment the view count by 1
    views = views + 1

    # Print the updated view count to CloudWatch logs for debugging
    print(views)

    # Update the item in the DynamoDB table with the new view count
    response = table.put_item(Item={
        'id': '1',
        'views': views
    })

    # Return the updated view count as the Lambda response
    return views
Summary of what this Lambda does:
✅ Connects to DynamoDB
✅ Fetches the current view count for id: 1
✅ Increments the view count
✅ Updates DynamoDB with the new count
✅ Returns the updated count to your front-end
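One thing the function assumes is that the table and the id = '1' item already exist. If you're recreating this, a one-time setup script along these lines would do it (a sketch using the same table name as the code above; adjust if yours differs):

import boto3

dynamodb = boto3.resource("dynamodb")

# One-time setup: create the table the Lambda reads and writes
table = dynamodb.create_table(
    TableName="cloud-resume-test",
    KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
    AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
    BillingMode="PAY_PER_REQUEST",
)
table.wait_until_exists()

# Seed the counter item so the first get_item call has something to read
table.put_item(Item={"id": "1", "views": 0})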
5️⃣ GitHub Actions automates deployments whenever I push changes.
Instead of manually uploading updated files to S3 every time I make a change, I automated the deployment process using GitHub Actions (CI/CD). The YAML workflow that drives this process is shown below.
Now, whenever I push changes to my GitHub repository, GitHub Actions automatically uploads the updated files to my S3 bucket, invalidates the CloudFront cache, and makes my changes live instantly.
This makes updating my resume seamless while teaching me how real-world CI/CD pipelines work.
# Name of your GitHub Actions workflow, shown in GitHub Actions tab
name: Upload Website s3

# Trigger conditions for this workflow
on:
  workflow_dispatch:   # Allows manual triggering from GitHub UI
  push:
    branches:
      - main           # Triggers automatically when changes are pushed to the 'main' branch

jobs:
  deploy:
    runs-on: ubuntu-latest   # The environment where this job will run
    steps:
      # Step 1: Check out your repository's code so the workflow can access your files
      - uses: actions/checkout@master

      # Step 2: Use the 's3-sync-action' to sync files to your S3 bucket
      - uses: jakejarvis/s3-sync-action@master
        with:
          args: --acl private --follow-symlinks --delete
          # --acl private: keeps uploaded files private (change to 'public-read' if needed)
          # --follow-symlinks: follows symbolic links during upload
          # --delete: deletes files in the bucket that are not present in your repo, keeping S3 in sync
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}                 # Your target S3 bucket (stored securely in GitHub Secrets)
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}         # AWS credentials (from GitHub Secrets)
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} # AWS credentials (from GitHub Secrets)
          AWS_REGION: 'us-east-1'                                     # Region of your S3 bucket
          SOURCE_DIR: './'                                            # Directory in your repo to sync to S3 (here, the root folder)
🚀 What this YAML does:
✅ Automatically uploads your website files to S3 whenever you push to the main branch.
✅ Can also be triggered manually from GitHub (workflow_dispatch).
✅ Uses jakejarvis/s3-sync-action for efficient syncing, ensuring your S3 bucket mirrors your repo.
✅ Keeps your CI/CD pipeline clean, allowing your resume site to update with one push.
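One note: the workflow above only syncs S3. The CloudFront cache invalidation I mentioned can be handled as a separate step; here is a rough sketch of a small boto3 script such a step could run (the distribution ID is a placeholder, and this is an illustration rather than my exact pipeline code):

import time
import boto3

cloudfront = boto3.client("cloudfront")

DISTRIBUTION_ID = "EXXXXXXXXXXXXX"  # placeholder -- your CloudFront distribution ID

# Invalidate everything so CloudFront fetches the freshly synced files from S3
cloudfront.create_invalidation(
    DistributionId=DISTRIBUTION_ID,
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/*"]},
        # CallerReference must be unique per request; a timestamp works fine
        "CallerReference": str(time.time()),
    },
)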
🌿 What I’m Trying Next: Terraform
Now that my Cloud Resume is live and updating automatically, I’m excited to take it a step further by learning Terraform.
Terraform is an Infrastructure as Code (IaC) tool that lets you define your cloud resources (like S3 buckets, Lambda functions, DynamoDB tables) in code instead of clicking around in the AWS console. This means you can:
✅ Recreate your entire infrastructure in minutes with a single command.
✅ Version control your infrastructure alongside your code in GitHub.
✅ Automate your environment setup for practice, demos, or portfolio showcases.
✅ Understand how real-world cloud teams manage scalable environments.
My Next Steps:
Write Terraform configuration files for:
My S3 bucket for static website hosting.
CloudFront distribution for global delivery.
Route 53 DNS records for my custom domain.
Lambda function and its IAM roles.
DynamoDB table for the visitor counter.
Test applying and destroying the infrastructure reliably.
Connect GitHub Actions to trigger Terraform automatically for updates.
By adding Terraform to this project, I will not only strengthen my cloud and DevOps skills but also learn how to manage cloud projects efficiently in a professional setting.
If you’re interested in seeing how I do this, let me know—I might write a follow-up post sharing my Terraform configuration and lessons learned!
🌱 Why You Should Try It Too
If you want to break into cloud engineering or DevOps, this challenge is a solid, real-world project for your portfolio. It not only makes your resume visible to recruiters but also shows them you can build and ship cloud projects independently.
If you’re curious to check it out, here’s my live Cloud Resume!
Feel free to reach out if you’re thinking of starting yours! I’d be happy to share what helped me.