AWS Resume On Cloud Challenge

Links:
Resume Website: https://resume.ankitincloud.com/
GitHub Repo: https://github.com/ankit251094/aws-cloud-resume-v1/tree/main
The Cloud Resume Challenge is a fantastic way to gain hands-on experience with cloud engineering tools and technologies! It’s designed to teach you practical cloud skills by guiding you through building a resume website hosted on AWS. Along the way, you'll work with services like:
Amazon S3 (for hosting the website)
AWS Lambda (for serverless backend functionality)
DynamoDB (for storing data)
CloudFront (for content delivery and caching)
Route 53 (for DNS management)
ACM (AWS Certificate Manager, for SSL/TLS certificates)
Terraform (Infrastructure as Code)
CloudWatch (monitoring and logs)
GitHub (code repository and CI/CD using GitHub Actions workflows)
Visual Studio Code (IDE with plugins for Python, Terraform, and Git)
This challenge helps you to practice real-world skills, such as automating infrastructure with Terraform, setting up security measures, and integrating various AWS services.
Hosting the Static Website on S3
To create an index.html and style.css file from a resume document using a no-coding approach, you can follow these steps:
Prepare the Resume Document:
- Ensure your resume is in a Word document format (.doc or .docx).
Use AI Tools for Conversion:
- ChatGPT or DeepSeek AI: Upload your resume document to one of these AI tools. You can then provide a command or prompt to convert the document into HTML and CSS format. For example, you might use a prompt like: "Please convert this resume document into HTML and CSS files."
Download the Files:
- Once the AI tool processes your request, it should provide you with downloadable index.html and style.css files.
Review and Edit:
- Open the downloaded files in a text editor or an IDE like Visual Studio Code to review the content. Make any necessary adjustments to ensure the formatting and styling meet your expectations.
S3 Bucket Creation:
- Create an S3 bucket named ankit-cloud-resume-challenge, with the S3 static website hosting option enabled and Block all public access set to ON to prevent unauthorized access.
Deploy:
- Use these files to host your static website on Amazon S3 as part of the Cloud Resume Challenge (a minimal boto3 sketch of these bucket steps follows below).
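For reference, here is a minimal boto3 sketch of the bucket creation and upload steps above, which the post itself does through the AWS UI. It assumes the bucket name from the post, the us-east-1 region used elsewhere in the project, and that index.html and style.css sit in the current directory.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")
bucket = "ankit-cloud-resume-challenge"  # bucket name from the post

# Create the bucket (us-east-1 needs no CreateBucketConfiguration)
s3.create_bucket(Bucket=bucket)

# Enable static website hosting with index.html as the index document
s3.put_bucket_website(
    Bucket=bucket,
    WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
)

# Block all public access; CloudFront (via OAI) will be the only reader
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Upload the site files with the right Content-Type so browsers render them
for filename, content_type in [("index.html", "text/html"), ("style.css", "text/css")]:
    s3.upload_file(filename, bucket, filename, ExtraArgs={"ContentType": content_type})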
Amazon S3 + Amazon CloudFront: A Match Made in the Cloud
Amazon S3 and CloudFront work together to store, secure, and deliver static content at scale. CloudFront caches content at edge locations, reducing the load on your S3 bucket and improving response times for users. Geographic restrictions allow you to limit access to specific regions, such as the USA and Canada. Additionally, Origin Access Identity (OAI) ensures that only CloudFront can access your S3 bucket, preventing direct access to your content and enhancing security. This combination provides fast, secure, and controlled delivery of your website and assets.
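As an illustration of the geographic restriction mentioned above, here is a hedged boto3 sketch that whitelists the USA and Canada on an existing distribution; the distribution ID is a placeholder, and the post configures this through the CloudFront UI rather than code.
import boto3

cloudfront = boto3.client("cloudfront")
distribution_id = "E1234567890ABC"  # hypothetical distribution ID

# Fetch the current distribution config plus the ETag required for updates
resp = cloudfront.get_distribution_config(Id=distribution_id)
config = resp["DistributionConfig"]
etag = resp["ETag"]

# Whitelist only the USA and Canada, as described above
config["Restrictions"] = {
    "GeoRestriction": {
        "RestrictionType": "whitelist",
        "Quantity": 2,
        "Items": ["US", "CA"],
    }
}

# Push the updated config back; IfMatch must carry the ETag we just read
cloudfront.update_distribution(
    Id=distribution_id, DistributionConfig=config, IfMatch=etag
)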
An Alternate Domain Name (CNAME) in Amazon CloudFront is an optional setting that allows you to use your own custom domain name (https://resume.ankitincloud.com/) instead of the default CloudFront domain name.
An SSL certificate needs to be requested from AWS Certificate Manager (ACM), and the certificate then has to be validated using either DNS validation or email validation. I chose DNS validation as it was easier: ACM provides a CNAME record that you add to Route 53 to prove ownership of the domain.
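For reference, a minimal boto3 sketch of the same request-and-validate flow; the domain comes from the post, the region reflects the requirement that certificates used with CloudFront live in us-east-1, and the rest is illustrative.
import boto3

acm = boto3.client("acm", region_name="us-east-1")  # CloudFront requires ACM certs in us-east-1

# Request a public certificate for the custom domain with DNS validation
cert = acm.request_certificate(
    DomainName="resume.ankitincloud.com",
    ValidationMethod="DNS",
)
cert_arn = cert["CertificateArn"]

# ACM returns the CNAME record that has to be added to Route 53 to prove
# ownership of the domain (the record can take a few seconds to appear)
details = acm.describe_certificate(CertificateArn=cert_arn)
record = details["Certificate"]["DomainValidationOptions"][0]["ResourceRecord"]
print(record["Name"], record["Type"], record["Value"])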
Route the Domain Name to CloudFront: AWS Route 53
Route 53 is a domain registrar and DNS service provided by AWS. I purchased the domain ankitincloud.com through Route 53. To point the custom domain name (resume.ankitincloud.com) to the CloudFront distribution, I created an alias A record in Route 53. This record routes traffic from the custom domain to the CloudFront distribution, ensuring that requests to resume.ankitincloud.com are directed to the correct resources hosted on CloudFront.
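A minimal boto3 sketch of that alias A record, assuming a hypothetical hosted zone ID for ankitincloud.com and a hypothetical CloudFront domain name; Z2FDTNDATAQYW2 is the fixed hosted zone ID AWS uses for all CloudFront alias targets.
import boto3

route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0123456789EXAMPLE"               # hypothetical hosted zone ID for ankitincloud.com
CLOUDFRONT_DOMAIN = "d1234abcd5678.cloudfront.net"  # hypothetical CloudFront distribution domain

route53.change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "resume.ankitincloud.com",
                "Type": "A",
                # Alias A record pointing at the CloudFront distribution
                "AliasTarget": {
                    "HostedZoneId": "Z2FDTNDATAQYW2",  # fixed zone ID for CloudFront aliases
                    "DNSName": CLOUDFRONT_DOMAIN,
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)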
DynamoDB to Store and Retrieve the Visitor Count
Create a DynamoDB table named resume-view-counter with id as the partition key and views as an attribute, which is updated by an AWS Lambda function every time the website is visited by anyone.
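A minimal boto3 sketch of that table, assuming on-demand billing (matching the Terraform version later in the post) and seeding the single counter item with id "1" that the Lambda function below reads and increments.
import boto3

dynamodb = boto3.client("dynamodb")

# Create the table with "id" as the partition key
dynamodb.create_table(
    TableName="resume-view-counter",
    AttributeDefinitions=[{"AttributeName": "id", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "id", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
dynamodb.get_waiter("table_exists").wait(TableName="resume-view-counter")

# Seed the counter item so the first get_item in the Lambda function succeeds
dynamodb.put_item(
    TableName="resume-view-counter",
    Item={"id": {"S": "1"}, "views": {"N": "0"}},
)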
Lambda Function to Communicate with DynamoDB
Create a Lambda function that accepts requests from your web application through a Lambda Function URL and interacts with the DynamoDB table. The Function URL is invoked from the JavaScript section of your code. To secure the URL and prevent unauthorized access, configure CORS to allow only requests from resume.ankitincloud.com and validate that the Referer header starts with your domain before processing any request.
The function retrieves the number of times the resume website has been visited, increments the count on each visit, and returns it so it can be displayed in the profile views section of the site.
Lambda Code in Python
import json
import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('resume-view-counter')

def lambda_handler(event, context):
    headers = event.get('headers', {})

    # Get the Referer header
    referer = headers.get('referer', '')

    # Allow the request only if the Referer starts with the resume domain
    if not referer.startswith('https://resume.ankitincloud.com'):
        return {
            'statusCode': 403,
            'headers': {
                'Access-Control-Allow-Origin': '*',
                'Access-Control-Allow-Headers': '*'
            },
            'body': json.dumps({'message': 'Forbidden: Invalid Referer'})
        }

    # Get the current count
    response = table.get_item(Key={'id': '1'})
    views = response['Item']['views']

    # Convert the Decimal returned by DynamoDB to int for safe arithmetic
    views = int(views) + 1
    print(f"Updated views: {views}")

    # Update the count in the table
    table.put_item(Item={'id': '1', 'views': views})

    return {
        'statusCode': 200,
        'body': json.dumps({"count": views})
    }
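A quick way to exercise the Referer guard above is a short test script against the Function URL; the URL below is a placeholder, and the third-party requests package is assumed to be installed.
import requests

FUNCTION_URL = "https://abc123.lambda-url.us-east-1.on.aws/"  # hypothetical Function URL

# Without the expected Referer the function should answer 403
print(requests.get(FUNCTION_URL).status_code)

# With a Referer from the resume domain it should return the updated count
resp = requests.get(FUNCTION_URL, headers={"Referer": "https://resume.ankitincloud.com/"})
print(resp.status_code, resp.json())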
Implementing CI/CD with GitHub Actions
GitHub Actions is a powerful tool to automate your CI/CD workflows directly within your repository. By defining workflows in a .yml file, you can automate tasks like building, testing, and deploying your application.
Steps to Set Up a CI/CD Pipeline
- In your repository, create a .github/workflows directory.
- Add a .yml file, for example frontend_cicd.yml.
- The workflow should be triggered whenever there is a push to the main branch. This will automatically deploy the frontend code to the S3 bucket whenever code is pushed to main.
- Add AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_S3_BUCKET (the values referenced in the workflow below) under Settings → Secrets and variables → Actions → Repository secrets.
Workflow Logic
name: Upload Website to S3

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - uses: jakejarvis/s3-sync-action@master
        with:
          args: --follow-symlinks --delete
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: 'us-east-1'
          SOURCE_DIR: 'website'
IAM Roles and Permissions
The Lambda execution role needs the AmazonDynamoDBFullAccess policy along with logs:CreateLogGroup, logs:CreateLogStream, and logs:PutLogEvents, i.e. permissions for CloudWatch Logs.
The S3 bucket policy should be defined so that CloudFront has s3:GetObject permission on the bucket, as sketched below.
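A minimal sketch of such a bucket policy applied with boto3, assuming the bucket name from the post and a hypothetical Origin Access Identity ID; the principal format is the standard one for CloudFront OAIs.
import json
import boto3

s3 = boto3.client("s3")

BUCKET = "ankit-cloud-resume-challenge"
OAI_ID = "E2EXAMPLEOAIID"  # hypothetical Origin Access Identity ID

# Allow only the CloudFront OAI to read objects from the bucket
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {
            "AWS": f"arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity {OAI_ID}"
        },
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{BUCKET}/*",
    }],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))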
After this, the website was live and accessible at https://resume.ankitincloud.com/.
Infrastructure As Code - Terraform
Terraform is just another way to create AWS resources without using the UI; I tried it for learning and hands-on experience with Terraform.
Terraform lets you create AWS resources in the cloud using code, manage infrastructure, and standardize the deployment flow. I created the AWS Lambda function with its Function URL, the DynamoDB table, and the S3 buckets using HCL (HashiCorp Configuration Language), which is the language Terraform uses.
variable.tf
# Including the variable file makes the Terraform configuration more dynamic
variable "s3_bucket_name" {
  description = "S3 bucket Name"
  type        = string
  default     = "aws-cloud-resume-ankit-v2"
}

variable "iam_policy_name" {
  description = "My first terraform policy"
  type        = string
  default     = "first-terraform-iam-policy"
}

variable "iam_user_name" {
  description = "My first terraform iam user"
  type        = string
  default     = "first-terraform-user"
}

variable "dynamo_db_table_name" {
  description = "DynamoDB table name"
  type        = string
  default     = "visiter-counter"
}

variable "s3_bucket_name_lambda" {
  description = "S3 bucket Name for Lambda"
  type        = string
  default     = "aws-cloud-resume-ankit-v2-lambda"
}
main.tf
# The terraform {} block contains the settings and providers Terraform will use to provision infra
terraform {
  # Store remote state in Terraform Cloud
  # This will create a workspace in Terraform Cloud (HCP)
  cloud {
    organization = "AWS-Terraform-Tutorial-Ankit-Pandey"
    workspaces {
      name = "learn-terraform-aws"
    }
  }

  # Terraform installs the provider from the Terraform Registry
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.16"
    }
  }

  required_version = ">= 1.2.0"
}

# The provider block configures the plugin used by Terraform to create and manage resources
provider "aws" {
  region = "us-east-1"
}

# A resource block defines a component of the infrastructure
# A resource block has 2 strings before the block: resource type and resource name
# Together the resource type and resource name form a unique ID for the resource
resource "aws_s3_bucket" "my_bucket" {
  bucket = var.s3_bucket_name
}

# The aws_iam_policy_document data source uses HCL to generate a JSON representation of an IAM policy document.
# Writing the policy as a Terraform configuration has several advantages over defining your policy inline in the aws_iam_policy resource.
data "aws_iam_policy_document" "s3_policy" {
  statement {
    actions   = ["s3:ListAllMyBuckets"]
    resources = ["arn:aws:s3:::*"]
    effect    = "Allow"
  }
  statement {
    actions   = ["s3:*"]
    resources = [aws_s3_bucket.my_bucket.arn]
    effect    = "Allow"
  }
}

resource "aws_iam_policy" "iam_policy" {
  name   = var.iam_policy_name
  policy = data.aws_iam_policy_document.s3_policy.json
}

resource "aws_iam_user_policy_attachment" "attachment" {
  user       = aws_iam_user.new_user.name
  policy_arn = aws_iam_policy.iam_policy.arn
}

resource "aws_iam_user" "new_user" {
  name = var.iam_user_name
}

# resource "aws_dynamodb_table" "dynamo-visitorcounter" - Terraform creates a DynamoDB table on AWS
# aws_dynamodb_table tells Terraform you are creating a DynamoDB table
# dynamo-visitorcounter is just the Terraform name (internal name inside your Terraform code)
resource "aws_dynamodb_table" "dynamo-visitorcounter" {
  name         = "visitor-counter"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"

  attribute {
    name = "id"
    type = "S"
  }
}

resource "aws_s3_bucket" "lambda_bucket" {
  bucket = var.s3_bucket_name_lambda
}

# This configuration uses the archive_file data source to generate a zip archive of the Lambda source
data "archive_file" "lambda_my_func" {
  type        = "zip"
  source_dir  = "${path.module}/lambda"
  output_path = "${path.module}/lambda.zip"
}

# aws_s3_object resource to upload the archive to your S3 bucket
resource "aws_s3_object" "lambda_my_func" {
  bucket = aws_s3_bucket.lambda_bucket.id
  key    = "lambda.zip"
  source = data.archive_file.lambda_my_func.output_path

  # File fingerprint:
  # filemd5(...) calculates the MD5 hash (a unique ID) of a file,
  # so Terraform is creating a checksum (fingerprint) of your Lambda code .zip.
  # This helps Terraform know if the file changes, so it can update/redeploy the Lambda automatically.
  etag = filemd5(data.archive_file.lambda_my_func.output_path)
}

# Configure the Lambda function
resource "aws_lambda_function" "lambda_my_func" {
  function_name = "myFunc"
  s3_bucket     = aws_s3_bucket.lambda_bucket.id
  s3_key        = aws_s3_object.lambda_my_func.key

  runtime     = "python3.9"
  handler     = "myFunc.lambda_handler"
  timeout     = 15
  memory_size = 128

  # source_code_hash changes whenever you update the code contained in the archive,
  # which lets Lambda know that there is a new version of your code available.
  source_code_hash = data.archive_file.lambda_my_func.output_base64sha256

  # A role which grants the function permission to access AWS services and resources in your account
  role = aws_iam_role.lambda_exec.arn
}

# Defines a log group to store log messages from your Lambda function for 3 days.
# By convention, Lambda stores logs in a group with the name /aws/lambda/<Function Name>.
resource "aws_cloudwatch_log_group" "lambda_my_func" {
  name              = "/aws/lambda/${aws_lambda_function.lambda_my_func.function_name}"
  retention_in_days = 3
}

# Defines an IAM role that allows Lambda to access resources in your AWS account
resource "aws_iam_role" "lambda_exec" {
  name = "serverless_lambda"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Sid    = ""
      Principal = {
        Service = "lambda.amazonaws.com"
      }
    }]
  })
}

# Attaches the basic execution policy (CloudWatch Logs) to the IAM role
resource "aws_iam_role_policy_attachment" "lambda_policy" {
  role       = aws_iam_role.lambda_exec.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}

# Attaches DynamoDB access so the function can read and update the visitor counter
resource "aws_iam_role_policy_attachment" "lambda_dynamoroles" {
  role       = aws_iam_role.lambda_exec.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess"
}

resource "aws_lambda_function_url" "my_lambda_url" {
  function_name      = aws_lambda_function.lambda_my_func.function_name
  authorization_type = "NONE" # or "AWS_IAM" if you want auth
}
output.tf
output "output_s3_bucket_name_arn" {
description = "ARN of S3 Bucket Created"
value = aws_s3_bucket.my_bucket.arn
}
output "output_iam_policy_name" {
description = "Pplicy ID of IAM "
value = aws_iam_policy.iam_policy.policy_id
}
output "output_dynamo_db_table_name" {
description = "DynamoDB table name"
value = aws_dynamodb_table.dynamo-visitorcounter.name
}
output "lambda_bucket_name" {
description = "Lambda S3 Bucket name"
value = aws_s3_bucket.lambda_bucket.bucket
}
output "function_name" {
description = "Name of the Lambda function."
value = aws_lambda_function.lambda_my_func.function_name
}
HCP Terraform lets you store the state remotely on Terraform Cloud, making it easy for teams to version, audit, and collaborate on infrastructure changes. It also securely stores variables like API tokens and access keys, providing a safe and stable environment for long-running Terraform processes.
(Screenshots: the workspace in HCP, the resources created in HCP, and an HCL run via the CLI.)