☁️ How I Built a Fully Automated AWS Infrastructure with Terraform to Deploy a Node.js App (Free Tier Friendly)

Aubrey T Dube
6 min read

🧠 Intro: Why I Built This Project

As a hands-on cloud practitioner, I wanted to build more than just a static site or clone a To-Do app. I wanted to provision real AWS infrastructure from scratch — not by clicking around the AWS console, but by writing clean, reusable Terraform code.

The goal?
👉 Automate everything — from spinning up an EC2 server to deploying a working Node.js app connected to a MySQL database and S3 bucket.

And the best part? It all runs within the AWS Free Tier — making it perfect for anyone starting out in cloud or DevOps.

This project provisions infrastructure using Terraform, a popular Infrastructure as Code (IaC) tool.

🔧 What I Used (Tech Stack)

  • Terraform: Infrastructure as Code

  • AWS EC2: Runs the Node.js app

  • AWS RDS (MySQL): Managed relational database for the app

  • AWS S3: Object storage for file uploads (optional)

  • VPC & Security Groups: Secure, isolated network setup

  • GitHub: Hosts the Node.js app repo

  • VS Code: Local development

📐 Project Architecture

Here’s a simple high-level architecture of how everything fits together:
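Browser ──HTTP :3000──▶ EC2 (Node.js app, public subnet)
                          ├──MySQL :3306──▶ RDS (MySQL, private SG)
                          └──S3 API───────▶ S3 bucket (file uploads)

Everything lives in a single VPC, and all of it is provisioned by Terraform.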

🎯 What This Project Does

Once I run terraform apply, the project does the following automatically:

  1. Creates a VPC with public subnets

  2. Provisions an EC2 instance with SSH and HTTP access

  3. Spins up an RDS MySQL instance (locked down by its own security group)

  4. Creates an S3 bucket for file storage

  5. Runs a user_data script on EC2 to:

    • Install Node.js & dependencies

    • Clone my GitHub repo

    • Inject .env config using Terraform variables

    • Start the server

After provisioning, the app is available at:

http://<my-ec2-ip>:3000
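The <my-ec2-ip> placeholder is surfaced by a Terraform output. Here's a minimal sketch of what outputs.tf might contain, assuming the instance resource is named tf_ec2_instance (the name is illustrative):

# outputs.tf (sketch)
output "ec2_public_ip" {
  description = "Public IP of the app server"
  value       = aws_instance.tf_ec2_instance.public_ip
}

Running terraform output ec2_public_ip after apply prints the address to paste into the browser.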


🛠️ How I Built It (Step by Step)


🔑 Step 1: Define Infrastructure with Terraform

I wrote modular, clean Terraform code to keep it maintainable. Here's the basic structure:

terraform/
├── ec2.tf
├── s3.tf
├── rds.tf
├── provider.tf
├── variables.tf
├── outputs.tf
├── modules/
│   └── ec2-security-group/
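provider.tf pins the AWS provider and region. A minimal sketch (the version constraint is illustrative; us-east-1 matches the RDS endpoint shown later):

# provider.tf (sketch)
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}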


☁️ Step 2: Automate EC2 with user_data

To ensure everything worked on first boot, I added a user_data script in ec2.tf. It handles:

  • Installing Node.js and npm

  • Cloning the app repo from GitHub

  • Writing RDS details to .env

  • Running npm install

  • Starting the app with node app.js

📦 This makes the entire process zero-touch after terraform apply.

Here's a trimmed version of the script (inline in ec2.tf, or kept in a separate script.sh):

user_data = <<-EOF
              #!/bin/bash
              # Install Node.js and npm
              sudo apt update -y
              sudo apt install -y nodejs npm git

              # Clone the app repo (swap in your own URL)
              git clone <my-repo-url> app && cd app

              # Write RDS details to .env via Terraform interpolation
              echo "DB_HOST=${local.rds_endpoint}" | sudo tee .env
              echo "DB_USER=${aws_db_instance.tf_rds_instance.username}" | sudo tee -a .env
              echo "DB_PASS=${aws_db_instance.tf_rds_instance.password}" | sudo tee -a .env

              # Install dependencies and start the server
              npm install
              node app.js &
              EOF
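For reference, the local.rds_endpoint used above isn't built in; it might be defined like this (a sketch, reusing the resource name from the snippet):

locals {
  # .address is the hostname alone; .endpoint would append ":3306",
  # which neither the mysql client nor DB_HOST wants
  rds_endpoint = aws_db_instance.tf_rds_instance.address
}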

🔒 Step 3: Secure Communication with Custom Security Groups

I created two labeled security groups:

  • sg-ec2: Allows inbound SSH (port 22) and app traffic (port 3000)

  • sg-rds: Only allows inbound MySQL (port 3306) from the EC2 instance's private IP
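Here's a trimmed sketch of how these might look in Terraform. Note one variant: instead of pinning sg-rds to the instance's private IP, this version references the EC2 security group itself, which survives IP changes (resource and VPC names are illustrative):

resource "aws_security_group" "sg_ec2" {
  name   = "sg-ec2"
  vpc_id = aws_vpc.tf_vpc.id # assumed VPC resource name

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "Node.js app"
    from_port   = 3000
    to_port     = 3000
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress { # outbound for apt, git clone, and RDS
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "sg_rds" {
  name   = "sg-rds"
  vpc_id = aws_vpc.tf_vpc.id

  ingress {
    description     = "MySQL from the EC2 group only"
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.sg_ec2.id]
  }
}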


🖥️ Step 4: SSH into the EC2 Instance

Use your key pair:


ssh -i C:\Users\Thabo\.ssh\tf_greykeypair.pem ubuntu@54.162.183.106
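The key pair itself can live in Terraform too, so nothing is created by hand. A one-resource sketch, assuming the public half of tf_greykeypair sits next to the .pem:

resource "aws_key_pair" "tf_greykeypair" {
  key_name   = "tf_greykeypair"
  public_key = file(pathexpand("~/.ssh/tf_greykeypair.pub"))
}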

🗃️ Step 5: Connect EC2 to RDS

Once provisioned, the EC2 instance can connect securely to RDS using the .env config:


mysql -h nodejs-rds-mysql.c4j82awmoz78.us-east-1.rds.amazonaws.com -u admin -p

From there, I could run whatever SQL the app needs, like creating its database and tables.

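For completeness, the RDS instance behind that endpoint might look something like this in rds.tf (values are illustrative; db.t3.micro and 20 GB keep it inside the Free Tier):

resource "aws_db_instance" "tf_rds_instance" {
  identifier          = "nodejs-rds-mysql" # shows up in the endpoint above
  engine              = "mysql"
  instance_class      = "db.t3.micro"
  allocated_storage   = 20
  username            = "admin"
  password            = var.db_password # assumed variable; never hardcode secrets
  skip_final_snapshot = true

  vpc_security_group_ids = [aws_security_group.sg_rds.id]
}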

🌐 Step 6: Confirm the App in the Browser

Once the app was running, I opened http://54.144.246.143:3000, where 54.144.246.143 is the public IP of the instance.


📤 (Optional) Step 7: S3 Bucket for File Storage

I also provisioned an S3 bucket using Terraform, which can be used by the app for file uploads (if needed).

s3.tf

# create bucket resource
resource "aws_s3_bucket" "tf_s3_bucket" {
  bucket = "nodejsbucket007"
  tags = {
    Name        = "Nodejs terraform S3 Bucket"
    Environment = "Dev"
  }
} 

# add objects into bucket
resource "aws_s3_object" "tf_s3_object" {
  for_each = fileset("../public/images", "**")

  bucket = aws_s3_bucket.tf_s3_bucket.bucket
  key    = "images/${each.value}"
  source = "../public/images/${each.value}" # for a set, each.key == each.value
}
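To make the bucket usable from the app, its name can be exposed as an output (or injected into .env alongside the DB settings). A one-liner sketch:

output "s3_bucket_name" {
  value = aws_s3_bucket.tf_s3_bucket.bucket
}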

🧼 Step 8: Clean-Up

To tear everything down and avoid charges:


terraform destroy

🎓 What I Learned

🧠 Terraform forces you to think like an infrastructure architect
When writing Terraform, I wasn't just automating tasks; I was designing a system. Every decision, from how security groups communicate to how VPC networking flows, requires architectural thinking. It gave me a deeper appreciation of how cloud infrastructure really works under the hood.

⚠️ user_data is powerful — but debugging it is non-trivial
Since user_data runs once and silently on EC2 launch, troubleshooting issues meant SSH-ing into the instance and reading logs like /var/log/cloud-init-output.log. This taught me to write cleaner, fail-safe shell scripts and pre-validate each step.

🔁 Terraform is declarative, not procedural — and that matters
Unlike typical programming, you don’t write steps in order — you describe what you want, and Terraform figures out how to get there. I had to think in terms of resource dependencies, outputs, and relationships — not execution order. This mental shift was challenging but rewarding.

🔐 Security Groups are your silent gatekeepers
At one point, my app couldn’t connect to the database. It turned out the issue wasn’t with the app — it was with a misconfigured security group. This taught me that access control via SGs is subtle but critical. I learned to create labeled groups like sg-ec2 and sg-rds with least-privilege principles.

📦 Infrastructure is code — but it’s also documentation
Writing clean, modular .tf files made me realize that Terraform isn’t just automation — it’s a living blueprint of your system. Anyone can read my code and understand how the infrastructure is designed. It pushed me to write better comments and use descriptive naming.

🧪 Testing infra is harder than testing code — but possible
There’s no npm test for infrastructure. I learned to use terraform plan like a test runner, validate changes incrementally, and test functionality manually by logging into EC2 and verifying things work. It gave me new appreciation for monitoring and automation.


📁 GitHub Repo

You can view the full source code and setup here:

View on GitHub

🙏 Credits

Project idea inspired by Verma-Kunal.


💬 Final Thoughts

This project is a great example of how DevOps and software deployment meet in real-world cloud workflows. If you're learning AWS, Terraform, or cloud infrastructure in general — try replicating and extending this. Add CI/CD, monitoring, or containerize it next!

Thanks for reading 🙌
Feel free to connect or drop feedback on Aubrey T Dube LinkedIn or GitHub


🔜 What's Next?

  • Add CI/CD pipeline (maybe with GitHub Actions)

  • Add CloudWatch logging and monitoring

