From Zero to Production: Build Scalable AWS Infrastructure Using Terraform Modules (VPC, EC2, RDS)


Introduction
Infrastructure as Code (IaC) isn't just a trend — it's a fundamental shift in how we design, build, and scale cloud environments. By treating infrastructure like software, teams can version, test, and deploy it with the same rigor as application code. This means faster delivery, fewer mistakes, and environments you can trust — every single time.
Among the many IaC tools out there, Terraform stands out for its simplicity, power, and cloud-agnostic design. Whether you're deploying a microservice, a full-stack app, or spinning up environments on demand, Terraform enables you to automate it all — securely and scalably.
In this hands-on guide, you’ll learn how to build a production-like AWS infrastructure using modular Terraform code — the same approach used by top DevOps teams worldwide. No fluff. Just clean, real-world infrastructure that gets the job done.
What We'll Build
Here’s a snapshot of the infrastructure we’ll be creating:
✅ A Virtual Private Cloud (VPC) with a custom CIDR block
✅ A public subnet hosting an EC2 instance for web access
✅ A private subnet hosting an RDS MySQL database
✅ Security groups to control access between services
✅ Modular, reusable Terraform code structured for scale
Architecture Overview
              +----------------------+
              |      AWS Region      |
              +----------------------+
                         |
              +----------------------+
              |   VPC  10.0.0.0/16   |
              +----------------------+
                  |               |
   +--------------------+   +--------------------+
   |   Public Subnets   |   |  Private Subnets   |
   |    10.0.1.0/24     |   |    10.0.3.0/24     |
   |    10.0.2.0/24     |   |    10.0.4.0/24     |
   +--------------------+   +--------------------+
            |                         |
      +------------+         +---------------+
      | EC2 (Web)  |         |  RDS (MySQL)  |
      +------------+         +---------------+
        (public)                 (private)
📚 Why Use Modules in Terraform?
Modules make your code reusable, clean, and easy to maintain. Rather than writing everything in a single file, we'll break the project into three modules:
- `vpc`: handles network setup (VPC, subnets, security groups)
- `ec2`: provisions a public EC2 instance
- `rds`: provisions a private RDS instance
Each module lives in its own directory with its own `main.tf`, `variables.tf`, and `outputs.tf`.
🔗 GitHub Repo
👉 https://github.com/neamulkabiremon/terraform-aws-vpc-ec2-rds-setup
Project Structure
terraform-vpc-ec2-rds/
├── main.tf
├── variables.tf
├── outputs.tf
├── providers.tf
├── backend.tf
├── terraform.tfvars
└── modules/
    ├── vpc/
    ├── ec2/
    └── rds/
Prerequisites
Before running `terraform apply`, make sure you've completed the following setup steps. These are essential for secure and successful provisioning of your AWS infrastructure.
1️⃣ Create an SSH Key Pair
You'll need this key to SSH into your EC2 instance after deployment. Without this, you won’t be able to log into your server for debugging or configuration.
Run the following in your terminal:
ssh-keygen -t rsa -b 4096 -f my-ec2-key
aws ec2 import-key-pair \
--key-name "my-ec2-key" \
--public-key-material fileb://my-ec2-key.pub
Where it's used:
- The EC2 module references this via `key_name = "my-ec2-key"` in `main.tf`.
How it helps:
- Ensures secure login access to your EC2 instance using a private key instead of passwords.
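If you prefer to keep everything in Terraform, the key pair can also be managed as a resource instead of imported with the CLI. This is a sketch under assumed names (the `deployer` label and the `.pub` file path are not part of the repo, which uses the CLI import above):

```hcl
# Hypothetical alternative to the CLI import: let Terraform manage the
# key pair. Assumes my-ec2-key.pub sits next to the root configuration.
resource "aws_key_pair" "deployer" {
  key_name   = "my-ec2-key"
  public_key = file("${path.root}/my-ec2-key.pub")
}
```

The EC2 module's `key_name` could then reference `aws_key_pair.deployer.key_name` instead of a hard-coded string.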
2️⃣ Create an SSM Parameter for RDS DB Password
To keep your database credentials secure and out of source code, use AWS SSM Parameter Store to store the password as a `SecureString`.
aws ssm put-parameter \
--name "/prod/rds/db_password" \
--value "securepassword123" \
--type "SecureString"
Where it's used:
Referenced in `main.tf` via:
data "aws_ssm_parameter" "rds_password" {
  name            = "/prod/rds/db_password"
  with_decryption = true
}
How it helps:
- Keeps sensitive information encrypted and safe.
- Enables dynamic access to secrets during infrastructure provisioning.
- Prevents leaking credentials in version control (Git).
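As an extra guard (not shown in the repo's code as listed), the `password` variable in the RDS module can be marked `sensitive` so Terraform redacts it from plan and apply output:

```hcl
# Suggested hardening: redact the password in Terraform CLI output.
variable "password" {
  description = "RDS master password (read from SSM)"
  type        = string
  sensitive   = true
}
```

Note that the value still ends up in the state file, which is one more reason to keep state in an encrypted remote backend.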
3️⃣ Create an S3 Bucket + DynamoDB Table for Terraform Backend
Using a remote backend is crucial for collaborative, consistent, and recoverable infrastructure. It stores your Terraform state file securely and handles locking to avoid simultaneous changes.
aws s3api create-bucket --bucket your-terraform-state-bucket --region us-east-1
aws dynamodb create-table \
--table-name terraform-locks \
--attribute-definitions AttributeName=LockID,AttributeType=S \
--key-schema AttributeName=LockID,KeyType=HASH \
--provisioned-throughput ReadCapacityUnits=1,WriteCapacityUnits=1
Where it's used:
Defined in `backend.tf`:
terraform {
  backend "s3" {
    bucket         = "your-terraform-state-bucket"
    key            = "vpc-ec2-rds/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
How it helps:
- Enables remote storage and locking of the `.tfstate` file.
- Prevents race conditions during concurrent `terraform apply` runs.
- Makes your infrastructure team-ready and production-friendly.
Terraform Configuration Explanation
1. main.tf
data "aws_ssm_parameter" "rds_password" {
name = "/prod/rds/db_password"
with_decryption = true
}
# Call the VPC Module
module "vpc" {
source = "./modules/vpc"
cidr_block = "10.0.0.0/16"
public_subnets = ["10.0.1.0/24", "10.0.2.0/24"]
private_subnets = ["10.0.3.0/24", "10.0.4.0/24"]
}
# Call the EC2 Instance Module
module "ec2" {
source = "./modules/ec2"
ami_id = "ami-00a929b66ed6e0de6" # Example AMI ID, replace with a valid one
instance_type = "t2.micro"
key_name = "my-ec2-key"
vpc_id = module.vpc.vpc_id
allowed_ssh_cidr = "0.0.0.0/0" # open to the world for demo purposes; restrict in production
subnet_id = module.vpc.public_subnet_ids[0] # Use the first public subnet
}
# Call the RDS Module
module "rds" {
source = "./modules/rds"
db_name = "mydb"
username = "admin"
password = data.aws_ssm_parameter.rds_password.value # ✅ Secure password
subnet_ids = module.vpc.private_subnet_ids
vpc_security_group_ids = [module.vpc.rds_sg_id]
}
This is your root Terraform configuration file, and it acts as the control center for provisioning your infrastructure. In this file, we call the modules that actually build your resources:
- `module "vpc"`: provisions the VPC, public and private subnets, and a security group for RDS.
- `module "ec2"`: provisions an EC2 instance in a public subnet.
- `module "rds"`: provisions an RDS MySQL database in the private subnets, reachable only from the public subnets (where the EC2 instance lives).
output "ec2_public_ip" {
description = "Public IP of the EC2 instance"
value = module.ec2.instance_public_ip
}
output "rds_endpoint" {
description = "The endpoint of the RDS instance"
value = module.rds.db_endpoint
}
output "vpc_id" {
description = "VPC ID"
value = module.vpc.vpc_id
}
output "public_subnet" {
value = module.vpc.public_subnet_ids[0]
}
You can output useful info like:
- VPC and subnet IDs
- EC2 public IP (to SSH into the instance)
- RDS endpoint (to connect your app to the database)
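These outputs also make the stack consumable by other configurations. As a sketch (the data source label `network` is an assumption, not part of the repo), another Terraform project can read them straight from the shared S3 state:

```hcl
# Hypothetical consumer: read this stack's outputs from the S3 backend.
data "terraform_remote_state" "network" {
  backend = "s3"
  config = {
    bucket = "your-terraform-state-bucket"
    key    = "vpc-ec2-rds/terraform.tfstate"
    region = "us-east-1"
  }
}

# Then reuse, for example:
# vpc_id = data.terraform_remote_state.network.outputs.vpc_id
```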
terraform {
backend "s3" {
bucket = "your-terraform-state-bucket"
key = "vpc-ec2-rds/terraform.tfstate"
region = "us-east-1"
dynamodb_table = "terraform-locks"
encrypt = true
}
}
provider "aws" {
region = "us-east-1"
}
terraform {
required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.0"
}
}
required_version = ">= 1.0.0"
}
2. modules/vpc
resource "aws_vpc" "main" {
cidr_block = var.cidr_block
enable_dns_hostnames = true
tags = {
Name = "main-vpc"
}
}
resource "aws_subnet" "public" {
count = length(var.public_subnets)
vpc_id = aws_vpc.main.id
cidr_block = var.public_subnets[count.index]
availability_zone = element(["us-east-1a", "us-east-1b"], count.index)
map_public_ip_on_launch = true
tags = {
Name = "public-subnet-${count.index}"
}
}
resource "aws_subnet" "private" {
count = length(var.private_subnets)
vpc_id = aws_vpc.main.id
cidr_block = var.private_subnets[count.index]
availability_zone = element(["us-east-1a", "us-east-1b"], count.index) # Change as needed
tags = {
Name = "private-subnet-${count.index}"
}
}
resource "aws_security_group" "rds" {
name = "rds-sg"
description = "Allow MySQL access"
vpc_id = aws_vpc.main.id
ingress {
from_port = 3306
to_port = 3306
protocol = "tcp"
cidr_blocks = var.public_subnets
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "rds-sg"
}
}
# Create an Internet Gateway
resource "aws_internet_gateway" "igw" {
vpc_id = aws_vpc.main.id
tags = {
Name = "main-igw"
}
}
# Create a Route Table for Public Subnets
resource "aws_route_table" "public" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.igw.id
}
tags = {
Name = "public-route-table"
}
}
# Associate Public Subnets with Route Table
resource "aws_route_table_association" "public" {
count = length(var.public_subnets)
subnet_id = aws_subnet.public[count.index].id
route_table_id = aws_route_table.public.id
}
output "public_subnet_ids" {
description = "List of public subnet IDs"
value = aws_subnet.public[*].id
}
output "private_subnet_ids" {
description = "List of private subnet IDs"
value = aws_subnet.private[*].id
}
output "vpc_id" {
description = "The VPC ID"
value = aws_vpc.main.id
}
output "rds_sg_id" {
description = "RDS Security Group ID"
value = aws_security_group.rds.id
}
variable "cidr_block" {
description = "CIDR block for the VPC"
type = string
}
variable "public_subnets" {
description = "List of public subnet CIDR blocks"
type = list(string)
}
variable "private_subnets" {
description = "List of private subnet CIDR blocks"
type = list(string)
}
This module creates:
- A VPC with a given CIDR block (`10.0.0.0/16`)
- Two public subnets for EC2 (`10.0.1.0/24`, `10.0.2.0/24`)
- Two private subnets for RDS (`10.0.3.0/24`, `10.0.4.0/24`)
- A security group for RDS that allows MySQL access from the public subnets
- An Internet Gateway and a public route table so the public subnets can reach the internet
✅ Real-life concept: VPC/subnet separation is a best practice for isolating public-facing and internal resources.
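Note that, as written, the private subnets have no route to the internet, which is fine for RDS. If instances placed there ever need outbound access (patches, package installs), a NAT Gateway can be added to the module. A sketch, with assumed resource names:

```hcl
# Optional extension: outbound-only internet access for private subnets
# via a NAT Gateway placed in the first public subnet.
resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public[0].id
}

resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }
}

resource "aws_route_table_association" "private" {
  count          = length(var.private_subnets)
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private.id
}
```

NAT Gateways incur an hourly charge, so for a short-lived lab like this one you can safely leave them out.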
3. modules/ec2
resource "aws_security_group" "ec2_sg" {
name = "ec2-ssh-sg"
description = "Allow SSH from trusted IP"
vpc_id = var.vpc_id
ingress {
description = "SSH access from trusted IP"
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = [var.allowed_ssh_cidr]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "ec2-ssh-sg"
}
}
resource "aws_instance" "web" {
ami = var.ami_id
instance_type = var.instance_type
subnet_id = var.subnet_id
key_name = var.key_name
vpc_security_group_ids = [aws_security_group.ec2_sg.id]
user_data = <<-EOF
#!/bin/bash
# Amazon Linux 2023 uses dnf; mariadb105 provides a MySQL-compatible client
dnf update -y
dnf install -y mariadb105
echo "MySQL client installed successfully!" > /home/ec2-user/mysql-installed.log
EOF
tags = {
Name = "web-instance"
}
}
variable "ami_id" {
description = "The AMI ID to use for the EC2 instance"
type = string
}
variable "instance_type" {
description = "The instance type to use for the EC2 instance"
type = string
}
variable "vpc_id" {
description = "The VPC ID where the EC2 security group will be created"
type = string
}
variable "key_name" {
description = "SSH key name for EC2"
type = string
}
variable "subnet_id" {
description = "The subnet ID to launch the EC2 instance in"
type = string
}
variable "allowed_ssh_cidr" {
description = "CIDR block allowed to SSH into EC2"
type = string
}
output "instance_public_ip" {
description = "Public IP of the EC2 instance"
value = aws_instance.web.public_ip
}
output "security_group_id" {
description = "EC2 security group ID"
value = aws_security_group.ec2_sg.id
}
This module provisions:
- One EC2 instance (Amazon Linux 2023, or any custom AMI you pass in)
- Located in a public subnet
- Usable as a web server, or as a bastion host for SSHing into the private subnets (if needed)
✅ Real-life concept: Apps/servers go in public subnets, but connect securely to private databases.
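One hedged suggestion: the root `main.tf` above passes `allowed_ssh_cidr = "0.0.0.0/0"`, which opens SSH to the world. For anything beyond a quick demo, pass your own address instead (the IP below is a documentation placeholder, not a real value from the repo):

```hcl
# Same module call as in the root main.tf, but with SSH locked down
# to a single workstation IP instead of 0.0.0.0/0.
module "ec2" {
  source           = "./modules/ec2"
  ami_id           = "ami-00a929b66ed6e0de6" # example AMI, replace with a valid one
  instance_type    = "t2.micro"
  key_name         = "my-ec2-key"
  vpc_id           = module.vpc.vpc_id
  subnet_id        = module.vpc.public_subnet_ids[0]
  allowed_ssh_cidr = "203.0.113.10/32" # your workstation's IP
}
```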
4. modules/rds
resource "aws_db_subnet_group" "main" {
name = "main-db-subnet-group"
subnet_ids = var.subnet_ids
tags = {
Name = "db-subnet-group"
}
}
resource "aws_db_instance" "main" {
allocated_storage = 20
engine = "mysql"
engine_version = "8.0"
instance_class = "db.t3.micro"
db_name = var.db_name
username = var.username
password = var.password
db_subnet_group_name = aws_db_subnet_group.main.name
vpc_security_group_ids = var.vpc_security_group_ids
skip_final_snapshot = true
tags = {
Name = "my-db-instance"
}
}
variable "db_name" {
type = string
}
variable "username" {
type = string
}
variable "password" {
type = string
}
variable "subnet_ids" {
type = list(string)
}
variable "vpc_security_group_ids" {
type = list(string)
}
output "db_endpoint" {
description = "The endpoint of the RDS instance"
value = aws_db_instance.main.endpoint
}
This module provisions:
- An RDS MySQL database instance
- A DB subnet group spanning the private subnets
- Network access controlled by the security group passed in from the VPC module (MySQL port 3306 only, from the public subnets)
✅ Real-life concept: Never expose RDS to the internet. Always use private subnets and secure SGs.
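A common tightening is to allow MySQL from the EC2 security group rather than from whole subnet CIDR ranges; the EC2 module already exports `security_group_id` for exactly this kind of wiring. A sketch for the root `main.tf` (this rule is an addition, not part of the repo; you would also drop the CIDR-based ingress from the VPC module):

```hcl
# Sketch: SG-to-SG rule wired from the two modules' outputs in the root
# configuration, avoiding a cycle between the vpc and ec2 modules.
resource "aws_security_group_rule" "mysql_from_ec2" {
  type                     = "ingress"
  from_port                = 3306
  to_port                  = 3306
  protocol                 = "tcp"
  security_group_id        = module.vpc.rds_sg_id
  source_security_group_id = module.ec2.security_group_id
}
```

With this in place, only traffic originating from the EC2 instance's security group can reach the database, regardless of which subnet it comes from.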
Deploy & Test Your Infrastructure
Now that all modules and configurations are ready, it’s time to deploy your infrastructure and verify everything works as expected.
✅ Step 1: Initialize Terraform
This installs the required providers and sets up your backend configuration.
terraform init
If using a remote backend (S3 + DynamoDB), make sure your bucket and table already exist.
✅ Step 2: Review the Plan
This will show what resources Terraform will create or modify.
terraform plan
Carefully review the output to ensure everything looks correct before proceeding.
✅ Step 3: Apply the Changes
Run the following command to provision your AWS infrastructure:
terraform apply -auto-approve
Copy the EC2 public IP from the outputs; you'll need it for the SSH connection.
Test & Verify
Once the deployment is complete, you can verify your setup using the outputs:
1. SSH into the EC2 Instance
chmod 400 my-ec2-key
ssh -i my-ec2-key ec2-user@<ec2_public_ip>
Replace `<ec2_public_ip>` with the value from the `ec2_public_ip` output.
Inside the instance, you can check the log file at `/home/ec2-user/mysql-installed.log` to confirm the MySQL client is installed.
2. Connect to the RDS MySQL Database
From your EC2 instance:
mysql -h <rds_endpoint> -u admin -p
Example:
mysql -h terraform-20250411090733095000000001.cdqgsvasaakx.us-east-1.rds.amazonaws.com -u admin -p
When prompted, enter the password you stored in SSM (`securepassword123` in this example).
If you see the MySQL CLI, you're in! 🎉
✅ 1. If the Connection Is Successful
You'll see the MySQL welcome banner, followed by the `mysql>` prompt:
mysql>
That means you're inside the RDS MySQL server now. Connection successful! 🎉
✅ 2. Check Existing Databases
At the `mysql>` prompt, run:
SHOW DATABASES;
You'll get a list of the default system databases.
✅ 3. Create a Test Database (optional)
CREATE DATABASE testdb;
USE testdb;
CREATE TABLE hello (id INT PRIMARY KEY, message VARCHAR(100));
INSERT INTO hello VALUES (1, 'Hello from Terraform RDS!');
SELECT * FROM hello;
SHOW DATABASES;
This will confirm you can read/write to the database.
3. Clean Up (Optional)
When you're done testing, destroy all infrastructure to avoid AWS charges:
exit
terraform destroy --auto-approve
🎯 Final Thoughts
You've just walked through building a scalable, secure, and production-grade AWS environment using Terraform — all broken down into clean, reusable modules. This is more than just an exercise; it's a real-world approach used by leading DevOps and cloud teams to manage infrastructure efficiently and reliably.
By adopting Infrastructure as Code, you've taken a big step toward automating the cloud with confidence.
If You Found This Valuable:
⭐ Star the GitHub repo — it helps others discover the project
🔁 Share it with your team or network
💬 Drop your questions, feedback, or ideas — I’d love to hear them!
🔗 GitHub Repo
👉 https://github.com/neamulkabiremon/terraform-aws-vpc-ec2-rds-setup
🤝 Let’s Stay Connected
If you’re passionate about DevOps, Terraform, or cloud engineering, let’s connect and grow together:
🔗 LinkedIn
🐦 Twitter
Written by Neamul Kabir Emon
Hi! I'm a highly motivated Security and DevOps professional with 7+ years of combined experience. My expertise bridges penetration testing and DevOps engineering, allowing me to deliver a comprehensive security approach.