AWS VPC with Terraform (EC2 + ALB)

Prerequisites
Before diving into this blog, you should have a basic understanding of how to create a VPC manually using the AWS Management Console. For reference, you can check out this blog.
Content of this blog
Provider Setup
VPC Setup
Subnet Configuration
Internet Gateway Setup
Route Table & Associations
Security Group Setup
Launching EC2 Instances
Application Load Balancer (ALB)
Target Group & Attachments
Load Balancer Listener
Output: Load Balancer DNS
In this blog, we’ll walk step by step through setting up this AWS infrastructure using Terraform, and by the end, you’ll have the complete setup running.
In this architecture, we will:
Set up a VPC (Virtual Private Cloud) as the networking foundation.
Create subnets inside the VPC to logically separate resources.
Launch EC2 instances inside those subnets.
Place a Load Balancer in front of the EC2 instances to distribute traffic evenly.
This way, we’ll have a fully functional infrastructure with a secure network, compute resources, and traffic management.
1. Provider Setup
Before setting up our VPC, we first need to configure a provider. Terraform supports many providers such as AWS, Azure, and GCP, but in this blog we'll be working with the AWS provider to define our infrastructure.
provider "aws" {
  region = "ap-south-1"
}
In the above code, we’ve specified AWS as the provider and set the region to ap-south-1. You can replace this with your preferred AWS region, based on where you want to deploy your resources. As a good practice, we generally create a separate file named provider.tf to define the provider configuration, which helps keep the Terraform code clean and organized.
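As an optional addition (a common convention, not required for this walkthrough), provider.tf can also pin the provider version with a terraform block, so everyone running the code downloads a compatible AWS provider release:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # any 5.x release; adjust to the version you tested with
    }
  }
}

provider "aws" {
  region = "ap-south-1"
}
```

Without a version constraint, terraform init simply grabs the latest provider, which can occasionally introduce breaking changes between runs.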
2. VPC Setup
After completing the provider setup, we will now move on to creating the VPC. From this point onward, all our configurations will be written inside the main.tf file.
resource "aws_vpc" "myvpc" {
  cidr_block = "10.0.0.0/16"
}
In the above code, we’ve created a resource block for a new VPC.
A resource block is used to define a piece of infrastructure (like VPCs, subnets, EC2 instances, etc.) that Terraform will manage.
aws_vpc → tells Terraform we are creating an AWS VPC resource.
"myvpc" → the local name (a Terraform identifier), not the actual AWS name. We’ll use this reference in other resources later.
cidr_block = "10.0.0.0/16" → defines the IP range for the VPC, which can hold 65,536 IP addresses.
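As a side note, aws_vpc also exposes DNS settings that are often enabled alongside the CIDR block. This is an optional variant, not part of the minimal setup above:

```hcl
resource "aws_vpc" "myvpc" {
  cidr_block = "10.0.0.0/16"

  # Optional: enable internal DNS resolution and DNS hostnames
  # for instances launched in this VPC.
  enable_dns_support   = true
  enable_dns_hostnames = true
}
```

Both attributes default to values that work for this tutorial, so you only need them if you rely on DNS hostnames inside the VPC.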
3. Subnet Configuration
In this step, we will create subnets inside the VPC that we configured earlier. These subnets will host our EC2 instances later.
resource "aws_subnet" "sub1" {
  vpc_id                  = aws_vpc.myvpc.id
  cidr_block              = "10.0.0.0/24"
  availability_zone       = "ap-south-1a"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "sub2" {
  vpc_id                  = aws_vpc.myvpc.id
  cidr_block              = "10.0.1.0/24"
  availability_zone       = "ap-south-1b"
  map_public_ip_on_launch = true
}
Here, we are creating two subnets (sub1 and sub2) inside our VPC. These will be used to launch EC2 instances later.
aws_subnet → tells Terraform we are creating an AWS subnet resource.
vpc_id = aws_vpc.myvpc.id → links the subnet to our previously created VPC (myvpc). We reference the VPC by its Terraform local name and extract its id.
cidr_block = "10.0.0.0/24" → defines the IP address range for the subnet. A /24 block allows 256 IPs (251 usable after AWS reserves a few).
availability_zone = "ap-south-1a" → we’ve spread the subnets across two different AZs (ap-south-1a and ap-south-1b) to improve high availability.
map_public_ip_on_launch = true → ensures that any EC2 instance launched in this subnet will automatically get a public IP, making it internet-accessible.
4. Internet Gateway Setup
In this step, we are going to configure an Internet Gateway (IGW). This will allow our VPC and its resources (like EC2 instances in public subnets) to access the internet.
resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.myvpc.id
}
aws_internet_gateway → tells Terraform we are creating an Internet Gateway resource.
vpc_id = aws_vpc.myvpc.id → attaches the Internet Gateway to our previously created VPC.
Through this Internet Gateway, any subnet that is associated with a route table pointing to this IGW will have internet connectivity.
5. Route Table & Associations
In this step, we are going to create a Route Table that defines how traffic flows within our VPC.
First, we will create a route table and add a route that sends all outbound traffic (0.0.0.0/0) to the Internet Gateway. Then, we will associate this route table with our subnets, so that resources inside those subnets can access the internet.
resource "aws_route_table" "RT" {
  vpc_id = aws_vpc.myvpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

resource "aws_route_table_association" "rta1" {
  subnet_id      = aws_subnet.sub1.id
  route_table_id = aws_route_table.RT.id
}

resource "aws_route_table_association" "rta2" {
  subnet_id      = aws_subnet.sub2.id
  route_table_id = aws_route_table.RT.id
}
aws_route_table → creates a route table in our VPC.
route { cidr_block = "0.0.0.0/0" } → means “send all outbound traffic to the internet.”
gateway_id = aws_internet_gateway.igw.id → points that traffic to our Internet Gateway.
aws_route_table_association → attaches this route table to our subnets, enabling internet connectivity for them.
6. Security Group Setup
In this step, we are going to create a Security Group inside our VPC.
A Security Group acts as a virtual firewall for your EC2 instances, controlling both inbound (ingress) and outbound (egress) traffic.
First, we will create a Security Group within our custom VPC.
Then, we will add inbound rules to allow HTTP (port 80) and SSH (port 22).
Finally, we will add an outbound rule to allow all traffic so the EC2 instances can reach the internet.
resource "aws_security_group" "websg" {
  name   = "web"
  vpc_id = aws_vpc.myvpc.id

  ingress {
    description = "HTTP from anywhere"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
aws_security_group → creates the Security Group inside our VPC.
Inbound (ingress) rules:
Port 80 (HTTP) → allows web traffic so we can access the application in the browser.
Port 22 (SSH) → allows us to log in to the EC2 instance remotely.
Outbound (egress) rule:
protocol = "-1" with 0.0.0.0/0 → allows all outgoing traffic, so the EC2 instances can reach the internet (e.g., download updates, connect to APIs).
7. Launching EC2 Instances
Now we will launch two EC2 instances in our VPC, each placed in a different subnet.
We’ll also attach the security group created earlier and provide user data scripts so that each instance serves a simple HTML page.
resource "aws_instance" "webserver1" {
  ami                    = "ami-0f918f7e67a3323f0"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.websg.id]
  subnet_id              = aws_subnet.sub1.id
  user_data_base64       = base64encode(file("userdata.sh"))
}

resource "aws_instance" "webserver2" {
  ami                    = "ami-0f918f7e67a3323f0"
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.websg.id]
  subnet_id              = aws_subnet.sub2.id
  user_data_base64       = base64encode(file("userdata1.sh"))
}
ami → the Amazon Machine Image ID to use. I used an Ubuntu AMI. AMIs are region-specific; you can find one from the EC2 Console when launching an instance manually, or follow the official AWS guide: Finding an AMI.
instance_type → here we used t2.micro because it is Free Tier eligible and good for learning/demo purposes.
vpc_security_group_ids → attaches the Security Group we created earlier (written in list format since multiple SGs can be attached).
subnet_id → each instance is placed in a different subnet for redundancy.
user_data_base64 → a startup script encoded in base64. We pass two different scripts (userdata.sh and userdata1.sh) so each instance serves a different HTML page. Later, when we configure the Load Balancer, this helps us verify which backend instance is responding.
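Since AMI IDs are region-specific and the contents of userdata.sh aren’t shown above, here is a hedged sketch of an alternative (assuming Ubuntu 22.04 and Apache; the filter values and the inline script are my own, not from the original setup). It looks the AMI up dynamically and inlines the startup script instead of reading it from a file:

```hcl
# Look up the latest Ubuntu 22.04 AMI in the current region,
# so the config works outside ap-south-1 too.
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical's AWS account ID

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

resource "aws_instance" "webserver1" {
  ami                    = data.aws_ami.ubuntu.id
  instance_type          = "t2.micro"
  vpc_security_group_ids = [aws_security_group.websg.id]
  subnet_id              = aws_subnet.sub1.id

  # Inline equivalent of file("userdata.sh"): install Apache and
  # serve a page that identifies this backend instance.
  user_data_base64 = base64encode(<<-EOF
    #!/bin/bash
    apt-get update -y
    apt-get install -y apache2
    echo "<h1>Hello from webserver1</h1>" > /var/www/html/index.html
    EOF
  )
}
```

The key point is the echo line: giving each instance a distinct page is what lets you see the load balancer alternating between backends later.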
8. Application Load Balancer (ALB)
In this step, we are going to configure an Application Load Balancer (ALB) to distribute incoming traffic equally across our two EC2 instances.
resource "aws_lb" "myalb" {
  name               = "myalb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.websg.id]
  subnets            = [aws_subnet.sub1.id, aws_subnet.sub2.id]

  tags = {
    Name = "web"
  }
}
Created an ALB of type application (there are also other types, such as Network and Gateway Load Balancers).
Attached it to both subnets so it can reach EC2 instances in different Availability Zones.
Associated it with the security group we created earlier, so the load balancer can accept traffic on the allowed ports.
9. Target Group & Attachments
In this step, we will create a Target Group and attach our EC2 instances to it. This allows the Load Balancer to know where to forward the incoming traffic.
resource "aws_lb_target_group" "tg" {
  name     = "myTG"
  port     = 80
  protocol = "HTTP"
  vpc_id   = aws_vpc.myvpc.id

  health_check {
    path = "/"
    port = "traffic-port"
  }
}

resource "aws_lb_target_group_attachment" "attach1" {
  target_group_arn = aws_lb_target_group.tg.arn
  target_id        = aws_instance.webserver1.id
  port             = 80
}

resource "aws_lb_target_group_attachment" "attach2" {
  target_group_arn = aws_lb_target_group.tg.arn
  target_id        = aws_instance.webserver2.id
  port             = 80
}
Target Group (aws_lb_target_group) → defines where the Load Balancer should send traffic. We’ve set it to accept traffic on port 80 (HTTP).
Health check → configured on path / so the ALB continuously monitors the health of our EC2 instances.
Target Group attachments → we attach both EC2 instances (webserver1 and webserver2) to the target group so they can receive traffic from the ALB.
10. Load Balancer Listener
After creating the Load Balancer and configuring the Target Group with its attachments, the next step is to set up a Listener.
The Listener is responsible for monitoring a specific port and instructing the Load Balancer where to forward incoming traffic. Without a Listener, the Load Balancer has no way of knowing how to route client requests.
resource "aws_lb_listener" "listener" {
  load_balancer_arn = aws_lb.myalb.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.tg.arn
  }
}
We attach the Listener to our Load Balancer (load_balancer_arn).
It listens on port 80 (HTTP).
The default action is to forward traffic to our Target Group, which then routes requests to the healthy EC2 instances.
11. Output: Load Balancer DNS
In our Terraform setup so far, we haven’t made use of variables or outputs for simplicity. As an assignment, you can try refactoring this setup to use variables (for VPC IDs, CIDR blocks, instance types, etc.) and outputs for cleaner, reusable infrastructure code.
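As a starting point for that refactor, a variable might look like this (a hedged sketch; the name vpc_cidr is my own, not from the original setup):

```hcl
# variables.tf: declare the value once, with a sensible default.
variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  type        = string
  default     = "10.0.0.0/16"
}

# main.tf: reference it instead of hard-coding the CIDR.
resource "aws_vpc" "myvpc" {
  cidr_block = var.vpc_cidr
}
```

The same pattern applies to subnet CIDRs, availability zones, AMI IDs, and instance types.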
Here, we’ll add an output variable that gives us the DNS name of the Load Balancer. This will allow us to easily copy the endpoint and test our deployed webpage.
output "loadbalancerdns" {
  value = aws_lb.myalb.dns_name
}
This will print the DNS name of the Application Load Balancer in your terminal after terraform apply.
You can then open the DNS URL in your browser to check if your web page is served correctly by the EC2 instances behind the ALB.
Provisioning Resources with Terraform
Now that we are finally done with writing the Terraform file, the next step is to run the following commands:
terraform init → initializes the working directory and downloads the required provider plugins.
terraform fmt → formats the Terraform code to maintain proper style and readability.
terraform plan → previews the execution plan and shows what resources will be created/updated.
terraform apply → applies the configuration and creates the resources in your AWS account.
After running terraform apply, Terraform will provision all resources.
Once completed, you will get the Load Balancer DNS as an output.
Copy this DNS and open it in your browser to test the setup.
Try refreshing the page multiple times or opening it in different tabs.
You should notice the Load Balancer distributing traffic between the two EC2 instances — showing different screens depending on which instance serves the request.
Once you’re done with your project, don’t forget to clean up by running terraform destroy to delete all the resources you created.
And that’s it! You’ve just built a highly available setup with Terraform, a load balancer, and multiple EC2 instances. Check out the full project on GitHub.
Thank you for reading!

Written by Jeet Kansagra