πŸš€ Day 06 – Building Your Own VPC, Deploying EC2, and Running Nginx with Terraform

Abdul Raheem
4 min read

Today’s topic is one of the most exciting milestones in my Terraform journey:
πŸ‘‰ We are no longer using AWS defaults. Instead, we’re building our own Virtual Private Cloud (VPC), launching an EC2 instance inside it, and running a web server (Nginx) automatically using User Data.

This is a big step toward real-world infrastructure automation.


πŸ—οΈ What We’re Building

  • A VPC with private and public subnets.

  • An Internet Gateway and a Route Table for connectivity.

  • A Security Group (like a firewall) to control traffic.

  • An EC2 instance that runs Nginx automatically.

  • Terraform outputs that give us the instance’s public IP and a ready-to-use URL.


πŸ“‚ Project Structure

To keep the code clean and maintainable, I split the resources across separate files:

β”œβ”€β”€ main.tf
β”œβ”€β”€ providers.tf
β”œβ”€β”€ vpc.tf
β”œβ”€β”€ ec2.tf
β”œβ”€β”€ security-group.tf
└── outputs.tf

This makes the project modular and easy to maintain.


βš™οΈ Provider Configuration

providers.tf

provider "aws" {
  region = "ap-south-1"
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "6.9.0"
    }
  }
}

Here we specify:

  • AWS provider β†’ tells Terraform we’re working with AWS.

  • Region β†’ Mumbai (ap-south-1).

  • Provider version β†’ locks the AWS provider to prevent compatibility issues.


🌐 VPC Setup

vpc.tf

# Create VPC
resource "aws_vpc" "my_vpc" {
  cidr_block = "10.0.0.0/16"
  tags = { Name = "my_vpc" }
}

# Private subnet
resource "aws_subnet" "private-subnet" {
  cidr_block = "10.0.1.0/24"
  vpc_id     = aws_vpc.my_vpc.id
  tags = { Name = "private-subnet" }
}

# Public subnet
resource "aws_subnet" "public-subnet" {
  cidr_block              = "10.0.2.0/24"
  vpc_id                  = aws_vpc.my_vpc.id
  map_public_ip_on_launch = true
  tags = { Name = "public-subnet" }
}

# Internet gateway
resource "aws_internet_gateway" "my-igw" {
  vpc_id = aws_vpc.my_vpc.id
  tags = { Name = "my-igw" }
}

# Routing table
resource "aws_route_table" "my-rt" {
  vpc_id = aws_vpc.my_vpc.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.my-igw.id
  }
}

resource "aws_route_table_association" "public-sub" {
  route_table_id = aws_route_table.my-rt.id
  subnet_id      = aws_subnet.public-subnet.id
}

πŸ‘‰ This builds a custom network instead of relying on AWS defaults.

  • CIDR block 10.0.0.0/16 β†’ IP range of the VPC.

  • Subnets β†’ one private, one public.

  • Internet Gateway + Route Table β†’ gives internet access to the public subnet.


πŸ” Security Groups (Your Virtual Firewall)

security-group.tf

resource "aws_security_group" "nginx-sg" {
  vpc_id = aws_vpc.my_vpc.id

  # Inbound rule for HTTP
  ingress {
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Outbound rule
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }

  tags = { Name = "nginx-sg" }
}

πŸ“ Explanation:

  • Ingress rule β†’ allows anyone on the internet (0.0.0.0/0) to access port 80 (HTTP).

  • Egress rule β†’ allows the instance to connect to the internet (needed for downloading Nginx).

πŸ‘‰ Think of a Security Group like a bouncer at a nightclub:

  • Ingress = who is allowed in.

  • Egress = who can go out.

Without it, your server is either too locked down (nobody gets in) or too open (anyone can attack).
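
If you later need SSH access for debugging, a common pattern is to open port 22 only to your own address instead of the whole internet. A sketch (203.0.113.10 is a placeholder IP, not part of this project):

```hcl
  # Inbound rule for SSH, limited to one admin IP (placeholder address)
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.10/32"]
  }
```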


πŸ–₯️ EC2 Instance with Nginx

ec2.tf

resource "aws_instance" "nginxserver" {
  ami                         = "ami-0144277607031eca2" # Amazon Linux 2 AMI
  instance_type               = "t2.micro"
  subnet_id                   = aws_subnet.public-subnet.id
  vpc_security_group_ids      = [aws_security_group.nginx-sg.id]
  associate_public_ip_address = true

  user_data = <<-EOF
              #!/bin/bash
              # On Amazon Linux 2, nginx lives in the amazon-linux-extras repository
              sudo amazon-linux-extras install nginx1 -y
              sudo systemctl enable nginx
              sudo systemctl start nginx
              EOF

  tags = { Name = "NginxServer" }
}

πŸ“ Explanation:

  • AMI β†’ Amazon Linux 2 image.

  • t2.micro β†’ free-tier instance type.

  • Subnet β†’ launches in the public subnet.

  • Security Group β†’ attaches firewall rules.


⚑ User Data (Automation at Boot)

The magic happens here:

#!/bin/bash
sudo amazon-linux-extras install nginx1 -y
sudo systemctl enable nginx
sudo systemctl start nginx

This script runs once, at first boot:

  1. Installs Nginx (on Amazon Linux 2 it ships via the amazon-linux-extras repository, so a plain yum install won’t find it).

  2. Enables and starts the web server, so it also survives a reboot.

πŸ‘‰ Imagine User Data as a chef who prepares the food before the restaurant opens.
When you walk in (visit the IP), the meal (website) is already ready.

This makes deployments automatic, repeatable, and scalable.


πŸ“€ Outputs

outputs.tf

output "instance_public_ip" {
  description = "The public IP address of the EC2 instance"
  value       = aws_instance.nginxserver.public_ip
}

output "instance_url" {
  description = "The URL to access the Nginx server"
  value       = "http://${aws_instance.nginxserver.public_ip}"
}

After terraform apply, you instantly see:

βœ… EC2 Public IP
βœ… Ready-to-use URL (http://<public_ip>)

No guessing, no console copy-pasting.


πŸŽ‰ Final Result

  • A custom VPC with proper networking.

  • An EC2 instance running Nginx automatically.

  • Security groups managing inbound and outbound traffic.

  • Cleanly separated Terraform code for clarity.

  • Outputs giving you a working website link immediately.

This is how real-world DevOps infrastructure is built:

  β€’ Secure

  β€’ Automated

  β€’ Modular


πŸ”— Follow My Journey

πŸ“– Blogs: Hashnode
πŸ’» Code: GitHub
🐦 Updates: X (Twitter)


Written by Abdul Raheem

Cloud DevOps | AWS | Terraform | CI/CD | Obsessed with clean infrastructure. Cloud DevOps Engineer πŸš€ | Automating Infrastructure & Securing Pipelines | Bridging Gaps Between Code and Cloud ☁️ I’m on a mission to master DevOps from the ground upβ€”building scalable systems, automating workflows, and integrating security into every phase of the SDLC. Currently working with AWS, Terraform, Docker, CI/CD, and learning the art of cloud-native development.