Scalable Web Infrastructure with Terraform, EC2 Autoscaling, ALB, NGINX on AWS, and Remote State Management with Terraform Cloud.

Welcome to a new Project in my DevOps Journey Series, where we move beyond basic deployments and start building scalable, fault-tolerant infrastructure using Terraform and AWS.

In this project, we’ll provision a production-grade web environment on AWS that:

Automatically scales EC2 instances based on demand.

Balances traffic across instances using an Application Load Balancer (ALB)

Deploys NGINX on each instance to serve traffic

Pushes our clean code to GitHub

Moves our Terraform state to Terraform Cloud

Every line of Terraform code, configuration file, and deployment strategy in this project has been co-designed, reviewed, and generated with the help of my DevOps partner ChatGPT (acting as my senior cloud architect). This means the architecture, scripts, and flow have been:

i. Verified for accuracy

ii. Explained in plain English

iii. Built for beginner-friendly learning and real-world DevOps interviews

Whether you’re new to Terraform or brushing up for a DevOps role, this guide will give you hands-on experience deploying a cloud-based web stack that scales, balances traffic, and automates server configuration.

Prerequisites

To follow along with this project or deploy it yourself, you’ll need the following:

Local Setup

i. VS Code installed on your local host

ii. HashiCorp Terraform extension installed in VS Code

iii. AWS CLI v2+ installed and configured

Basic knowledge of: Terraform syntax and workflow (init, plan, apply), and AWS networking (VPC, subnets, EC2)

AWS Resources Required

Make sure your AWS account has the following already set up:

i. A key pair (for SSH access to EC2)

ii. An existing VPC (you can use the default or a custom one)

iii. At least 2 public subnets in the same region (for ALB and autoscaling group)

iv. IAM permissions to create EC2, ALB, Launch Templates, ASGs, etc.

Files We’ll Work With:

i. main.tf – defines all resources: ALB, ASG, EC2, NGINX, etc.

ii. variables.tf – declares configurable inputs

iii. terraform.tfvars – supplies real values (e.g., AMI ID, subnets)

iv. outputs.tf – displays DNS name and other key info

v. README.md – for project documentation

State Management with Terraform Cloud

In this project, we’re not keeping our Terraform state file (terraform.tfstate) locally. Instead, we’re using Terraform Cloud to store it remotely.

Why?

It’s secure (no lost or corrupted .tfstate files)

It supports team collaboration

It enables safe concurrent operations

This means every apply, plan, or destroy action is tracked, versioned, and safe from local machine crashes.

What You’ll Learn

By completing this project, you’ll understand how to:

i. Use Terraform to automate complex AWS deployments

ii. Configure Autoscaling Groups with Launch Templates

iii. Set up Application Load Balancers and Target Groups

iv. Use user_data to install and configure NGINX automatically on launch

v. Manage reusable and scalable infrastructure as code

vi. Migrate Terraform state to Terraform Cloud

Theory Corner: DevOps Concepts Behind the Project

  1. Launch Template

Think of a launch template like a recipe. It tells AWS exactly how to create an EC2 instance, what AMI to use, what instance type, what security group, what startup script, etc.

Instead of manually setting up each EC2 instance, we give AWS a template and say, “Create copies like this whenever needed.”

In this project, our launch template installs NGINX automatically using a shell script (user_data).
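As an illustrative sketch (resource and variable names here are assumptions, not necessarily the exact ones in this project's main.tf), a launch template that bootstraps NGINX via user_data looks roughly like this:

```hcl
# Illustrative sketch -- names and variables are assumptions, not the
# project's exact main.tf contents.
resource "aws_launch_template" "web" {
  name_prefix   = "web-"
  image_id      = var.ami_id        # Amazon Linux 2
  instance_type = var.instance_type
  key_name      = var.key_name

  # user_data must be base64-encoded inside a launch template
  user_data = base64encode(<<-EOF
    #!/bin/bash
    amazon-linux-extras install -y nginx1
    systemctl enable --now nginx
    sleep 5   # let NGINX start and create its default html directory
    echo "<h1>Hello from Terraform + NGINX</h1>" > /usr/share/nginx/html/index.html
  EOF
  )
}
```

The ASG references this template, so every instance it launches comes up with NGINX already serving traffic.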

  2. Auto Scaling Group (ASG)

An Auto Scaling Group is like a smart team manager for EC2. It decides:

How many servers should be running

When to add more (e.g., during traffic spikes)

When to remove some (e.g., during low usage)

You define min, max, and desired capacity, and AWS handles the rest.

Here, we tell it to always keep 2 instances running, and allow scaling up to 3 or down to 1.
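Sketched in Terraform (resource names are assumptions), that 2/3/1 capacity policy maps directly onto the Auto Scaling Group arguments:

```hcl
# Illustrative sketch of the ASG described above; names are assumptions.
resource "aws_autoscaling_group" "web" {
  desired_capacity    = 2               # always aim for 2 instances
  min_size            = 1               # allow scaling down to 1
  max_size            = 3               # allow scaling up to 3
  vpc_zone_identifier = var.subnet_ids  # the 2 public subnets

  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }

  # Register new instances with the ALB's target group and let the
  # load balancer's health checks decide when to replace an instance
  target_group_arns = [aws_lb_target_group.web.arn]
  health_check_type = "ELB"
}
```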

  3. Application Load Balancer (ALB)

A load balancer is like a traffic cop at a busy intersection. It watches incoming traffic and distributes it evenly across multiple EC2 instances.

This prevents any single server from getting overwhelmed and ensures your website stays responsive, even under load.

In our setup, the ALB forwards HTTP traffic to all healthy EC2 instances managed by the ASG.

  4. Target Group

The target group is a list of EC2 instances that the load balancer sends traffic to.

It also does health checks to make sure traffic only goes to servers that are up and running properly.

Our target group checks that NGINX responds on port 80 with a status code 200 (OK).
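A sketch of such a target group (resource and variable names are assumptions) shows where that port-80 / HTTP-200 check lives:

```hcl
# Illustrative target group; resource and variable names are assumptions.
resource "aws_lb_target_group" "web" {
  name     = "web-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = var.vpc_id

  health_check {
    path     = "/"
    protocol = "HTTP"
    matcher  = "200"  # only instances answering HTTP 200 receive traffic
  }
}
```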

  5. Security Group

A security group is like a virtual firewall. It controls what traffic is allowed to go in or out of your EC2 instance or load balancer.

We allow:

Port 80 → for web traffic

Port 22 → for SSH (optional, only if you plan to SSH in)
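Expressed as a Terraform sketch (names are assumptions; in a real deployment, restrict port 22 to your own IP rather than 0.0.0.0/0):

```hcl
# Illustrative security group for the two rules above; names are assumptions.
resource "aws_security_group" "web" {
  name   = "web-sg"
  vpc_id = var.vpc_id

  ingress {
    description = "HTTP from anywhere"
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    description = "SSH (optional; narrow this in production)"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"   # allow all outbound traffic
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```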

  6. User Data Script

This is a special startup script that runs when an EC2 instance first boots up. It’s how we automate installing NGINX without logging in manually.

This is how DevOps makes servers “just work” the moment they launch.

  7. Terraform Cloud (Remote State Management)

When you run terraform apply, Terraform keeps track of all your resources in a file called terraform.tfstate.

This file is like Terraform’s “memory”: it knows what you’ve already deployed and what needs to change.

By default, Terraform stores this file locally, but that’s risky.

In this project, we configure a Terraform Cloud workspace to handle this state file remotely and safely.

Benefits:

Automatically tracks changes across deployments

Supports versioning, so you can roll back if needed

Safe for teams and CI/CD automation

Think of it as “GitHub for your Terraform memory”: a safe place where Terraform remembers what it built.

Let’s get started.

Step 1. Set Up the Project in VS Code

After unzipping our project folder named "terraform_ec2_project_updated_clean", we open it in VS Code: File → Open Folder → select the unzipped project.

The main.tf file is already configured to handle all infrastructure. You do not need to manually edit this file unless you want to customize the NGINX message or autoscaling settings.

Instead, focus on updating the terraform.tfvars file with your AWS-specific values. This keeps your project clean, dynamic, and beginner-friendly.

For clarity, I have included a screenshot of our main.tf configuration file.

Quick Note: To ensure our custom NGINX welcome message appears, we use sleep 5 to delay the script just enough to let NGINX start and generate its default HTML directory.
Then we overwrite index.html with our custom message.

Step 2. Editing the terraform.tfvars file

We will now edit the file named "terraform.tfvars". I know a lot of my readers will be wondering why we have to edit this file: it is where you plug in the real values (like region, AMI ID, VPC ID) that your main.tf infrastructure code depends on.

When you're using Terraform, you want your .tf files (like main.tf) to be dynamic and reusable, not hardcoded. Instead of plugging all your values into main.tf, you define what those variables actually are in terraform.tfvars, keeping main.tf reusable in the future.

So, for the course of this project, below are the values we will set in our terraform.tfvars file:

aws_region = "us-east-1" # sticking with the default region

ami_id = "ami-037f5f4560579afb2" # Amazon Linux 2 AMI (64-bit)

instance_type = "t2.micro"

key_name = "techbro-keypair" # name we give our key pair in the Terraform config

public_key_path = "C:/Users/fred4/Downloads/techbro-keypair.pub" # path to your SSH public key, generated via PowerShell on your local host

vpc_id = "vpc-0f6db336a1288ab10" # created in and copied from my AWS console

subnet_ids = ["subnet-02bca1fb4e9566932", "subnet-066980db03f71145f"] # must already exist in your AWS console and belong to the VPC ID you are using

Please note: when working on Windows, if you copy a file path (like your .pub SSH key) from File Explorer, it usually looks like this: C:\Users\YourName\Downloads\your-key.pub. Terraform doesn’t like backslashes (\); they can break your config or cause odd errors. Convert them to forward slashes (/), like this: C:/Users/YourName/Downloads/your-key.pub, or escape the backslashes like this: C:\\Users\\YourName\\Downloads\\your-key.pub. Using forward slashes is easier and cleaner.
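If you prefer not to edit the path by hand, one quick way to do the conversion (shown here with standard Unix tools, e.g. in Git Bash on Windows) is piping the copied path through tr; the example path is a placeholder, substitute your own:

```shell
# Convert a copied Windows path to the forward-slash form Terraform expects.
printf '%s\n' 'C:\Users\YourName\Downloads\your-key.pub' | tr '\\' '/'
# prints: C:/Users/YourName/Downloads/your-key.pub
```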

The command to check for the latest Amazon Linux 2 AMI available in region us-east-1 is:

aws ec2 describe-images \
  --owners amazon \
  --region us-east-1 \
  --filters "Name=name,Values=amzn2-ami-hvm-*-x86_64-gp2" \
  --query 'Images[*].[ImageId,CreationDate]' \
  --output text | sort -k2 -r | head -n 1

We don’t need to touch variables.tf or outputs.tf.

variables.tf already declares all expected inputs.

outputs.tf is set up to display useful info after deployment.

All our real edits happen in terraform.tfvars, keeping things clean and beginner-friendly.
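For context, an output in outputs.tf is just a small block like the sketch below (the actual names in this project's outputs.tf may differ; it assumes an ALB resource named aws_lb.web):

```hcl
# Illustrative output; assumes an ALB resource named aws_lb.web.
output "alb_dns_name" {
  description = "Public DNS name of the Application Load Balancer"
  value       = aws_lb.web.dns_name
}
```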

I will also include screenshots of our variables.tf and outputs.tf files to give a visual of what values are sitting in there.

For variables.tf we have this screenshot:

Then for outputs.tf we have this:

Additional Note: Why We Use the Load Balancer DNS to Access Our App.

In this project, we're deploying multiple EC2 instances using an Auto Scaling Group, not just a single server. That means there's no fixed IP address or single instance to connect to.

Instead, we use an Application Load Balancer (ALB) to:

  1. Automatically distribute traffic across all healthy EC2 instances

  2. Provide a single, stable public endpoint (DNS) for our application

  3. Enable seamless scaling (up/down) without changing URLs

The load balancer keeps your infrastructure resilient. If one instance fails, the ALB will route traffic to the others automatically, keeping your app alive without downtime.

Step 3: Initialize the Terraform Project

In VS Code, click on the ... at the top left and then click Terminal to open the terminal. Run our first command, terraform init. This command downloads the required Terraform provider (AWS in this case), prepares the working directory, and validates your Terraform configuration syntax.

What Just Happened? Running terraform init does a one-time setup that prepares your project to use the required providers (like AWS).

It also creates a .terraform.lock.hcl file which we can see in the screenshot above, this ensures consistent versions are used every time you run your project across your machine or teammates'.

The .terraform.lock.hcl file should be committed to GitHub.

Do not delete or edit it manually, Terraform handles it for you.

You'll only ever re-run terraform init if:

You add a new provider or

You change backend configuration (e.g., switching to Terraform Cloud) Which we will do later as this project progresses.

Step 4: Terraform Plan

What is the purpose of running the command terraform plan?
It validates your .tf files plus terraform.tfvars, connects to AWS to preview the infrastructure changes, and tells us what resources it will create and what it will destroy (if anything).

In short, this command previews what Terraform will create in your AWS account.

It’s the safest way to verify everything is correctly configured before making changes

Always run this before terraform apply; if there is any error in the Terraform configuration, plan will catch it first.

Below is a screenshot of a successful terraform plan run.

Step 5: Deploy Infrastructure with terraform apply

What this step does: creates all the resources shown in the plan, including VPC-related attachments, security groups, the launch template, the Auto Scaling Group, and the Application Load Balancer (ALB).

Sets everything up so you can visit your NGINX site in the browser

So we will run the command terraform apply. Terraform will re-run the plan and then ask: "Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve." Type: yes (Terraform only accepts the exact lowercase word yes).

What to expect after success: the terraform apply output message, and the ALB DNS name.

You can copy that DNS name and paste it in your browser to test the deployment. If everything is working, you'll see the NGINX welcome page served from your EC2 instances behind the ALB.

Running terraform apply is the moment where infrastructure goes live. Terraform talks to AWS and begins creating all the resources — exactly as planned. You only approve it if the plan looks right.

To verify everything was successfully created, head to the AWS console and check manually: confirm the 2 instances are running, locate the Target Groups page to check the health status of the EC2 instances, open the Load Balancers page to verify your load balancer, and check the Auto Scaling Groups page to confirm everything is in good shape.

Step 6. Push Your Project to GitHub (Version Control & Collaboration Ready)

This step helps track changes to your Terraform code, showcase our infrastructure work publicly on GitHub, and enable future CI/CD and automation.

Next, head to the GitHub website, log in to your GitHub account, and create a new repository, which I will name "terraform-ec2-asg-alb-nginx". There's no need to initialize it with a README (you already have one locally).

Once created, you will have something like the screenshot below.

Next, we will initialize Git in our project folder (git init), then create a .gitignore file in the root folder in VS Code to avoid pushing sensitive or unnecessary files.

Paste in this content.

.terraform/

*.tfstate

*.tfstate.backup

*.pem

*.pub

terraform.tfvars

This protects:

Your Terraform state files (which contain sensitive info)

Your .pem private keys

The .terraform directory

Next, run the commands to push our code to GitHub.

The first command to run is git init. This turns an ordinary folder into a Git repository by creating a hidden .git/ directory inside your current folder.

It’s the very first step when you want Git to start keeping track of your files. Without it, Git has no context for your project.

Then we run the next command: git add . This stages, or prepares, all current changes in your folder for the next commit.

What happens:

The (.) means “all files and sub-directories in the current directory.”

Any new files, modifications, or deletions you’ve made get marked as “ready to be committed.”

Git moves those changes into an intermediate area called the “staging area” (or “index”).

Why it matters:

Git doesn’t automatically commit everything in your folder. Staging lets you choose exactly what goes into your next snapshot. Using git add . is a quick way to say, “I want every tracked and untracked file right now to be included.”

Please note: because we listed some specific files in our .gitignore, those do not get committed or pushed; only our README.md, main.tf, variables.tf, and outputs.tf will be pushed to our repo.

Then we also run this command git commit -m "Initial commit: Terraform EC2 + ASG + ALB + NGINX project"

What does this mean?

This command creates a new commit (a saved snapshot) containing everything you’ve staged, and attaches a descriptive message.

Git looks at the staging area, bundles those changes into a new commit object, and links it into the commit history.

The -m flag lets you write the commit message inline—in this case, "Initial commit: Terraform EC2 + ASG + ALB + NGINX project"

That message is how you (or anyone else looking at the history) know what changed or why.

Why this Command matters is that Commits are like checkpoints.

Each one has a unique ID and a timestamp. The message should ideally summarize the intent, e.g. "Initial commit: Terraform EC2 + ASG + ALB + NGINX project".

Step 7. Connect Your Local Project to GitHub

In this step we will run the command: git remote add origin https://github.com/stillfreddie/terraform-ec2-asg-alb-nginx.git

What it does:

This command tells Git that you want to connect your local project to a remote repository hosted on GitHub. i.e the repository we created earlier on GitHub.

origin is just a nickname for the URL of your GitHub repo.

This means every time you use git push origin, Git knows you're talking to GitHub.

Then we run the next command: git branch -M main

What it does:

This command renames your current branch (usually master by default) to main, which is now the standard naming convention for GitHub's default branches.

Why this matters:

GitHub uses main as the default branch for new repositories.

Using -M forces the rename, even if a main branch already exists.

Note: after running this command in your terminal you do not get any output; no output means the command ran successfully.

Finally, to push the code to the GitHub repository we created, run: git push -u origin main

What it does: This command uploads (pushes) your local project code to the remote repository (GitHub).

Breakdown:

push: Send your committed code to GitHub

-u: Set upstream tracking, so future pushes can just use git push

origin: Push to the remote named origin (the GitHub URL)

main: Push the main branch

Finally, head back to your GitHub account and verify that your code was pushed to the repository successfully; you will have something like the screenshot below.

Step 8: Migrating Terraform State to Terraform Cloud

Why do we need to Move to Terraform Cloud?

When you run terraform apply locally, the state (.tfstate) file is saved on your machine. That’s risky because:

It can be lost or overwritten

It can’t be safely shared across team members

It contains sensitive info (like instance IPs, keys, etc.)

Terraform Cloud stores your state securely and centrally, with:

i. Locking to avoid conflicts

ii. Automatic backups

iii. Team collaboration support

Step-by-Step: Connect to Terraform Cloud (Manual Workflow)

Log in to Terraform Cloud from the terminal (for this, please refer to my previous project on how to move tfstate to Terraform Cloud).

Run the command terraform login in the terminal.

This will:

Open a browser window

Ask you to create a Terraform Cloud token; once created, copy the token and save it

Authenticate your CLI with Terraform Cloud

Please note: when authenticating (that is, pasting your token into the terminal), the token's value will not be visible, for security reasons. After you right-click to paste, just press Enter; if the token is valid it will authenticate your account.

Step 9: Create a Workspace in Terraform Cloud

Go to https://app.terraform.io

Under your organization Terraform_CloudORG, click “New Workspace”

Choose CLI-Driven Workflow

Name the workspace: terraform-ec2-asg-alb-nginx

Step 10: Update Your main.tf with Backend Configuration

At the top of your main.tf, paste this block:

terraform {
  backend "remote" {
    organization = "Terraform_CloudORG"

    workspaces {
      name = "terraform-ec2-asg-alb-nginx"
    }
  }
}

This tells Terraform to:

Use Terraform Cloud as your backend

Store state under the workspace you created

It should look like the screenshot below in your main.tf file.

Step 11. Terraform Cloud Integration & Script Changes

To make our Terraform configuration work with Terraform Cloud, we needed to update a few parts of the main.tf file and remove any references to local-only resources. Here's what we changed:

  1. Removed Local File Reference for SSH Key: originally, the aws_key_pair block looked like this: public_key = file(var.public_key_path)

This caused an error in Terraform Cloud because the remote runner cannot access files on your local machine.

Solution: We replaced that line with the actual public key content directly, like this:

public_key = "MAIN CONTENT INSIDE THE techbro-keypair.pub"

Now, the key is embedded directly in the Terraform script, making it cloud-compatible.
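The resulting block looks something like this sketch (the key material shown is a placeholder, not a real key, and the resource name is an assumption):

```hcl
# Illustrative cloud-compatible key pair; the public_key value is a placeholder.
resource "aws_key_pair" "techbro" {
  key_name   = var.key_name
  public_key = "<paste the full contents of techbro-keypair.pub here>"
}
```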

  2. Set Up AWS Credentials in Terraform Cloud

Since Terraform Cloud doesn’t have access to your local AWS CLI config, we added our AWS credentials directly inside the Terraform Cloud workspace:

Environment Variable 1: Key = AWS_ACCESS_KEY_ID

Value = your actual AWS Access Key ID

Environment Variable 2: Key = AWS_SECRET_ACCESS_KEY

Value = your actual AWS Secret Key

Then we marked both values as Sensitive.

This allows Terraform Cloud to authenticate and deploy resources to AWS on your behalf.

  3. Ran Terraform Remotely

After these updates, we initialized and ran the plan/apply steps in Terraform Cloud:

terraform init

terraform plan

terraform apply

Terraform Cloud handled all provisioning, and our ALB + EC2 autoscaling infrastructure was deployed successfully.

Note: by embedding your SSH public key directly and configuring your AWS credentials in Terraform Cloud, you make your infrastructure code portable, automated, and cloud-native: perfect for team environments or remote DevOps workflows.

Step 12. How to Verify Terraform State Was Successfully Moved to Terraform Cloud

To confirm that our Terraform state file has been successfully moved to Terraform Cloud, follow these steps:

Head over to your Terraform Cloud dashboard

Click on your workspace: terraform-ec2-asg-alb-nginx

On the left-hand side of the workspace page, click on “States”

This section will show you the current and historical state versions being managed by Terraform Cloud. If you see your infrastructure state listed there, congrats: your state is no longer stored locally and has been moved to the cloud successfully!

Closing Note

In this project, we built a fully scalable and production-ready web infrastructure using Terraform. We leveraged key AWS services like EC2 Autoscaling, Application Load Balancer (ALB), and NGINX, all orchestrated through Infrastructure as Code (IaC).

What makes this build even more powerful is our integration with Terraform Cloud, allowing us to manage our infrastructure securely and collaboratively — with full remote state management, centralized runs, and cloud-based automation.

From writing the first line of code to seeing our web app live in a browser, every piece of infrastructure was versioned, automated, and deployed with precision, with the help of my dev partner ChatGPT.

Whether you’re a beginner or aiming for advanced DevOps workflows, this project gives you a solid foundation to scale from.

Thank you and see you on the next one.

Written by Stillfreddie Techman