Deploying a Scalable and Highly Available Web Application on AWS with Terraform

Vipin Yadav

Introduction

Building a scalable and highly available web application on AWS requires a well-architected infrastructure that can handle traffic spikes, ensure uptime, and optimize costs. This guide walks you through implementing such an architecture in Terraform, using a modular approach that also provisions the necessary networking components.

Key AWS Services

  • VPC – Virtual Private Cloud for network infrastructure.

  • EC2 – Virtual servers for hosting the application.

  • Application Load Balancer (ALB) – Distributes traffic across instances.

  • Auto Scaling Group (ASG) – Ensures high availability and automatic scaling.

  • IAM – Manages access and security.

The code used in this blog can be found at https://github.com/vipin0/terraform-code/tree/main/terraform-webapp


Step 1: Set Up the Network Infrastructure

The first step is to set up the required networking infrastructure. The Terraform code for this step creates a VPC, public and private subnets, route tables, an Internet Gateway, a NAT Gateway, and the required routes in the route tables.
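The full network module is available in the repository linked above. The following is a condensed sketch of what it provisions; the CIDR ranges and the two-AZ layout are illustrative, so adjust them to your needs.

data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16" # illustrative range
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = { "Name" : "${var.resource_prefix}-vpc" }
}

# Two public and two private subnets spread across two AZs
resource "aws_subnet" "public" {
  count                   = 2
  vpc_id                  = aws_vpc.main.id
  cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index)
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  count             = 2
  vpc_id            = aws_vpc.main.id
  cidr_block        = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index + 2)
  availability_zone = data.aws_availability_zones.available.names[count.index]
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

# The NAT gateway needs an Elastic IP and must sit in a public subnet
resource "aws_eip" "nat" {
  domain = "vpc"
}

resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public[0].id
}

# Public subnets route out through the internet gateway
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

# Private subnets route out through the NAT gateway
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }
}

resource "aws_route_table_association" "public" {
  count          = 2
  subnet_id      = aws_subnet.public[count.index].id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "private" {
  count          = 2
  subnet_id      = aws_subnet.private[count.index].id
  route_table_id = aws_route_table.private.id
}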

resource "aws_lb" "alb" {
  name               = "${var.resource_prefix}-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb_sg.id]
  subnets            = var.public_subnets
  enable_deletion_protection = var.enable_deletion_protection
  enable_http2             = var.enable_http2
  drop_invalid_header_fields = var.drop_invalid_header_fields
  tags = merge(
    {
      "Name" : "${var.resource_prefix}-ALB"
    },
    var.tags
  )
}
resource "aws_security_group" "alb_sg" {
  name        = "${var.resource_prefix}-alb-sg"
  description = "Allow HTTP/HTTPS Connection to loadbalancer."
  vpc_id      = var.vpc_id

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
  ingress {
    cidr_blocks      = var.allowed_http_cidr_blocks
    description      = "Allow HTTP traffic"
    from_port        = 80
    ipv6_cidr_blocks = var.allowed_http_ipv6_cidr_blocks
    protocol         = "TCP"
    to_port          = 80
  }
  ingress {
    cidr_blocks      = var.allowed_https_cidr_blocks
    description      = "Allow HTTPS traffic"
    from_port        = 443
    ipv6_cidr_blocks = var.allowed_https_ipv6_cidr_blocks
    protocol         = "TCP"
    to_port          = 443
  }

  tags = merge(
    {
      "Name" : "${var.resource_prefix}-LB-sg"
    },
    var.tags
  )
}
resource "aws_lb_target_group" "tg" {
  name     = "${var.resource_prefix}-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = var.vpc_id
  health_check {
    path = "/"
  }
  tags = merge(
    {
      "Name" : "${var.resource_prefix}-tg"
    },
    var.tags
  )
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.alb.arn
  port              = 80
  protocol          = "HTTP"
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.tg.arn
  }
  tags = merge(
    {
      "Name" : "${var.resource_prefix}-LB-Listener"
    },
    var.tags
  )
}

Step 2: Create an IAM Instance Role for EC2

For our EC2 instances to be managed via AWS Systems Manager (SSM) without requiring SSH access, we’ll create an IAM role with the AmazonSSMManagedInstanceCore policy.

resource "aws_iam_role" "ec2_ssm_role" {
  name = "EC2SSMInstanceRole"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect = "Allow"
      Principal = {
        Service = "ec2.amazonaws.com"
      }
      Action = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "ssm_attachment" {
  for_each = {
    "AmazonSSMManagedInstanceCore" = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
  }

  role       = aws_iam_role.ec2_ssm_role.name
  policy_arn = each.value
}

resource "aws_iam_instance_profile" "ec2_ssm_profile" {
  name = "EC2SSMInstanceProfile"
  role = aws_iam_role.ec2_ssm_role.name
}

🚀 Why this role? It allows us to connect to EC2 instances using AWS Systems Manager (SSM) Session Manager, eliminating the need for SSH access.
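Once an instance is running, you can open a shell straight from the AWS CLI (the instance ID below is a placeholder):

aws ssm start-session --target i-0123456789abcdef0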


Step 3: Configure the EC2 Launch Template

Next, we create a Launch Template to ensure that all instances in the Auto Scaling Group use the same configuration.

Finding the AMI Id using terraform:

This Terraform code searches for the latest Ubuntu 24.04 server AMI with the ARM (arm64) architecture.

data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"]  # Canonical's official AWS account ID for Ubuntu

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd-gp3/ubuntu-*-24.04-*"]  # Adjusted for Ubuntu 24.04 ARM
  }

  filter {
    name   = "architecture"
    values = ["arm64"]
  }

  filter {
    name   = "root-device-type"
    values = ["ebs"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

Creating Launch Template using terraform:

The next step is to create the launch template, using the instance profile and AMI ID created above, along with a dedicated security group for the web instances.

resource "aws_security_group" "web_sg" {
  name        = "${var.resource_prefix}-web-sg"
  description = "Allow HTTP/HTTPS Connection to webserver."
  vpc_id      = var.vpc_id

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  ingress {
    from_port       = 80
    to_port         = 80
    protocol        = "tcp"
    description     = "Allow HTTP traffic from the load balancer"
    security_groups = [var.alb_security_group_id]
  }

  tags = merge(
    {
      "Name" : "${var.resource_prefix}-web-sg"
    },
    var.tags
  )
}

resource "aws_launch_template" "web" {
  name_prefix            = "${var.resource_prefix}-web-server"
  image_id               = data.aws_ami.ubuntu.image_id
  instance_type          = var.instance_type
  user_data              = base64encode(var.user_data) # launch templates expect base64; assumes var.user_data is plain text
  vpc_security_group_ids = [aws_security_group.web_sg.id]

  iam_instance_profile {
    arn = aws_iam_instance_profile.ec2_ssm_profile.arn
  }

  tags = merge(
    {
      "Name" : "${var.resource_prefix}-lt"
    },
    var.tags
  )
}
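The template reads user_data from a variable that is not shown in this post. As an illustration, a root-module value like the one below would install nginx so the ALB health check on / passes; the commands assume the Ubuntu AMI selected above, and the launch template base64-encodes the value.

user_data = <<-EOT
  #!/bin/bash
  apt-get update -y
  apt-get install -y nginx
  systemctl enable --now nginx
EOT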

Step 4: Create an Auto Scaling Group

An Auto Scaling Group (ASG) ensures high availability by launching instances across multiple Availability Zones (AZs).

locals {
  asg_tags = merge(
    {
      "Name" : "${var.resource_prefix}-web-server"
    },
    var.tags
  )
}

resource "aws_autoscaling_group" "asg" {
  name                = "${var.resource_prefix}-asg"
  vpc_zone_identifier = var.private_subnets
  desired_capacity    = var.asg_desired_capacity
  max_size            = var.asg_max_size
  min_size            = var.asg_min_size
  launch_template {
    id      = aws_launch_template.web.id
    version = "$Latest"
  }
  target_group_arns = [var.target_group_arn]

    dynamic "tag" {
    for_each = local.asg_tags
    content {
      key                 = tag.key
      value               = tag.value
      propagate_at_launch = true
    }
  }
}
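The ASG above keeps capacity between the configured minimum and maximum, but it does not react to load on its own. One way to add that, sketched below rather than taken from the repository module, is a target-tracking policy that holds average CPU around a target value:

resource "aws_autoscaling_policy" "cpu_target" {
  name                   = "${var.resource_prefix}-cpu-target"
  autoscaling_group_name = aws_autoscaling_group.asg.name
  policy_type            = "TargetTrackingScaling"

  # Add or remove instances to keep average CPU near 50%
  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 50.0
  }
}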

Step 5: Configure an Application Load Balancer

To distribute incoming traffic, we set up an Application Load Balancer (ALB) and configure a listener that forwards incoming requests to the instances in the target group.

resource "aws_lb" "alb" {
  name               = "${var.resource_prefix}-alb"
  internal           = false
  load_balancer_type = "application"
  security_groups    = [aws_security_group.alb_sg.id]
  subnets            = var.public_subnets
  enable_deletion_protection = var.enable_deletion_protection
  enable_http2               = var.enable_http2
  drop_invalid_header_fields = var.drop_invalid_header_fields
  tags = merge(
    {
      "Name" : "${var.resource_prefix}-ALB"
    },
    var.tags
  )
}
resource "aws_security_group" "alb_sg" {
  name        = "${var.resource_prefix}-alb-sg"
  description = "Allow HTTP/HTTPS Connection to loadbalancer."
  vpc_id      = var.vpc_id

  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }
  ingress {
    cidr_blocks      = var.allowed_http_cidr_blocks
    description      = "Allow HTTP traffic"
    from_port        = 80
    ipv6_cidr_blocks = var.allowed_http_ipv6_cidr_blocks
    protocol         = "tcp"
    to_port          = 80
  }
  ingress {
    cidr_blocks      = var.allowed_https_cidr_blocks
    description      = "Allow HTTPS traffic"
    from_port        = 443
    ipv6_cidr_blocks = var.allowed_https_ipv6_cidr_blocks
    protocol         = "tcp"
    to_port          = 443
  }

  tags = merge(
    {
      "Name" : "${var.resource_prefix}-LB-sg"
    },
    var.tags
  )
}
resource "aws_lb_target_group" "tg" {
  name     = "${var.resource_prefix}-tg"
  port     = 80
  protocol = "HTTP"
  vpc_id   = var.vpc_id
  health_check {
    path = "/"
  }
  tags = merge(
    {
      "Name" : "${var.resource_prefix}-tg"
    },
    var.tags
  )
}

resource "aws_lb_listener" "http" {
  load_balancer_arn = aws_lb.alb.arn
  port              = 80
  protocol          = "HTTP"
  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.tg.arn
  }
  tags = merge(
    {
      "Name" : "${var.resource_prefix}-LB-Listener"
    },
    var.tags
  )
}
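To know where to point a browser after terraform apply, it helps to expose the load balancer's DNS name as an output. Something like the following works, though the repository may structure its outputs differently:

output "alb_dns_name" {
  description = "Public DNS name of the Application Load Balancer"
  value       = aws_lb.alb.dns_name
}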

Conclusion

With this Terraform setup, we have built a scalable, highly available web application following AWS best practices. The infrastructure includes:

  • Auto Scaling Group – launches instances across multiple AZs and scales them with demand.

  • Application Load Balancer – distributes incoming requests across healthy instances.

  • IAM role with SSM – provides secure instance access without SSH.
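To bring the stack up, run the standard Terraform workflow from the repository root:

terraform init
terraform plan
terraform apply

Once the apply completes, the application should be reachable at the ALB DNS name exposed in the output above.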
