Terraform Your Way to High Availability: Deploying a Full Stack AWS Architecture
Table of contents
- Introduction to Terraform and High Availability
- Setting Up Your Terraform Environment
- Defining the VPC
- Setting Up an Internet Gateway and NAT Gateway
- Creating Security Groups
- Deploying EC2 Instances
- Implementing an Elastic Load Balancer (ELB)
- Configuring Auto Scaling Groups
- Setting Up a Highly Available Database with Amazon RDS
- Deploying CloudWatch for Monitoring and Alerts
- Conclusion
For companies that need to offer dependable services, a robust, highly available architecture on AWS is essential. In this extensive guide, we will build one using Terraform, the open-source infrastructure-as-code tool: from setting up a Virtual Private Cloud (VPC) to deploying an Elastic Load Balancer (ELB), Auto Scaling Groups (ASGs), and a relational database, with every component designed for high availability.
Introduction to Terraform and High Availability
Terraform is a powerful tool developed by HashiCorp that allows you to define, preview, and deploy cloud infrastructure using a high-level configuration language. It supports multiple cloud providers, including AWS, Azure, Google Cloud, and many more, making it a versatile choice for infrastructure management.
High availability (HA) refers to systems that remain dependable and continuously operational over long periods, with minimal downtime. In the context of AWS, this means designing architectures that can withstand failures by distributing workloads across multiple Availability Zones (AZs) and building in redundancy at every layer.
Setting Up Your Terraform Environment
Before we dive into creating AWS resources with Terraform, let's set up our environment:
- Install Terraform: Start by installing Terraform on your local machine. You can download the appropriate version for your operating system from the Terraform download page.
# For macOS
brew install terraform
# For Windows
choco install terraform
- Configure AWS CLI: Ensure you have the AWS CLI installed and configured with your AWS credentials. This will allow Terraform to interact with your AWS account.
# Install AWS CLI (installs v1; AWS also provides a standalone installer for v2)
pip install awscli
# Configure AWS CLI
aws configure
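With Terraform and the AWS CLI in place, the typical workflow for every configuration in this guide looks like this (run from the directory containing your `.tf` files):

```shell
# Initialize the working directory and download the AWS provider
terraform init

# Check the configuration for syntax and internal consistency
terraform validate

# Preview the changes Terraform would make
terraform plan

# Apply the changes (prompts for confirmation)
terraform apply
```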
Defining the VPC
A Virtual Private Cloud (VPC) is the foundational component of your AWS infrastructure. It provides a logically isolated network that you can launch your resources into. Below is the Terraform configuration for creating a VPC with multiple subnets for high availability.
provider "aws" {
region = "us-east-1"
}
resource "aws_vpc" "main" {
cidr_block = "10.0.0.0/16"
enable_dns_support = true
enable_dns_hostnames = true
tags = {
Name = "main-vpc"
}
}
resource "aws_subnet" "public_subnet_1" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.1.0/24"
availability_zone = "us-east-1a"
map_public_ip_on_launch = true
tags = {
Name = "public-subnet-1"
}
}
resource "aws_subnet" "public_subnet_2" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.2.0/24"
availability_zone = "us-east-1b"
map_public_ip_on_launch = true
tags = {
Name = "public-subnet-2"
}
}
resource "aws_subnet" "private_subnet_1" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.3.0/24"
availability_zone = "us-east-1a"
tags = {
Name = "private-subnet-1"
}
}
resource "aws_subnet" "private_subnet_2" {
vpc_id = aws_vpc.main.id
cidr_block = "10.0.4.0/24"
availability_zone = "us-east-1b"
tags = {
Name = "private-subnet-2"
}
}
This configuration defines a VPC with a /16 CIDR block, two public subnets in different Availability Zones for high availability, and two private subnets. This setup allows us to distribute our resources across multiple AZs to ensure redundancy.
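As a variation, you can avoid hard-coding AZ names so the same configuration works in any region. The sketch below is an alternative to the four explicit subnet resources above (not a drop-in addition); the resource names are illustrative:

```hcl
# Look up the Availability Zones available in the configured region at plan time.
data "aws_availability_zones" "available" {
  state = "available"
}

resource "aws_subnet" "public" {
  count  = 2
  vpc_id = aws_vpc.main.id

  # cidrsubnet carves /24 blocks out of the VPC's /16 (10.0.1.0/24, 10.0.2.0/24)
  cidr_block              = cidrsubnet(aws_vpc.main.cidr_block, 8, count.index + 1)
  availability_zone       = data.aws_availability_zones.available.names[count.index]
  map_public_ip_on_launch = true

  tags = {
    Name = "public-subnet-${count.index + 1}"
  }
}
```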
Setting Up an Internet Gateway and NAT Gateway
To give instances in the public subnets direct internet access, and instances in the private subnets outbound-only access, we need to set up an Internet Gateway (IGW) and a NAT Gateway.
resource "aws_internet_gateway" "igw" {
vpc_id = aws_vpc.main.id
tags = {
Name = "main-igw"
}
}
resource "aws_route_table" "public_rt" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
gateway_id = aws_internet_gateway.igw.id
}
tags = {
Name = "public-route-table"
}
}
resource "aws_route_table_association" "public_subnet_1_assoc" {
subnet_id = aws_subnet.public_subnet_1.id
route_table_id = aws_route_table.public_rt.id
}
resource "aws_route_table_association" "public_subnet_2_assoc" {
subnet_id = aws_subnet.public_subnet_2.id
route_table_id = aws_route_table.public_rt.id
}
resource "aws_eip" "nat_eip" {
domain = "vpc" # `vpc = true` on AWS provider versions before v5
}
resource "aws_nat_gateway" "nat_gw" {
allocation_id = aws_eip.nat_eip.id
subnet_id = aws_subnet.public_subnet_1.id
tags = {
Name = "nat-gateway"
}
}
resource "aws_route_table" "private_rt" {
vpc_id = aws_vpc.main.id
route {
cidr_block = "0.0.0.0/0"
nat_gateway_id = aws_nat_gateway.nat_gw.id
}
tags = {
Name = "private-route-table"
}
}
resource "aws_route_table_association" "private_subnet_1_assoc" {
subnet_id = aws_subnet.private_subnet_1.id
route_table_id = aws_route_table.private_rt.id
}
resource "aws_route_table_association" "private_subnet_2_assoc" {
subnet_id = aws_subnet.private_subnet_2.id
route_table_id = aws_route_table.private_rt.id
}
Creating Security Groups
Security groups act as virtual firewalls for your instances, controlling inbound and outbound traffic. Below is a sample Terraform configuration for creating security groups for public and private instances:
resource "aws_security_group" "public_sg" {
vpc_id = aws_vpc.main.id
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
ingress {
# SSH for the bastion host; restrict the CIDR to your own IP range in practice
from_port = 22
to_port = 22
protocol = "tcp"
cidr_blocks = ["0.0.0.0/0"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "public-sg"
}
}
resource "aws_security_group" "private_sg" {
vpc_id = aws_vpc.main.id
ingress {
from_port = 80
to_port = 80
protocol = "tcp"
cidr_blocks = ["10.0.0.0/16"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
tags = {
Name = "private-sg"
}
}
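Opening the private security group to the entire VPC CIDR works, but a tighter pattern is to allow traffic only from members of the public security group itself. A sketch that would replace the CIDR-based ingress rule above:

```hcl
# Allow HTTP into private instances only from instances attached to
# public_sg, rather than from any address in the VPC CIDR.
resource "aws_security_group_rule" "app_from_public" {
  type                     = "ingress"
  from_port                = 80
  to_port                  = 80
  protocol                 = "tcp"
  security_group_id        = aws_security_group.private_sg.id
  source_security_group_id = aws_security_group.public_sg.id
}
```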
Deploying EC2 Instances
Next, we will deploy EC2 instances in our public and private subnets. The public EC2 instance could serve as a bastion host for SSH access, while the private EC2 instances could run application servers.
resource "aws_instance" "bastion" {
ami = "ami-0c55b159cbfafe1f0" # Amazon Linux 2 AMI (HVM)
instance_type = "t2.micro"
subnet_id = aws_subnet.public_subnet_1.id
vpc_security_group_ids = [aws_security_group.public_sg.id]
tags = {
Name = "bastion-host"
}
}
resource "aws_instance" "app_server_1" {
ami = "ami-0c55b159cbfafe1f0" # Amazon Linux 2 AMI (HVM)
instance_type = "t2.micro"
subnet_id = aws_subnet.private_subnet_1.id
vpc_security_group_ids = [aws_security_group.private_sg.id]
tags = {
Name = "app-server-1"
}
}
resource "aws_instance" "app_server_2" {
ami = "ami-0c55b159cbfafe1f0" # Amazon Linux 2 AMI (HVM)
instance_type = "t2.micro"
subnet_id = aws_subnet.private_subnet_2.id
vpc_security_group_ids = [aws_security_group.private_sg.id]
tags = {
Name = "app-server-2"
}
}
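Hard-coded AMI IDs go stale and are region-specific, so the literal ID above may not resolve in your account. A common alternative is to look up the latest Amazon Linux 2 AMI at plan time:

```hcl
# Resolve the most recent Amazon Linux 2 AMI owned by Amazon.
data "aws_ami" "amazon_linux_2" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}

# Then reference it in each instance instead of a literal ID:
#   ami = data.aws_ami.amazon_linux_2.id
```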
Implementing an Elastic Load Balancer (ELB)
To distribute traffic across multiple EC2 instances, we will set up a load balancer. The aws_lb resource below creates an Application Load Balancer (ALB), the current-generation Elastic Load Balancing option for HTTP traffic.
resource "aws_lb" "app_lb" {
name = "app-lb"
internal = false
load_balancer_type = "application"
security_groups = [aws_security_group.public_sg.id]
subnets = [
aws_subnet.public_subnet_1.id,
aws_subnet.public_subnet_2.id
]
tags = {
Name = "app-lb"
}
}
resource "aws_lb_target_group" "app_tg" {
name = "app-tg"
port = 80
protocol = "HTTP"
vpc_id = aws_vpc.main.id
health_check {
interval = 30
path = "/"
protocol = "HTTP"
timeout = 3
healthy_threshold = 3
unhealthy_threshold = 3
}
tags = {
Name = "app-tg"
}
}
resource "aws_lb_listener" "app_lb_listener" {
load_balancer_arn = aws_lb.app_lb.arn
port = "80"
protocol = "HTTP"
default_action {
type = "forward"
target_group_arn = aws_lb_target_group.app_tg.arn
}
}
resource "aws_lb_target_group_attachment" "app_server_1" {
target_group_arn = aws_lb_target_group.app_tg.arn
target_id = aws_instance.app_server_1.id
port = 80
}
resource "aws_lb_target_group_attachment" "app_server_2" {
target_group_arn = aws_lb_target_group.app_tg.arn
target_id = aws_instance.app_server_2.id
port = 80
}
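After applying, you will need the load balancer's DNS name to reach the application. An output makes Terraform print it at the end of every apply:

```hcl
# Expose the ALB's public DNS name after `terraform apply`.
output "alb_dns_name" {
  value       = aws_lb.app_lb.dns_name
  description = "Public DNS name of the application load balancer"
}
```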
Configuring Auto Scaling Groups
Auto Scaling Groups (ASGs) automatically adjust the number of EC2 instances in response to changes in demand, which is essential for maintaining high availability and optimizing costs. The example below uses a launch configuration for brevity; note that AWS has deprecated launch configurations in favor of launch templates for new workloads.
resource "aws_launch_configuration" "app_lc" {
image_id = "ami-0c55b159cbfafe1f0" # Amazon Linux 2 AMI (HVM)
instance_type = "t2.micro"
security_groups = [aws_security_group.private_sg.id]
associate_public_ip_address = false
lifecycle {
create_before_destroy = true
}
}
resource "aws_autoscaling_group" "app_asg" {
desired_capacity = 2
max_size = 3
min_size = 1
vpc_zone_identifier = [
aws_subnet.private_subnet_1.id,
aws_subnet.private_subnet_2.id
]
launch_configuration = aws_launch_configuration.app_lc.name
target_group_arns = [aws_lb_target_group.app_tg.arn]
health_check_type = "ELB"
health_check_grace_period = 300
lifecycle {
create_before_destroy = true
}
tag {
key = "Name"
value = "app-asg"
propagate_at_launch = true
}
}
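The ASG above only sets capacity bounds; it needs a scaling policy to actually react to load. A minimal target-tracking policy on average CPU (the 50% target is an illustrative value):

```hcl
# Scale the ASG up or down to keep average CPU utilization near 50%.
resource "aws_autoscaling_policy" "cpu_target" {
  name                   = "cpu-target-tracking"
  autoscaling_group_name = aws_autoscaling_group.app_asg.name
  policy_type            = "TargetTrackingScaling"

  target_tracking_configuration {
    predefined_metric_specification {
      predefined_metric_type = "ASGAverageCPUUtilization"
    }
    target_value = 50.0
  }
}
```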
Setting Up a Highly Available Database with Amazon RDS
Amazon Relational Database Service (RDS) provides a managed relational database that is easy to set up, operate, and scale. For high availability, we will deploy a multi-AZ RDS instance.
resource "aws_db_instance" "app_db" {
allocated_storage = 20
storage_type = "gp2"
engine = "mysql"
engine_version = "8.0.28"
instance_class = "db.t2.micro"
db_name = "appdb" # `name` on AWS provider versions before v5
username = "admin"
password = "YourStrongPassword" # demo only; inject via a variable or secrets manager in practice
parameter_group_name = "default.mysql8.0"
publicly_accessible = false
multi_az = true
skip_final_snapshot = true
vpc_security_group_ids = [aws_security_group.private_sg.id] # in practice, use a dedicated SG that allows port 3306 from the app tier
db_subnet_group_name = aws_db_subnet_group.main.name
tags = {
Name = "app-db"
}
}
resource "aws_db_subnet_group" "main" {
name = "main-subnet-group"
subnet_ids = [
aws_subnet.private_subnet_1.id,
aws_subnet.private_subnet_2.id
]
tags = {
Name = "main-subnet-group"
}
}
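Two hardening notes on the RDS example, sketched below: private_sg only opens port 80, so MySQL traffic on port 3306 between the app servers and the database is actually blocked; a dedicated database security group fixes that. And the hard-coded password should come from a sensitive variable rather than live in the configuration. Resource and variable names here are illustrative:

```hcl
# Security group that admits MySQL traffic only from the app tier.
resource "aws_security_group" "db_sg" {
  vpc_id = aws_vpc.main.id

  ingress {
    from_port       = 3306
    to_port         = 3306
    protocol        = "tcp"
    security_groups = [aws_security_group.private_sg.id]
  }
}

# Supply the password at apply time; Terraform redacts sensitive values in output.
variable "db_password" {
  type      = string
  sensitive = true
}

# In aws_db_instance.app_db, reference these instead:
#   vpc_security_group_ids = [aws_security_group.db_sg.id]
#   password               = var.db_password
```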
Deploying CloudWatch for Monitoring and Alerts
Monitoring is crucial for maintaining the health of your AWS environment. AWS CloudWatch provides monitoring and logging services for AWS resources.
resource "aws_cloudwatch_log_group" "app_log_group" {
name = "/aws/app"
retention_in_days = 7
}
resource "aws_cloudwatch_metric_alarm" "cpu_alarm" {
alarm_name = "high-cpu-usage"
comparison_operator = "GreaterThanOrEqualToThreshold"
evaluation_periods = 2
metric_name = "CPUUtilization"
namespace = "AWS/EC2"
period = 120
statistic = "Average"
threshold = 80
dimensions = {
InstanceId = aws_instance.app_server_1.id
}
alarm_actions = [aws_sns_topic.alarm.arn]
}
resource "aws_sns_topic" "alarm" {
name = "alarm-topic"
}
resource "aws_sns_topic_subscription" "alarm_subscription" {
topic_arn = aws_sns_topic.alarm.arn
protocol = "email"
endpoint = "your-email@example.com"
}
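The alarm above watches a single instance. With an Auto Scaling Group in play, it is often more useful to alarm on the group's average CPU, so the alarm follows instances as they come and go:

```hcl
# Alarm on average CPU across all instances in the ASG.
resource "aws_cloudwatch_metric_alarm" "asg_cpu_alarm" {
  alarm_name          = "asg-high-cpu"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = 2
  metric_name         = "CPUUtilization"
  namespace           = "AWS/EC2"
  period              = 120
  statistic           = "Average"
  threshold           = 80

  dimensions = {
    AutoScalingGroupName = aws_autoscaling_group.app_asg.name
  }

  alarm_actions = [aws_sns_topic.alarm.arn]
}
```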
Conclusion
Deploying a high-availability full-stack architecture on AWS using Terraform involves multiple components working in harmony to ensure redundancy, scalability, and reliability. By following the steps outlined in this guide, you can set up a robust infrastructure that can handle varying loads and provide continuous service even in the face of failures.
By leveraging Terraform and AWS, you can build highly available infrastructures that are resilient, scalable, and easy to manage. The code examples provided here can be customized to suit specific requirements and further optimized to meet business needs. Happy deploying!
Written by Nile Bits