Day 72 of 90 Days of DevOps Challenge: All About Load Balancers


Yesterday, I explored two core components in EC2 instance management: Amazon Machine Images (AMI) and EBS Snapshots. These tools are essential for ensuring consistency, automating deployments, and backing up entire environments. I learned how AMIs help create identical instances rapidly, while snapshots serve as backups of EBS volumes for data restoration and disaster recovery.
Today, I’m diving into a critical piece of the high-availability puzzle: Load Balancers. In cloud environments, where scalability and uptime are crucial, load balancers help distribute network traffic efficiently across multiple servers, keeping our applications fast and resilient.
What is a Load Balancer?
A Load Balancer is a system that automatically distributes incoming network traffic across multiple servers (EC2 instances in AWS), ensuring no single server is overwhelmed.
Without load balancing:
One server may get all traffic → resulting in slowness or failure
Manual traffic management is required
High risk of downtime during spikes
With load balancing:
Traffic is evenly split among healthy servers
If a server goes down, traffic is routed to others
Automatic scaling becomes possible
Why Do We Need Load Balancers in DevOps?
In a DevOps-driven environment, where continuous delivery, fault tolerance, and scalability are vital, load balancers provide:
High Availability: If one server fails, others take over.
Horizontal Scaling: Add/remove instances easily.
Zero Downtime Deployments: Shift traffic between versions.
Health Monitoring: Send traffic only to healthy targets.
Global Reach: with proper routing and geo-balancing, requests can be served from the nearest region.
Types of Load Balancers
1. Application Load Balancer (ALB)
An Application Load Balancer (ALB) is an advanced Layer 7 load balancer provided by AWS (and used in other cloud platforms too), designed to route HTTP and HTTPS traffic based on the content of the request, like URL paths, hostnames, headers, query strings, and more.
Unlike a simple round-robin balancer, ALB offers content-based routing, which makes it ideal for microservices, web apps, and container-based workloads.
Features:
Routes based on HTTP/HTTPS, URL path, hostname, headers, cookies
Supports host-based and path-based routing
Ideal for microservices and containerized apps (e.g., ECS, EKS)
Native integration with WebSockets and Lambda
Works with Target Groups for fine-grained control
2. Network Load Balancer (NLB)
A Network Load Balancer works at Layer 4 of the OSI model (the transport layer) and is designed to handle millions of requests per second with ultra-low latency. It routes TCP, UDP, and TLS traffic based on IP protocol and port, not based on request content (like URLs or headers).
Features:
Handles TCP, UDP, TLS traffic
Ultra-low latency (handles millions of requests/sec)
Preserves source IP
Static IP support (or Elastic IPs)
Supports TLS termination
3. Gateway Load Balancer (GWLB)
A Gateway Load Balancer (GWLB) is a type of AWS load balancer designed to deploy, scale, and manage third-party virtual appliances (like firewalls, intrusion detection/prevention systems, deep packet inspection tools) transparently.
It operates at Layer 3/4 (network/transport layers) and routes all IP traffic, not just HTTP or TCP.
Features:
Deploys and manages third-party virtual appliances (e.g., firewalls, intrusion detection systems)
Uses GENEVE protocol for tunneling
Transparent to both client and appliance
Provides scalability, HA, and elasticity to appliances
4. Classic Load Balancer (CLB)
The Classic Load Balancer (CLB) is the first-generation load balancer offered by AWS. It can handle both Layer 4 (TCP) and Layer 7 (HTTP/HTTPS) traffic but lacks the advanced routing features offered by newer load balancers like ALB and NLB.
CLB is deprecated for most new use cases but still supported for backward compatibility and legacy applications.
Features:
Earlier generation of ELB
Basic round-robin load balancing across multiple EC2 instances
Sticky sessions via cookies
Does not support advanced routing (host/path-based)
Supports EC2-Classic and VPC-based networking
Handles HTTP, HTTPS, TCP, and SSL traffic
SSL termination supported
Cross-zone load balancing supported
Health checks on target instances
Core Components of a Load Balancer in AWS
1. Listeners
A listener is a process that checks for connection requests.
It defines the protocol (HTTP/HTTPS/TCP) and port (e.g., 80, 443, 8080) that the load balancer uses to receive requests.
Each load balancer must have at least one listener.
Example: ALB listener on HTTP port 80, NLB listener on TCP port 443
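To make this concrete, here's a minimal boto3 (AWS SDK for Python) sketch of creating an HTTP listener on port 80; the load balancer and target group ARNs are placeholders I made up, not real resources:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create an HTTP listener on port 80 that forwards all requests to one
# target group (both ARNs below are hypothetical placeholders).
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/my-alb/...",
    Protocol="HTTP",
    Port=80,
    DefaultActions=[
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/my-tg/...",
        }
    ],
)
```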
2. Target Groups
A target group routes requests to one or more registered targets, such as: EC2 instances, IP addresses, Lambda functions (ALB only).
For each target group, you can define health check settings and routing rules.
A single load balancer can route traffic to multiple target groups using listener rules.
Example: /api routes to a target group with API servers, while /images routes to a target group with static image servers (see the boto3 sketch below).
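As a rough sketch, a target group for the API servers could be created like this; the name and VPC ID are hypothetical placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Create a target group of EC2 instances for the /api path
# (name and VPC ID are hypothetical placeholders).
response = elbv2.create_target_group(
    Name="api-servers",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)
api_tg_arn = response["TargetGroups"][0]["TargetGroupArn"]
print("Target group ARN:", api_tg_arn)
```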
3. Targets
Targets are the actual resources that handle the traffic.
Targets can be: EC2 instances (with instance ID or private IP), Containers (hosted on EC2 or ECS), Lambda functions (ALB only)
Note: Targets must be registered in the target group and pass health checks to receive traffic.
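Here's roughly how registering targets might look with boto3 (instance IDs and the ARN are made up); describe_target_health then shows which of them are currently passing health checks:

```python
import boto3

elbv2 = boto3.client("elbv2")

TG_ARN = "arn:aws:elasticloadbalancing:...:targetgroup/api-servers/..."  # placeholder

# Register two EC2 instances (hypothetical IDs) with the target group;
# they only receive traffic once they pass health checks.
elbv2.register_targets(
    TargetGroupArn=TG_ARN,
    Targets=[
        {"Id": "i-0aaa1111bbb222ccc", "Port": 80},
        {"Id": "i-0ddd3333eee444fff", "Port": 80},
    ],
)

# Check the current health state of each registered target.
health = elbv2.describe_target_health(TargetGroupArn=TG_ARN)
for t in health["TargetHealthDescriptions"]:
    print(t["Target"]["Id"], t["TargetHealth"]["State"])
```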
4. Health Checks
Used to monitor the health of each registered target.
Only healthy targets receive traffic.
Configurable settings: Protocol (HTTP, HTTPS, TCP), Port, and Path (for ALB, e.g., /health)
Thresholds:
HealthyThreshold: number of consecutive successful checks before a target is marked healthy
UnhealthyThreshold: number of consecutive failed checks before a target is marked unhealthy
Interval: Time between checks
Timeout: Time to wait for a response
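These settings can also be tuned on an existing target group via boto3; in this sketch the ARN is a placeholder and the numbers are just example values:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Tune the health check on an existing target group (placeholder ARN).
elbv2.modify_target_group(
    TargetGroupArn="arn:aws:elasticloadbalancing:...:targetgroup/api-servers/...",
    HealthCheckProtocol="HTTP",
    HealthCheckPath="/health",
    HealthCheckIntervalSeconds=30,   # time between checks
    HealthCheckTimeoutSeconds=5,     # wait time for a response
    HealthyThresholdCount=3,         # successes before "healthy"
    UnhealthyThresholdCount=2,       # failures before "unhealthy"
)
```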
5. Listener Rules (ALB Only)
Listener rules determine how the load balancer routes traffic to target groups.
Rules consist of conditions (like path or host-based) and actions (forward, redirect, fixed-response).
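A path-based rule might look like this in boto3; the listener and target group ARNs are placeholders, and priority 10 is an arbitrary example:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Path-based rule: send /api/* requests to the API target group.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:...:listener/app/my-alb/.../...",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/api-servers/...",
        }
    ],
)
```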
6. Cross-Zone Load Balancing
Ensures that traffic is evenly distributed across all registered targets in all Availability Zones (AZs).
Helpful for high availability and better resource utilization.
ALB: Enabled by default and free
NLB: Can be enabled (additional cost)
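For an NLB, turning it on is a single attribute change; here's a boto3 sketch with a placeholder load balancer ARN:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Enable cross-zone load balancing on an NLB (off by default there).
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/net/my-nlb/...",
    Attributes=[{"Key": "load_balancing.cross_zone.enabled", "Value": "true"}],
)
```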
7. Sticky Sessions (Session Affinity)
Ensures that the same user/client is always routed to the same backend target.
Based on:
Application cookies (custom)
AWS-generated cookies (AWSALB)
Useful for stateful applications that store session data on local instances.
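Stickiness is enabled as a target group attribute; in this sketch the ARN is a placeholder and the one-day cookie duration (86400 seconds) is just an example value:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Enable duration-based stickiness (AWS-generated cookie) on a target group.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:...:targetgroup/api-servers/...",
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},
    ],
)
```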
8. SSL/TLS Certificates (For HTTPS listeners)
Used for secure communication.
You can attach an SSL certificate from AWS Certificate Manager (ACM) or import your own.
Supports SSL termination at the load balancer (decrypts traffic before forwarding to targets).
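An HTTPS listener with an ACM certificate might be created like this; the certificate, load balancer, and target group ARNs are all placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# HTTPS listener that terminates TLS at the load balancer using an ACM
# certificate, then forwards plain HTTP to the targets.
elbv2.create_listener(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/my-alb/...",
    Protocol="HTTPS",
    Port=443,
    Certificates=[{"CertificateArn": "arn:aws:acm:...:certificate/..."}],
    SslPolicy="ELBSecurityPolicy-TLS13-1-2-2021-06",
    DefaultActions=[
        {
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:...:targetgroup/api-servers/...",
        }
    ],
)
```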
9. Logging and Monitoring
Integrated with CloudWatch for:
Metrics (RequestCount, TargetResponseTime, HTTPCode_ELB_5XX, etc.)
Alarms
Can enable Access Logs to store detailed request information in S3 for analysis or debugging.
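Access logging is also just a set of load balancer attributes; in this sketch the ARN and bucket name are placeholders, and the S3 bucket would need a policy that allows ELB log delivery:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Enable access logs to an existing S3 bucket (placeholder names).
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn="arn:aws:elasticloadbalancing:...:loadbalancer/app/my-alb/...",
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "my-alb-access-logs"},
        {"Key": "access_logs.s3.prefix", "Value": "prod"},
    ],
)
```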
How to Create a Load Balancer in AWS
Creating an Application Load Balancer (ALB)
Open EC2 Console → Go to Load Balancers → Click “Create Load Balancer” → Choose Application Load Balancer.
Basic Configurations:
Name your ALB.
Choose internet-facing or internal.
Select IP type: IPv4 or Dualstack.
Listener: Start with HTTP (port 80).
Availability Zones: Select a VPC and at least two subnets in different AZs.
Security Group: Allow traffic on port 80 (HTTP) or 443 (HTTPS).
Create Target Group:
Target type: Instance/IP/Lambda.
Protocol: HTTP.
Register EC2 instances.
Health Check:
Protocol: HTTP.
Path: / or /health.
Set thresholds and intervals.
Listener Rules:
Add routing rules (e.g., path-based or host-based).
Link them to the right target group.
Review & Create the ALB.
Use the provided DNS name (e.g., myalb-1234.elb.amazonaws.com) to reach your app. You can later map it to a custom domain.
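To tie the console steps together, here's a rough boto3 equivalent of the same flow; every subnet, security group, VPC, and instance ID below is a placeholder you'd swap for your own values:

```python
import boto3

elbv2 = boto3.client("elbv2")

# 1. Create an internet-facing ALB in two subnets (placeholder IDs).
lb = elbv2.create_load_balancer(
    Name="my-alb",
    Subnets=["subnet-aaa111", "subnet-bbb222"],
    SecurityGroups=["sg-0123456789abcdef0"],
    Scheme="internet-facing",
    Type="application",
    IpAddressType="ipv4",
)
lb_arn = lb["LoadBalancers"][0]["LoadBalancerArn"]
dns_name = lb["LoadBalancers"][0]["DNSName"]

# 2. Create a target group with a /health check and register an instance.
tg = elbv2.create_target_group(
    Name="web-servers",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
    HealthCheckPath="/health",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]
elbv2.register_targets(TargetGroupArn=tg_arn, Targets=[{"Id": "i-0aaa1111bbb222ccc"}])

# 3. Add an HTTP listener on port 80 that forwards to the target group.
elbv2.create_listener(
    LoadBalancerArn=lb_arn,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)

print("Reach the app at:", dns_name)
```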
Final Thoughts
Understanding Load Balancers is a critical skill in DevOps. Whether it's building scalable architectures, ensuring zero downtime, or automating traffic routing, Load Balancers sit at the heart of cloud-native design.
Today’s session helped me understand not just how to set them up in AWS, but also why they’re essential for any high-performance, fault-tolerant, and production-ready environment.
Stay tuned for Day 73!!