L3 Load Balancer for gRPC servers

Subham Singh

Ensuring efficient communication and scalability between microservices is critical in modern distributed systems. gRPC, a high-performance RPC framework, is widely used for its speed and low latency. However, as traffic to your gRPC servers grows, distributing it evenly across instances becomes challenging.

This is where a Layer 3 (L3) load balancer plays a vital role. Operating at the network layer, an L3 load balancer efficiently distributes incoming traffic across multiple gRPC servers based on IP routing rules. This ensures optimal resource utilization, improves fault tolerance, and reduces the risk of server overload.

In this article, we will explore the key concepts, architecture, and implementation steps involved in setting up an L3 load balancer for gRPC servers, enabling your system to handle traffic with higher reliability and performance.

Why a Layer 3 load balancer?

  • Compared to L4 or L7, L3 load balancing is simpler to implement and requires fewer resources. Since it does not analyze transport protocols (e.g., TCP, UDP) or application protocols (e.g., HTTP), it can handle large volumes of traffic efficiently with minimal processing overhead.

  • L3 load balancers operate at the IP routing level, making decisions based on source and destination IP addresses. This approach avoids the overhead of inspecting transport-level (L4) or application-level (L7) data, resulting in lower latency and faster traffic routing.

  • Since L3 load balancers operate at the network layer, they scale well for large, distributed systems where fine-grained decisions (L4 or L7) are unnecessary. They can effectively distribute traffic to backend servers based on network topology and IP routing rules.

When L4 or L7 May Be a Better Choice

  • L4 (Transport Layer): L4 is more suitable for session-aware routing or for balancing traffic based on TCP/UDP port numbers.

  • L7 (Application Layer): L7 load balancers are essential if your application requires advanced features like URL-based routing, content inspection, or application-level security.

In the context of gRPC servers, where performance and low latency are key priorities, L3 load balancing is often the ideal choice unless specific transport- or application-layer features are required.

Steps to set up IPVS and Keepalived

  • Install keepalived and ipvsadm
# For Ubuntu/Debian
sudo apt install keepalived ipvsadm

# For CentOS/RHEL
sudo yum install keepalived ipvsadm
  • Verify the installation by running
# Check Keepalived version
keepalived -v

# Check if IPVS is working
sudo ipvsadm -Ln
  • Copy and paste this configuration into /etc/keepalived/keepalived.conf:
global_defs {
    router_id LVS_MAIN
    enable_script_security
}

virtual_server 192.168.122.1 8080 {
    delay_loop 6            # health-check interval in seconds
    lb_algo rr              # round-robin scheduling
    lb_kind DR              # Direct Routing mode
    protocol TCP
    persistence_timeout 60  # keep a client on the same real server for 60s

    real_server 172.22.35.1 8080 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
            connect_port 8080
        }
    }

    real_server 172.22.35.2 8080 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            retry 3
            delay_before_retry 3
            connect_port 8080
        }
    }
}

Assuming you have two gRPC servers running on 172.22.35.1 and 172.22.35.2, the IP in the virtual_server block is the Virtual IP (VIP), not the physical IP of the current Linux machine; this is the address we will give to clients.

Key details:

  • Load balancing on IP 192.168.122.1 port 8080

  • Round-robin (rr) scheduling

  • Persistence timeout of 60 seconds

  • Two real servers: 172.22.35.1 and 172.22.35.2

  • Direct Routing (DR) mode

  • Equal weight (1) for both servers
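
For reference, the IPVS rules that Keepalived creates from this configuration correspond roughly to the following manual ipvsadm commands (shown purely as an illustration of what Keepalived manages for you; with Keepalived running you should not add them by hand):

# Add the virtual service: TCP on the VIP, round-robin scheduling, 60s persistence
sudo ipvsadm -A -t 192.168.122.1:8080 -s rr -p 60

# Add both real servers in Direct Routing mode (-g) with weight 1
sudo ipvsadm -a -t 192.168.122.1:8080 -r 172.22.35.1:8080 -g -w 1
sudo ipvsadm -a -t 192.168.122.1:8080 -r 172.22.35.2:8080 -g -w 1

The benefit of letting Keepalived manage these entries is that its TCP_CHECK health checks remove a real server from the table when it stops answering on port 8080 and add it back when it recovers.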

After saving this file, you need to start the keepalived service.

# Start Keepalived
sudo systemctl start keepalived

# Check Keepalived status
sudo systemctl status keepalived
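
If you want Keepalived to come back automatically after a reboot, you can also enable the service:

# Start Keepalived on every boot
sudo systemctl enable keepalived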

Starting Keepalived creates the corresponding entries in the IPVS table, which you can verify with:

# Check the IPVS table
sudo ipvsadm -Ln

Output

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.122.1:8080 rr persistent 60
  -> 172.22.35.1:8080             DR     1      0          0
  -> 172.22.35.2:8080             DR     1      0          0
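
If you want to see which real server each client connection has been mapped to (handy for confirming the 60-second persistence setting), you can also inspect the IPVS connection table:

# Show current IPVS connection entries
sudo ipvsadm -Lnc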

First, check that other machines can reach the load balancer's real IP, and then check the Virtual IP. If the VIP is not reachable, you may need to do the following on each real server (a combined startup script is sketched after the list):

  1. Add the VIP 192.168.122.1 as a loopback alias on the real server:
ip addr add 192.168.122.1/32 dev lo
  2. Prevent the real server from responding to ARP requests for the VIP:
echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
  3. (Optional) Add these configurations to /etc/sysctl.conf to make them persistent across reboots:
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
  4. Apply the changes:
sudo sysctl -p
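
Note that step 3 makes the ARP settings persistent, but the loopback alias from step 1 is still lost on reboot. One option is to wrap the per-boot work in a small script and run it on each real server at startup. The following is only a sketch, assuming the VIP is 192.168.122.1:

#!/bin/bash
# Sketch: prepare a real server for IPVS Direct Routing (run as root on each real server)
set -e

VIP=192.168.122.1

# Accept traffic addressed to the VIP without advertising it on the network
ip addr add "${VIP}/32" dev lo 2>/dev/null || true   # ignore "address already exists"

# Never answer ARP requests for the VIP from this host
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2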

Testing the Load Balancer

Provide the gRPC client with the Virtual IP (192.168.122.1) and port (8080). Verify that traffic is distributed across the real servers:

  1. Test connectivity to the real server IPs and the Virtual IP (a quick grpcurl check is sketched after this list).

  2. Monitor traffic distribution using IPVS:

sudo ipvsadm -L --stats
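
To confirm that gRPC traffic really flows through the VIP, a quick smoke test with grpcurl is one option. This assumes grpcurl is installed and that the demo servers have gRPC server reflection enabled (otherwise, point grpcurl at the service's .proto files instead):

# List the gRPC services reachable through the VIP (plaintext, i.e. no TLS)
grpcurl -plaintext 192.168.122.1:8080 list

# Run it a few times from different client machines and watch the
# ActiveConn / InActConn counters change in `sudo ipvsadm -Ln`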

Conclusion

Implementing an L3 load balancer with IPVS and Keepalived enables efficient traffic distribution for gRPC servers, ensuring high performance and reliability. This setup is particularly well-suited for systems prioritizing low latency and scalability. With minimal processing overhead and straightforward configuration, L3 load balancing is a robust solution for modern distributed applications.

gRPC server-client demo source code: https://github.com/subham-proj/gRPC-IPVS
