Mastering Azure Kubernetes Service (AKS): A Complete Beginner-to-Expert Guide Part 2

In this blog, we’ll explore how network communication flows within AKS, from communication inside a Pod all the way to how services are accessed from the internet.

1. Container-to-Container Communication (Within a Pod)

How It Works

In Kubernetes, a Pod is the smallest deployable unit and can run one or more containers. When multiple containers run within the same pod, they share the same network namespace. This means:

  • They share the same IP address.

  • They share localhost (127.0.0.1).

  • They can communicate over inter-process communication (IPC) or directly via the filesystem if volumes are shared.

Communication Method

  • Containers communicate via localhost and the assigned port of each container.

  • Example: A sidecar container logging data from a main application container via localhost:8080.

Use Case: Sidecar Pattern

A common example is a sidecar container used for logging, monitoring, or proxying requests. Both containers talk to each other over the loopback interface, with no need for external networking.
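As a sketch, a Pod with a main container and a logging sidecar might look like this (the names, images, and port 8080 are illustrative, not prescriptive):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  containers:
    - name: main-app
      image: busybox:1.36
      # serve a trivial HTTP endpoint on port 8080
      command: ["sh", "-c", "mkdir -p /www && echo ok > /www/index.html && httpd -f -p 8080 -h /www"]
    - name: log-sidecar
      image: busybox:1.36
      # same network namespace: the sidecar reaches the main container via localhost
      command: ["sh", "-c", "while true; do wget -qO- http://localhost:8080/; sleep 10; done"]
```

Note that the sidecar never needs the Pod's IP: because both containers share one network namespace, `localhost:8080` is enough.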

2. Pod-to-Pod Communication

IP Address Allocation

Every pod in AKS gets a unique IP address, either via Azure CNI (recommended) or kubenet (basic). Each pod communicates directly via its pod IP.

  • Azure CNI: Pods get real VNet IPs.

  • kubenet: Pods get IPs from a bridge network, with NAT for external access.

Intra-node Communication (Same Node)

  • Pods on the same node use direct IP routing via the node's virtual network bridge.

  • Minimal latency; direct and efficient.

Inter-node Communication (Cross Nodes)

  • AKS nodes are in an Azure Virtual Network (VNet).

  • Pod-to-pod communication across nodes uses VNet routing.

  • With kubenet, AKS configures User Defined Routes (UDRs) so pod CIDRs are routable between nodes; with Azure CNI, pod IPs are native VNet addresses and route directly.

Role of kube-proxy

  • Programs iptables or IPVS rules on each node to route Service traffic to pod IPs.

  • Ensures traffic is properly directed regardless of node location.

Network Policies

  • Define who can talk to whom at the pod level.

  • Control ingress and egress using Kubernetes NetworkPolicy objects.

  • AKS supports both Azure Network Policy Manager and Calico as network policy engines.
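As a sketch, a NetworkPolicy that only allows frontend pods to reach backend pods on port 8080 might look like this (the labels and port are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend        # the policy applies to backend pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # only frontend pods may connect
      ports:
        - protocol: TCP
          port: 8080
```

Keep in mind that NetworkPolicy objects are only enforced when the cluster was created with a network policy engine enabled (for example, via `--network-policy azure` or `--network-policy calico` on `az aks create`); otherwise they are silently ignored.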


3. Pod-to-Service Communication

Services in Kubernetes

A Service is an abstraction that defines a stable endpoint (ClusterIP) for accessing pods.

Types of Services:

  • ClusterIP (default): Internal-only access.

  • NodePort: Exposes service on a static port on each node.

  • LoadBalancer: Provisions an Azure Load Balancer with a public/private IP.

  • Headless: No ClusterIP; direct access to pod IPs via DNS.
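As a sketch, a minimal ClusterIP Service fronting backend pods might look like this (the `backend-service` name and `app: backend` label are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: ClusterIP          # default; internal-only
  selector:
    app: backend           # pods labeled app=backend back this Service
  ports:
    - port: 80             # stable Service port clients connect to
      targetPort: 8080     # container port on the pods
```

Clients in the same namespace can now simply call `backend-service:80`; Kubernetes keeps the endpoint list up to date as pods come and go.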

DNS Resolution

  • AKS runs CoreDNS for name resolution.

  • Pods access services using DNS names like my-service.default.svc.cluster.local.

  • CoreDNS resolves this to the ClusterIP, and traffic is load-balanced to a pod.

kube-proxy Mechanism

  • Maintains routing rules to forward traffic from service IP to one of the pod IPs behind it.

  • Load-balances across pods (random selection in the default iptables mode; IPVS mode adds round-robin and other algorithms).

Example

An application pod querying a backend service via its DNS name doesn’t need to know pod IPs. It simply makes a request to backend-service:80 and kube-proxy routes it to a healthy backend pod.

4. Internet-to-Service Communication

LoadBalancer Service

When a service is exposed as LoadBalancer, AKS provisions an Azure Load Balancer.

  • Assigns a public IP.

  • Routes external traffic to a NodePort on AKS nodes.

  • kube-proxy forwards the request to the appropriate pod.

Ingress Controller (Advanced Traffic Management)

To manage multiple services under a single IP, use an Ingress Controller.

  • Popular options: NGINX Ingress, Azure Application Gateway Ingress Controller (AGIC).

  • Supports path-based and host-based routing.

  • Simplifies SSL termination and HTTP routing.
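As a sketch, an Ingress resource combining host-based and path-based routing might look like this (hostnames and Service names are hypothetical, and it assumes an NGINX ingress controller is installed):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  ingressClassName: nginx            # assumes the NGINX ingress controller
  rules:
    - host: shop.example.com         # host-based routing
      http:
        paths:
          - path: /api               # path-based routing
            pathType: Prefix
            backend:
              service:
                name: api-service    # hypothetical backing Services
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service
                port:
                  number: 80
```

With AGIC instead, `ingressClassName` would typically be `azure-application-gateway`, and TLS termination and WAF rules can be handled at the Application Gateway itself.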

Security Measures

  • Protect external endpoints using Azure Network Security Groups (NSGs).

  • Apply Web Application Firewall (WAF) with Application Gateway.

  • Use Azure Firewall or Private Link for restricted access.

Example

Expose a web application to the internet via a LoadBalancer service:

  1. Define service as type LoadBalancer.

  2. Azure assigns a public IP.

  3. Traffic hits Azure Load Balancer → NodePort → kube-proxy → Pod.
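The steps above can be sketched as a single manifest (the `web-public` name and `app: web` label are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-public
  annotations:
    # uncomment for an internal (private) load balancer instead of a public one:
    # service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
    - port: 80         # port on the Azure Load Balancer
      targetPort: 8080 # container port on the pods
```

After applying this, `kubectl get service web-public` shows the public IP Azure assigned under EXTERNAL-IP once provisioning completes.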

Container networking

Kubernetes leverages Container Networking Interface (CNI) plugins to handle networking within its clusters. These CNIs are tasked with assigning IP addresses to pods, managing network routing between pods, and handling Kubernetes Service routing, among other functions.

Azure Kubernetes Service (AKS) offers a variety of CNI plugins to suit different networking needs in your clusters:

  • Azure CNI overlay (overlay network model): Private and scalable

  • Azure CNI Node Subnet (flat network model): Use it when pods need to be directly reachable from other Azure resources in the VNet.

  • kubenet: Legacy basic networking option for Azure Kubernetes Service (AKS). Not recommended for new clusters.
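The CNI plugin is chosen at cluster creation time. As a hedged sketch (resource group and cluster names are hypothetical), creating a cluster with Azure CNI Overlay might look like:

```bash
# Hypothetical names (myRG, myAKS); the pod CIDR is private to the cluster.
az aks create \
  --resource-group myRG \
  --name myAKS \
  --network-plugin azure \
  --network-plugin-mode overlay \
  --pod-cidr 192.168.0.0/16
```

Omitting `--network-plugin-mode overlay` gives the flat (node subnet) Azure CNI configuration instead.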

Azure Virtual Network models

Choosing a CNI plugin for your AKS cluster depends on the networking model that best fits your needs. Each model has its own advantages and disadvantages to consider when planning your AKS cluster.

AKS supports two main networking models, overlay and flat:

  • Overlay Network Model: This is the most common model in Kubernetes. Pods receive IP addresses from a private, logically separate CIDR within the Azure virtual network subnet where AKS nodes are deployed. This model offers simpler and improved scalability compared to the flat network model.

  • Flat Network Model: In this model, pods are assigned IP addresses from the same subnet as the AKS nodes within the Azure virtual network. Traffic leaving the cluster isn’t SNAT’d, and the pod IP address is directly exposed to the destination. This model is useful for scenarios where exposing pod IP addresses to external services is necessary.

Overlay model

Overlay networking is the most common model in Kubernetes. Pods receive IP addresses from a private, separate CIDR within the Azure VNet subnet where AKS nodes are deployed, offering simpler and enhanced scalability compared to the flat network model.

Pods communicate directly, with outbound traffic SNAT’d to the node’s IP address and inbound traffic routed through a service like a load balancer. This hides pod IP addresses behind node IPs, reducing the number of VNet IP addresses needed.

Choosing a networking model

Azure CNI offers two IP addressing options for pods: the traditional configuration that assigns VNet IPs to pods, and overlay networking. Which option to use for your AKS cluster is a balance between flexibility and advanced configuration needs. The following considerations help outline when each network model is most appropriate.

Use Overlay networking when:

  • You would like to scale to a large number of pods, but have limited IP address space in your VNet.

  • Most of the pod communication is within the cluster.

  • You don’t need advanced AKS features, such as virtual nodes.

Use the traditional VNet option when:

  • You have available IP address space.

  • Most of the pod communication is to resources outside of the cluster.

  • Resources outside the cluster need to reach pods directly.

  • You need AKS advanced features, such as virtual nodes.

IP address planning

Your Azure VNet address space must be large enough to accommodate your cluster; how much space you need depends on whether you’re using an overlay network or a flat network.

Overlay networks

Azure CNI Overlay networking simplifies IP management by assigning pod IPs from a separate, private CIDR range, not the virtual network (VNet) subnet. This means your VNet subnet can be smaller, as it only needs to accommodate node IPs. However, you must carefully plan the private CIDR range to ensure sufficient IP addresses for your pods, considering future scaling. Each node gets a /24 subnet for pods, so the overall overlay network subnet must accommodate the total number of nodes and their associated pod IPs.
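As a rough illustration of that arithmetic, here is a hypothetical sizing helper. It assumes only what the paragraph above states: each node is handed a /24 (256 addresses) out of the private pod CIDR, so the pod CIDR must contain at least one /24 block per node.

```python
import math

def overlay_pod_cidr_prefix(max_nodes: int) -> int:
    """Smallest pod CIDR prefix whose /24-per-node blocks cover max_nodes nodes.

    Azure CNI Overlay hands each node a /24 (256 addresses) out of the
    private pod CIDR, so the CIDR must contain at least max_nodes /24 blocks.
    """
    # A /p CIDR contains 2**(24 - p) blocks of size /24.
    blocks_needed = max(1, max_nodes)
    return 24 - math.ceil(math.log2(blocks_needed))

# 300 nodes need 300 /24 blocks; the next power of two is 512, i.e. a /15.
print(overlay_pod_cidr_prefix(300))   # 15
```

The point of the exercise: the VNet subnet only has to hold the node IPs, but the private pod CIDR grows quickly with node count, so size it for future scale up front (it cannot be changed later without recreating the cluster).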

Flat networks

In Azure CNI Pod Subnet (a type of flat network), both nodes and pods receive IP addresses directly from your virtual network (VNet). This necessitates a larger VNet subnet compared to overlay networks. To accommodate this, you must meticulously plan for the maximum number of nodes and pods your cluster will require. Additionally, because nodes and pods utilize separate subnets within your VNet, you must plan and allocate IP ranges for both independently.
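The same back-of-the-envelope arithmetic for the Pod Subnet model might look like this. The helper is hypothetical; it assumes Azure's rule that 5 addresses are reserved in every subnet, and it sizes the node and pod subnets independently, as described above.

```python
import math

def subnet_prefix(ips_needed: int) -> int:
    """Smallest prefix length whose subnet holds ips_needed addresses,
    on top of the 5 addresses Azure reserves in every subnet."""
    return 32 - math.ceil(math.log2(ips_needed + 5))

def plan_pod_subnet_cluster(nodes: int, max_pods_per_node: int) -> dict:
    """Size the node and pod subnets independently (hypothetical helper)."""
    return {
        "node_subnet_prefix": subnet_prefix(nodes),
        "pod_subnet_prefix": subnet_prefix(nodes * max_pods_per_node),
    }

# 50 nodes at 30 pods each: nodes fit in a /26, pods need a /21.
print(plan_pod_subnet_cluster(50, 30))
```

Because both ranges are carved out of real VNet address space, under-sizing either one caps how far the cluster can scale.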

Kubernetes Services

Kubernetes Services provide stable network access to groups of pods using a consistent IP address or DNS name and port. This simplifies application exposure within or outside the cluster, eliminating the need for manual pod-level network management.

There are several types of Services, each suited for different use cases:

  • ClusterIP: Exposes the Service on an internal IP within the cluster. This is the default type and is used for communication between services within the cluster.

  • NodePort: Exposes the Service on each node’s IP at a static port. This allows external traffic to access the Service.

  • LoadBalancer: Exposes the Service externally using a cloud provider’s load balancer.

  • ExternalName: Maps the Service to the DNS name in the externalName field (e.g., my-app.example.com). CoreDNS answers lookups for the Service with a CNAME record; no proxying or port mapping is set up.
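Since ExternalName is the one type not shown earlier, here is a minimal sketch (the Service name and external hostname are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: external-db
spec:
  type: ExternalName
  externalName: my-app.example.com   # resolved as a CNAME by CoreDNS
```

In-cluster clients resolve `external-db` and connect directly to `my-app.example.com`; no traffic flows through kube-proxy for this Service type.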

Key Points

  • Container Networking: Enables communication between containers, pods, and external networks within a Kubernetes environment.

  • Container Communication within a Pod (container-to-container): Containers within a pod share the same network namespace, allowing direct communication via localhost.

  • Pod Communication (pod-to-pod): Pods communicate with each other using their IP addresses, facilitated by the cluster’s network fabric.

  • Pod-to-Service Networking: Kubernetes Services provide a stable virtual IP for pods to access other pods, abstracting underlying pod IPs.

  • Internet-to-Service Networking: Ingress controllers or LoadBalancer Services expose Kubernetes Services to the internet, routing external traffic to pods.

  • Azure CNI (Flat): Pods get IPs directly from the VNet subnet, offering direct VNet connectivity but potentially exhausting IP space.

  • Azure CNI Overlay: Pods use a separate, internal network with NAT for external access, optimizing IP usage and scalability.

  • Kubernetes Services: Provide a stable network endpoint for accessing pods, abstracting away individual pod IPs for reliable communication.


Written by

Mostafa Elkattan

Multi-Cloud & AI Architect with 18+ years of experience in cloud solution architecture (AWS, Google, Azure), DevOps, and disaster recovery. At the forefront of driving cloud innovation, from architecting scalable infrastructures to optimizing them, delivering solutions with a great customer experience.