πŸ“ Understanding Kubernetes Ingress - From Services to Sophisticated Routing

Sprasad Pujari
4 min read

πŸš€ The Evolution of Kubernetes Networking

When Kubernetes 1.0 was released in July 2015, the Ingress feature wasn't part of the package. Despite this, developers and organizations were quick to adopt Kubernetes, relying primarily on Services for their networking needs. But as Kubernetes usage skyrocketed, users began to encounter limitations, especially when it came to advanced routing and load balancing.

πŸ”„ Kubernetes Services: The Starting Point

Initially, Kubernetes Services provided basic load balancing using a round-robin technique. Let's break this down with a simple example:

  • Imagine you have two pods running your application

  • 10 requests come into your service

  • Kubernetes would distribute these evenly:

    • 5 requests to Pod A

    • 5 requests to Pod B

While this worked for simple scenarios (a minimal sketch of such a Service follows below), it lacked the sophistication needed for complex, enterprise-level applications.
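For context, here's roughly what that setup looks like in YAML. This is a minimal sketch with hypothetical names; it assumes a Deployment whose two pods carry the label app: my-app and listen on port 8080:

apiVersion: v1
kind: Service
metadata:
  name: my-app          # hypothetical Service name
spec:
  selector:
    app: my-app         # matches the two pods from the example above
  ports:
  - port: 80            # port exposed by the Service
    targetPort: 8080    # assumed container port

Traffic sent to this Service is simply spread across whichever pods match the selector, with no control over how that distribution happens.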

🏒 Enterprise Needs vs. Kubernetes Services

Commercial and enterprise-level load balancers offered a plethora of features that Kubernetes Services couldn't match, such as:

  1. Sticky Sessions

  2. TLS Termination

  3. Path-based Routing

  4. Host-based Routing

  5. Ratio-based Load Balancing

Moreover, exposing applications to the outside world required using a LoadBalancer service type, which came with its own set of challenges (a minimal example follows the list below):

  • Managing static IP addresses (often incurring additional costs)

  • Cloud providers charging for each LoadBalancer service
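For illustration, exposing even a single application this way takes a manifest along these lines. This is a minimal sketch with hypothetical names; the key point is that every Service of type LoadBalancer typically provisions its own cloud load balancer and external IP, so the cost and management overhead grow with each one:

apiVersion: v1
kind: Service
metadata:
  name: my-app-public    # hypothetical name
spec:
  type: LoadBalancer     # asks the cloud provider for a dedicated external load balancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080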

πŸ’‘ Enter Kubernetes Ingress

To address these limitations, Kubernetes introduced the Ingress resource. This opened the door for various load balancer solutions to integrate with Kubernetes, including:

  • NGINX

  • F5

  • Ambassador

  • Traefik

  • HAProxy

πŸ”§ How Ingress Works

  1. The Kubernetes project defines the Ingress resource (the API object that describes routing rules)

  2. Load balancer vendors and open-source projects build Ingress controllers that implement that API

  3. DevOps engineers deploy the Ingress controller on the Kubernetes cluster (using Helm charts or YAML manifests)

  4. Developers create Ingress YAML resources to define routing rules

  5. The Ingress controller watches these resources and implements the specified routing logic, as the sketch below illustrates
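Steps 3 to 5 are tied together by an IngressClass: installing a controller (step 3) usually registers one, and each Ingress resource you write (step 4) can reference it so the right controller picks the rules up (step 5). Here's a minimal sketch, assuming the community NGINX Ingress controller; check your controller's documentation for the exact values:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx                        # the name Ingress resources refer to via ingressClassName
spec:
  controller: k8s.io/ingress-nginx   # identifies the controller implementation that handles this class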

🌐 Real-world Example: Multi-service Web Application

Imagine you're running an e-commerce platform with multiple microservices:

  • Product Catalog Service

  • User Authentication Service

  • Order Processing Service

Without Ingress, you'd need separate LoadBalancer services for each, potentially leading to high costs and complex management. With Ingress, you can:

  1. Deploy a single Ingress controller (e.g., NGINX)

  2. Create an Ingress resource like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ecommerce-ingress
spec:
  ingressClassName: nginx   # assumes the NGINX Ingress controller from step 1; match your controller's class
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /products
        pathType: Prefix
        backend:
          service:
            name: product-catalog
            port: 
              number: 80
      - path: /auth
        pathType: Prefix
        backend:
          service:
            name: user-auth
            port: 
              number: 80
      - path: /orders
        pathType: Prefix
        backend:
          service:
            name: order-processing
            port: 
              number: 80

This Ingress resource allows you to:

  • Use a single domain (shop.example.com)

  • Route traffic to different services based on the URL path

  • Potentially implement SSL/TLS termination at the Ingress level

πŸ†š LoadBalancer vs. Ingress

While both can expose services externally, Ingress offers several advantages:

  • Cost-effective: One Ingress can route to multiple services

  • Advanced routing: Supports path-based and host-based routing

  • SSL/TLS termination: Centralized management of certificates (see the sketch after this list)

  • Name-based virtual hosting: Host multiple domains on a single IP
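As an example of centralized TLS termination, the earlier ecommerce-ingress can be extended with a tls block. This is a minimal sketch assuming a certificate for shop.example.com has already been stored in a TLS Secret named shop-example-tls in the same namespace (the /auth and /orders paths from the earlier example are omitted for brevity):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ecommerce-ingress
spec:
  ingressClassName: nginx          # assumes the NGINX controller from earlier
  tls:
  - hosts:
    - shop.example.com
    secretName: shop-example-tls   # assumed Secret containing tls.crt and tls.key
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /products
        pathType: Prefix
        backend:
          service:
            name: product-catalog
            port:
              number: 80

Adding further host entries under rules is all it takes for name-based virtual hosting, so several domains can share the same Ingress controller and IP address.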

πŸŽ‰ Conclusion

Kubernetes Ingress has revolutionized how we handle external access to services within a cluster. It bridges the gap between basic Kubernetes Services and sophisticated enterprise load balancing needs, offering flexibility, cost-effectiveness, and powerful routing capabilities. As Kubernetes continues to evolve, Ingress remains a crucial component for managing traffic in modern, cloud-native applications.

Thank you for joining me on this journey through the world of cloud computing! Your interest and support mean a lot to me, and I'm excited to continue exploring this fascinating field together. Let's stay connected and keep learning and growing as we navigate the ever-evolving landscape of technology.

LinkedIn Profile: https://www.linkedin.com/in/prasad-g-743239154/

Project Details (GitHub): https://github.com/sprasadpujari/Kubernative_Projects/tree/main/Setup-kubernative-Cluster-Docker-Desktop/k8s-hello-world

Feel free to reach out to me directly at spujari.devops@gmail.com. I'm always open to hearing your thoughts and suggestions, as they help me improve and better cater to your needs. Let's keep moving forward and upward!

If you found this blog post helpful, please consider showing your support by giving it a round of applause πŸ‘πŸ‘πŸ‘. Your engagement not only boosts the visibility of the content, but it also lets other DevOps and Cloud Engineers know that it might be useful to them too. Thank you for your support! πŸ˜€

Thank you for reading and happy deploying! πŸš€

Best Regards,

Sprasad

