Understanding Kubernetes Ingress - From Services to Sophisticated Routing
The Evolution of Kubernetes Networking
When Kubernetes 1.0 was released in July 2015, the Ingress feature wasn't part of the package (it only arrived as a beta API in v1.1 later that year). Despite this, developers and organizations were quick to adopt Kubernetes, relying primarily on Services for their networking needs. But as Kubernetes usage skyrocketed, users began to encounter limitations, especially when it came to advanced routing and load balancing.
Kubernetes Services: The Starting Point
Initially, Kubernetes Services provided basic load balancing using a round-robin technique. Let's break this down with a simple example:
Imagine you have two pods running your application:
10 requests come in to your Service
Kubernetes would distribute these evenly:
5 requests to Pod A
5 requests to Pod B
While this worked for simple scenarios, it lacked the sophistication needed for complex, enterprise-level applications.
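To make that starting point concrete, here is a minimal sketch of a Service manifest of the kind described above; the name, label, and ports are placeholders chosen purely for illustration:

apiVersion: v1
kind: Service
metadata:
  name: my-app          # hypothetical name for this example
spec:
  selector:
    app: my-app         # matches the label on the two pods in the example above
  ports:
  - port: 80            # port the Service exposes inside the cluster
    targetPort: 8080    # port the application container listens on

Traffic sent to this Service's cluster IP is spread across all pods whose labels match the selector, which is exactly the simple distribution described above.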
Enterprise Needs vs. Kubernetes Services
Commercial and enterprise-level load balancers offered a plethora of features that Kubernetes Services couldn't match, such as:
Sticky Sessions
TLS Termination
Path-based Routing
Host-based Routing
Ratio-based Load Balancing
Moreover, exposing applications to the outside world required using a LoadBalancer Service type (a minimal example follows this list), which came with its own set of challenges:
Managing static IP addresses (often incurring additional costs)
Cloud providers charging for each LoadBalancer service
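For reference, a minimal sketch of such a Service is shown below; every Service of this type typically provisions its own cloud load balancer and public IP, which is where the costs add up (the name and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app-public   # hypothetical name for this example
spec:
  type: LoadBalancer    # asks the cloud provider for a dedicated external load balancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080

Expose three or four applications this way and you are paying for three or four separate load balancers.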
Enter Kubernetes Ingress
To address these limitations, Kubernetes introduced the Ingress resource. This opened the door for various load balancer solutions to integrate with Kubernetes, including:
NGINX
F5
Ambassador
Traefik
HAProxy
How Ingress Works
Kubernetes defines the Ingress resource (an API object that describes routing rules)
Load balancer vendors implement Ingress controllers for their products
DevOps engineers deploy an Ingress controller on the Kubernetes cluster (using Helm charts or YAML manifests)
Developers create Ingress YAML resources to define routing rules
The Ingress controller watches these resources and implements the specified routing logic (a short sketch of how controller and Ingress are linked follows below)
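As a rough sketch of that last step: most controllers register an IngressClass, and each Ingress selects a controller via spec.ingressClassName. The example below assumes the community NGINX Ingress controller, which conventionally identifies itself as k8s.io/ingress-nginx; substitute the value for whichever controller you deploy:

apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: nginx
spec:
  controller: k8s.io/ingress-nginx   # the controller that will act on Ingresses of this class

An Ingress resource then opts in by setting ingressClassName: nginx under its spec.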
Real-world Example: Multi-service Web Application
Imagine you're running an e-commerce platform with multiple microservices:
Product Catalog Service
User Authentication Service
Order Processing Service
Without Ingress, you'd need separate LoadBalancer services for each, potentially leading to high costs and complex management. With Ingress, you can:
Deploy a single Ingress controller (e.g., NGINX)
Create an Ingress resource like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ecommerce-ingress
spec:
  rules:
  - host: shop.example.com
    http:
      paths:
      - path: /products
        pathType: Prefix
        backend:
          service:
            name: product-catalog
            port:
              number: 80
      - path: /auth
        pathType: Prefix
        backend:
          service:
            name: user-auth
            port:
              number: 80
      - path: /orders
        pathType: Prefix
        backend:
          service:
            name: order-processing
            port:
              number: 80
This Ingress resource allows you to:
Use a single domain (shop.example.com)
Route traffic to different services based on the URL path
Potentially implement SSL/TLS termination at the Ingress level (see the sketch below)
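As a sketch of that last point, TLS termination is enabled by adding a tls section under spec of the Ingress above, alongside the existing rules; this assumes a certificate and key have already been stored in a Kubernetes Secret (the Secret name below is a placeholder):

spec:
  tls:
  - hosts:
    - shop.example.com
    secretName: shop-example-tls   # hypothetical Secret holding the TLS certificate and key

With this in place, the Ingress controller terminates HTTPS at the edge and forwards plain HTTP to the backend services.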
LoadBalancer vs. Ingress
While both can expose services externally, Ingress offers several advantages:
Cost-effective: One Ingress can route to multiple services
Advanced routing: Supports path-based and host-based routing
SSL/TLS termination: Centralized management of certificates
Name-based virtual hosting: Host multiple domains on a single IP (illustrated below)
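To illustrate name-based virtual hosting, a single Ingress (and therefore a single external IP) can route by host header; the second hostname and the admin-console service below are hypothetical additions to the e-commerce example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: multi-host-ingress        # hypothetical name for this example
spec:
  rules:
  - host: shop.example.com        # storefront traffic
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: product-catalog
            port:
              number: 80
  - host: admin.example.com       # hypothetical admin console, served from the same IP
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: admin-console   # hypothetical service
            port:
              number: 80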
Conclusion
Kubernetes Ingress has revolutionized how we handle external access to services within a cluster. It bridges the gap between basic Kubernetes Services and sophisticated enterprise load balancing needs, offering flexibility, cost-effectiveness, and powerful routing capabilities. As Kubernetes continues to evolve, Ingress remains a crucial component for managing traffic in modern, cloud-native applications.
Thank you for joining me on this journey through the world of cloud computing! Your interest and support mean a lot to me, and I'm excited to continue exploring this fascinating field together. Let's stay connected and keep learning and growing as we navigate the ever-evolving landscape of technology.
LinkedIn Profile: https://www.linkedin.com/in/prasad-g-743239154/
Project Details: GitHub - https://github.com/sprasadpujari/Kubernative_Projects/tree/main/Setup-kubernative-Cluster-Docker-Desktop/k8s-hello-world
Feel free to reach out to me directly at spujari.devops@gmail.com. I'm always open to hearing your thoughts and suggestions, as they help me improve and better cater to your needs. Let's keep moving forward and upward!
If you found this blog post helpful, please consider showing your support by giving it a round of applause. Your engagement not only boosts the visibility of the content, but it also lets other DevOps and Cloud Engineers know that it might be useful to them too. Thank you for your support!
Thank you for reading and happy deploying!
Best Regards,
Sprasad