Kubernetes Services: How Pods Talk to Each Other—and to You

Services: The Silent Backbone of Kubernetes Networking
Before anything else, let’s address the name: Service.
It’s not just a formality—it’s a promise:
“No matter how many pods come or go, I’ll always be here, at the same place, ready to serve.”
In Kubernetes, services enable communication:
- Among internal app components (e.g., frontend → backend → database)
- From external clients (like your browser) to internal pods
But it’s deeper than that.
Why You Can’t Just Knock on a Pod’s Door
Let’s take a typical setup:
- Node IP (public): `15.207.114.2`
- Pod network CIDR (private): `10.244.0.0/16`
- Pod IP: `10.244.1.5`
- Your laptop: `192.168.29.22`
Now try this from your laptop:
```bash
curl http://10.244.1.5:80
```
🚫 Doesn’t work.
Because `10.244.x.x` is not routable from outside the cluster. It's like trying to visit an apartment deep inside a gated community without a main gate address.
But from within the same node? ✅ Works.
From another pod in the same cluster? ✅ Works.
From outside the cluster? ❌ Nope.
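You can verify this yourself. A quick sketch, using the example IPs from above: curl the pod IP from your laptop (it will hang and time out), then run a throwaway pod inside the cluster and curl from there.

```bash
# From your laptop: fails — 10.244.0.0/16 is not routable outside the cluster
curl --max-time 5 http://10.244.1.5:80

# From inside the cluster: works — launch a temporary pod and curl from it
kubectl run tmp --rm -it --restart=Never --image=curlimages/curl -- \
  curl http://10.244.1.5:80
```

The `--rm --restart=Never` flags make `kubectl run` create a one-off pod that is deleted as soon as the command exits.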
Why Not SSH and cURL from Inside?
Yes, you can SSH into the node and use `curl` to hit the pod IP. But that's like breaking into a data center to press a power button manually.
Technically possible. Logically stupid.
We need a proper mechanism for exposing pods to the right audiences safely.
⚠️ The Problem
You want to access a web server running inside a pod from your browser (outside the cluster), but pods have dynamic, private IPs.
And even internally, relying on pod IPs is fragile—pods can go down, restart, and come back with a new IP.
Enter: Kubernetes Services
A Service is a Kubernetes object. Like a Deployment or a Pod, you define it in YAML.
It abstracts a group of pods behind a stable virtual IP. It routes traffic to the right pod(s), no matter how many there are or where they are running.
Type 1: NodePort Service
Why “NodePort”? Because it opens a specific port on every node in the cluster.
That port forwards to your pod.
Imagine this:
```bash
curl http://<NodeIP>:<NodePort>
```
And Kubernetes takes care of the rest.
Sample NodePort YAML (With Full Comments)
```yaml
apiVersion: v1
kind: Service
metadata:
  name: webapp-service       # Name of the service
spec:
  type: NodePort             # Type of service: NodePort
  selector:                  # Selects pods with these labels
    app: webapp
  ports:
    - port: 80               # Port exposed *inside* the cluster
      targetPort: 8080       # Port on the pod that runs the app
      nodePort: 30080        # Port on the node that maps to targetPort (range: 30000-32767)
```
- If you omit `nodePort`, Kubernetes auto-assigns one in the valid range (30000–32767).
- If you omit `targetPort`, it defaults to `port`.
- But `port` is mandatory.
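Putting it together, a sketch of deploying and testing this service — assuming the manifest is saved as `webapp-service.yaml` and using the node's public IP from the earlier example:

```bash
kubectl apply -f webapp-service.yaml   # create the service
kubectl get svc webapp-service         # confirm the assigned NodePort (30080)

# From anywhere that can reach the node's public IP:
curl http://15.207.114.2:30080
```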
But Which Pod?
When hundreds of pods run on port 8080 (the `targetPort` we configured in the YAML above), how does the service know where to route?
Answer: the `selector`.
It matches pod labels. Any pod with the label `app: webapp` becomes a target.
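For the selector to match anything, the pods must actually carry that label. A minimal, hypothetical Deployment whose pods the service above would pick up (the image name is a placeholder — your real app should listen on 8080):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp            # the Deployment's own pod selector
  template:
    metadata:
      labels:
        app: webapp          # this label is what the Service matches on
    spec:
      containers:
        - name: webapp
          image: my-webapp:1.0     # placeholder image, listening on 8080
          ports:
            - containerPort: 8080
```

Notice that the Service's `selector` and the pod template's `labels` must agree exactly; if they drift apart, the service silently routes to nothing.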
📌 TL;DR for NodePort
“NodePort exposes your pod to the outside world via a static port on each node IP. It uses labels to forward traffic to matching pods.”
Type 2: ClusterIP Service (Default Service)
Why “ClusterIP”? Because it creates a virtual IP inside the cluster. No external access.
Great for service-to-service communication like:
- frontend → backend
- backend → database
Pods have dynamic IPs. ClusterIP gives a stable IP for access.
Sample ClusterIP YAML
```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  type: ClusterIP            # Default if you omit 'type'
  selector:
    app: backend
  ports:
    - port: 80               # Service port inside cluster
      targetPort: 8080       # Actual port on pod
```
Now, `frontend` can just:

```bash
curl http://backend-service:80
```
No need to know which pod, which IP, or which node.
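That name resolves because the cluster's internal DNS (CoreDNS) maps service names to their ClusterIPs. A sketch of how a frontend pod might consume it via an environment variable (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: frontend
  labels:
    app: frontend
spec:
  containers:
    - name: frontend
      image: my-frontend:1.0       # hypothetical image
      env:
        - name: BACKEND_URL
          # Short name works within the same namespace; the FQDN is
          # backend-service.default.svc.cluster.local
          value: "http://backend-service:80"
```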
📌 TL;DR for ClusterIP
“ClusterIP enables internal communication between services using a stable IP, hiding pod-level complexity.”
Type 3: LoadBalancer Service
Why “LoadBalancer”? Because it provisions an actual load balancer (cloud provider dependent) and routes traffic to pods.
Use case: Production environments on cloud.
Three Real-World Scenarios
| Scenario | Behavior |
| --- | --- |
| Single pod on a single node | Service routes all traffic to that pod |
| Multiple pods on one node | Round robin among pods |
| Multiple pods on multiple nodes | Still round robin, across nodes; kube-proxy handles routing internally |
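Behind the scenes, the service keeps a live list of matching pod `IP:port` pairs as an Endpoints object — that list is what kube-proxy balances across. You can inspect it directly (using the NodePort service from earlier as an example):

```bash
kubectl get endpoints webapp-service
kubectl describe svc webapp-service   # shows Endpoints alongside selector and ports
```

When a pod dies and is replaced, its new IP appears here automatically; nothing upstream needs to change.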
From the user’s perspective:
http://a1b2c3d4.us-east-1.elb.amazonaws.com
Just one URL. No matter how many pods. No matter which node. Seamless.
LoadBalancer YAML (Cloud)
```yaml
apiVersion: v1
kind: Service
metadata:
  name: prod-webapp
spec:
  type: LoadBalancer         # Tells K8s to provision an LB (if supported)
  selector:
    app: webapp
  ports:
    - port: 80               # Exposed externally
      targetPort: 8080       # Pod's internal port
```
Cloud provider (AWS, GCP, Azure) provisions an external IP/URL.
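Provisioning takes a moment on a cloud cluster. A sketch of watching for the external address (assuming the manifest is saved as `prod-webapp.yaml`; the output shape is illustrative):

```bash
kubectl apply -f prod-webapp.yaml
kubectl get svc prod-webapp --watch
# EXTERNAL-IP shows <pending> until the cloud load balancer is ready,
# then an IP or DNS name (e.g. an ELB hostname on AWS) appears.
```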
If You’re Not on Cloud
- Use VirtualBox + HAProxy/Nginx
- Manually configure the load balancer
- Painful, messy, error-prone
- Not scalable
📌 TL;DR for LoadBalancer
“LoadBalancer exposes your service to the internet using cloud provider’s native load balancer, abstracting pod and node details.”
Summary Table
| Type | Exposes To | Use Case | Requires Cloud |
| --- | --- | --- | --- |
| ClusterIP | Inside cluster | Internal service communication | No |
| NodePort | External via `IP:Port` | Dev/test from outside | No |
| LoadBalancer | External via URL | Prod, scalable access | Yes |
Final Words
Kubernetes Services are not magic. They are network abstractions that give you a stable, reliable way to reach ever-changing pods in a dynamic cluster.
If you understand:
✅ Why you can’t hit a pod directly
✅ What each service type offers
✅ How to define them precisely
Then you’re already better than 90% of devs using Kubernetes today.
Written by Vijay Belwal