Kubernetes Networking 101: Seeing the System Clearly

Manthan Parmar
5 min read

When I started working with Kubernetes, networking felt like a maze. Pods vanished, services didn’t respond, and Ingress rules seemed like unsolvable puzzles. Each failure taught me something new, revealing how traffic flows through Kubernetes.

Now, I want to share that clarity with you, breaking down Kubernetes networking into simple, connected components: CNI, Services, Ingress, Service Mesh, and Gateway API. This isn’t a dense manual. It’s a guide forged from my mistakes to light your way. Let’s dive in and explore the system step by step.

Set Up Your Playground

Kubernetes networking is about flow, like water finding its path. Before experimenting, set up a safe environment. Use Minikube to run a local Kubernetes cluster, and make sure you have kubectl installed to manage it.

minikube start

These tools let you experiment without risk. I’ll walk you through each component with clear code and prerequisites.


CNI: The Network Foundation

Every city needs roads. The Container Network Interface (CNI) builds Kubernetes’ network, assigning each pod an IP address and connecting them. Early on, I deployed a cluster where pods couldn’t communicate due to a Flannel CNI misconfiguration. Checking logs with this command revealed a routing error:

kubectl logs -n kube-system <flannel-pod-name>

That taught me to always verify the basics. CNI plugins like Flannel or Calico enable this networking. Per the CNCF 2024 Survey, Calico secures 29% of clusters, while Cilium’s eBPF powers 18%.

Prerequisite: Start Minikube with a CNI plugin:

minikube start --network-plugin=cni --cni=flannel

Verify CNI health:

kubectl get pods -n kube-system

If Pods can’t communicate, check the logs. The network must be stable before traffic can flow.
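
To watch the CNI at work, launch a throwaway Pod (the name and image here are my own choices, purely for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: net-test
spec:
  containers:
    - name: net-test
      image: busybox:1.36
      command: ["sleep", "3600"]

Once it's running, kubectl get pod net-test -o wide shows the IP address the CNI assigned, and kubectl exec net-test -- ping <another-pod-ip> confirms that pod-to-pod routing works.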


Services: Reliable Addresses

Pods are like houses, constantly shifting. Services act as the cluster’s address book, providing a stable name for Pods. I once set up a LoadBalancer service for external traffic but forgot to enable Minikube’s tunnel, so nothing worked. Using this command helped me spot the issue:

kubectl describe service <service-name>

Services come in four types:

• ClusterIP: The default type, used for internal cluster communication (68% of services). It creates a virtual IP accessible only within the cluster, ideal for pod-to-pod traffic. For example, a backend API might use ClusterIP to talk to a database.

• NodePort: Exposes the service on each node’s IP at a specific port in the 30000–32767 range (15% of services), useful for external access during development. I once used NodePort to test an app but learned it’s not ideal for production due to its limited port range.

• LoadBalancer: Exposes the service externally via a cloud provider’s load balancer, like AWS ELB (12% of services). It’s perfect for production apps needing scalable external access but requires a cloud environment or Minikube’s tunnel.

• ExternalName: Maps a service to an external DNS name without creating a local proxy, useful for integrating with external services like a third-party API (4% of services). No cluster IP is assigned, keeping it lightweight.
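
As a sketch, a NodePort variant looks like this (the service name and the nodePort value are my own picks from the allowed range):

apiVersion: v1
kind: Service
metadata:
  name: backend-nodeport
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080
  type: NodePort

You could then reach it at http://<node-ip>:30080.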

Prerequisite: Ensure Minikube is running. For LoadBalancer, enable the tunnel:

minikube tunnel

Create a ClusterIP Service with this YAML:

apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
  type: ClusterIP
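
For the Service to route anywhere, Pods matching its selector must exist. A minimal backing Deployment might look like this (the image is just a sample app that listens on 8080; swap in your own):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: gcr.io/google-samples/hello-app:1.0
          ports:
            - containerPort: 8080

Note that the Deployment's pod labels (app: backend) are what the Service's selector matches against.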

Test it from a temporary Pod (cluster DNS resolves the Service name; the curl image is just a convenient choice):

kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- curl http://backend

If it fails, double-check the selector in your service definition. It’s often the culprit.


Ingress: The Web Gateway

External traffic needs an entry point. Ingress serves as the gateway for web requests, routing them by URL. I once misconfigured an Ingress rule, sending traffic to the wrong service due to a path typo. This command helped me find the error:

kubectl describe ingress

Ingress relies on controllers like NGINX, used in 40% of clusters (CNCF 2024).

Prerequisite: Enable Minikube’s Ingress addon:

minikube addons enable ingress

Define an Ingress resource:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - host: app.example.local
      http:
        paths:
          - path: /frontend
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80

Test it:

curl -H "Host: app.example.local" http://$(minikube ip)/frontend

Run minikube ip to get the cluster IP. Alternatively, map app.example.local to that IP in /etc/hosts and curl http://app.example.local/frontend directly. Because the rule matches on the host header, precision is key to opening this gateway.
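
A single Ingress can also fan one host out across multiple services. This sketch adds a hypothetical /api path and api service alongside the frontend rule above:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  rules:
    - host: app.example.local
      http:
        paths:
          - path: /frontend
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 80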


Service Mesh: Intelligent Routing

As clusters grow, communication gets complex. A service mesh manages service-to-service traffic, adding features like encryption and traffic splitting. Istio, used in 15% of clusters (CNCF 2024), seemed daunting until I focused on one feature. I once routed 10% of traffic to a new version using Envoy’s sidecar proxies, and it worked seamlessly.

Prerequisite: Install Istio:

curl -L https://istio.io/downloadIstio | sh -
cd istio-<version>
./bin/istioctl install --set profile=minimal
kubectl label namespace default istio-injection=enabled

Try a traffic split:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: backend
spec:
  hosts:
    - backend
  http:
    - route:
        - destination:
            host: backend-v1
          weight: 90
        - destination:
            host: backend-v2
          weight: 10

Test it by sending repeated requests from a Pod inside the mesh; roughly one in ten should land on v2:

kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- sh -c 'for i in 1 2 3 4 5 6 7 8 9 10; do curl -s http://backend; echo; done'

Start with one feature, like traffic splitting, and let the service mesh guide your learning.
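
If both versions instead live behind a single backend Service and differ only by a label, the same split can be expressed with subsets. This sketch assumes Pods labeled version: v1 and version: v2 (my assumption, not from the setup above):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: backend
spec:
  host: backend
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2

The VirtualService destinations would then reference host: backend with subset: v1 and subset: v2 instead of two separate services.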


Gateway API: Future-Proof Networking

Kubernetes networking evolves, and the Gateway API is the next step beyond Ingress, handling HTTP, TCP, and more. I hesitated to try it, fearing complexity, but setting up Contour showed its power for tasks like database traffic. About 20% of clusters now use the Gateway API (Kubernetes SIG-Network 2025).

Prerequisite: Install Contour:

kubectl apply -f https://projectcontour.io/quickstart/contour.yaml

Define Gateway and HTTPRoute resources:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: my-gateway
spec:
  gatewayClassName: contour
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: my-route
spec:
  parentRefs:
    - name: my-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /frontend
      backendRefs:
        - name: frontend
          port: 80

Test it:

curl http://$(minikube ip)/frontend

Begin with HTTP routing, then explore TCP and other protocols as you gain confidence.
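
When you do move beyond HTTP, a TCPRoute follows the same pattern. This sketch assumes a TCP listener named tcp added to the Gateway and a database Service named db on port 5432 (both hypothetical), using the v1alpha2 API where TCPRoute currently lives:

apiVersion: gateway.networking.k8s.io/v1alpha2
kind: TCPRoute
metadata:
  name: db-route
spec:
  parentRefs:
    - name: my-gateway
      sectionName: tcp
  rules:
    - backendRefs:
        - name: db
          port: 5432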


Conclusion

Kubernetes networking is about enabling a smooth flow. CNI builds the foundation, Services provide reliable addresses, Ingress directs web traffic, a service mesh adds intelligent routing, and the Gateway API opens future-proof paths. My early struggles came from not seeing how these pieces connect. Start with Minikube, use kubectl describe to troubleshoot, and refer to the Kubernetes docs when stuck. Embrace errors as part of learning.

Keep experimenting!

