Kubernetes Networking: Mastering Services, Ingress, Network Policies, DNS, and CNI Plugins

Rajat Chauhan

Kubernetes networking is a foundational aspect that ensures seamless communication between the various components within a cluster. It encompasses a range of concepts and tools that manage how pods communicate with each other, how traffic is routed, and how policies are enforced to secure and manage these communications. This blog will guide you through essential Kubernetes networking concepts, including Services, Ingress, Network Policies, DNS, and Container Network Interface (CNI) plugins.

1. Services in Kubernetes

What are Services?

In Kubernetes, a Service is an abstraction that defines a logical set of pods and a policy by which to access them. Services enable the exposure of a group of pods through a single IP address and DNS name, ensuring stable network access even when the underlying pods change dynamically. This abstraction allows Kubernetes to decouple the service consumers from the actual pods providing the service, making it easier to manage and scale applications.

When you create a Service, Kubernetes automatically assigns it a stable IP address and DNS name. This stability is crucial because the IP addresses of the pods themselves can change over time due to scaling, updates, or failures. The Service acts as a load balancer, distributing traffic across the pods, ensuring that consumers can always access the service regardless of the state of the individual pods.

Key Features of Services:

  • Stable Networking Identity: Services provide a consistent IP address and DNS name for accessing a group of pods, even as the pods themselves are created, destroyed, or rescheduled.

  • Load Balancing: Kubernetes Services can automatically load balance traffic across multiple pods, ensuring high availability and performance.

  • Service Discovery: Services are registered with Kubernetes DNS, enabling pods to discover and communicate with each other using DNS names instead of hardcoding IP addresses.

  • Decoupling: By abstracting the underlying pods, Services enable you to manage applications more flexibly, as consumers of the service do not need to be aware of the changes happening at the pod level.

Types of Services:

Kubernetes provides different types of Services to cater to various networking needs within and outside the cluster:

  • ClusterIP (default):

    • Description: This is the default type of Service in Kubernetes. It exposes the service on a cluster-internal IP, making it accessible only within the cluster. ClusterIP is ideal for internal communication between microservices within the same Kubernetes cluster.

    • Use Case: Internal microservices communication. For example, a backend service that should be reachable only by other services within the same cluster.

    • Example: A ClusterIP service would be used when your pods need to communicate internally within the cluster, like accessing a database or a microservice that should not be exposed externally.

        apiVersion: v1
        kind: Service
        metadata:
          name: internal-service
        spec:
          type: ClusterIP
          selector:
            app: my-internal-app
          ports:
          - protocol: TCP
            port: 80
            targetPort: 8080
      
  • NodePort:

    • Description: The NodePort service type exposes the service on each node's IP at a static port (between 30000 and 32767). This allows external traffic to access the service by requesting <NodeIP>:<NodePort>. NodePort is useful for simple external access and for testing purposes.

    • Use Case: Direct external access to a service from outside the cluster. For example, when you want to expose a service for testing from your local machine or from outside the Kubernetes cluster.

    • Example: A NodePort service might be used to temporarily expose an application to external users without setting up a load balancer.

        apiVersion: v1
        kind: Service
        metadata:
          name: external-service
        spec:
          type: NodePort
          selector:
            app: my-external-app
          ports:
          - protocol: TCP
            port: 80
            targetPort: 8080
            nodePort: 30007
      
  • LoadBalancer:

    • Description: The LoadBalancer service type automatically provisions an external load balancer from the cloud provider (e.g., AWS, GCP, Azure) to expose the service outside the cluster. This service type is typically used in cloud environments where you must expose an application to the internet or an external network.

    • Use Case: When you need to provide a publicly accessible endpoint for your service, such as a web application or an API that clients access over the internet.

    • Example: A LoadBalancer service would be used to expose a production-grade web application to users, with traffic distributed across multiple pods for high availability.

        apiVersion: v1
        kind: Service
        metadata:
          name: web-service
        spec:
          type: LoadBalancer
          selector:
            app: my-web-app
          ports:
          - protocol: TCP
            port: 80
            targetPort: 8080
      
  • ExternalName:

    • Description: The ExternalName service type maps a service to a DNS name, allowing Kubernetes to return a CNAME record with the external DNS name. This is useful when you need to access an external service (outside the Kubernetes cluster) using a Kubernetes DNS name.

    • Use Case: Redirecting traffic from within the cluster to an external service, such as an external database or third-party API, using a consistent name within the cluster.

    • Example: An ExternalName service might be used to provide a Kubernetes DNS name for an external database like mysql.example.com, so that your pods can use a consistent name for access.

        apiVersion: v1
        kind: Service
        metadata:
          name: external-db
        spec:
          type: ExternalName
          externalName: mysql.example.com
      

Benefits of Using Services in Kubernetes:

  • Scalability: Services enable horizontal scaling of applications by seamlessly adding or removing pods without impacting consumers.

  • High Availability: Services ensure high availability by distributing traffic across multiple pods and automatically handling pod failures.

  • Simplified Configuration: By abstracting the underlying pods, Services simplify the configuration and management of network traffic within a Kubernetes cluster.

  • Security: Services can be combined with Network Policies to enforce fine-grained security controls over which pods can communicate with each other.

Services in Kubernetes are essential for building robust, scalable, and maintainable applications, making it easier to manage networking and communication between various components of your application.
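
To see this in practice, the manifests above can be applied and inspected with kubectl. This is a minimal sketch; the file name is an assumption, and the exact output depends on your cluster:

# Create the ClusterIP Service defined earlier (file name assumed)
kubectl apply -f internal-service.yaml

# The Service gets a stable cluster-internal IP
kubectl get svc internal-service

# The Endpoints object lists the pod IPs the Service is currently load balancing across
kubectl get endpoints internal-service

# For the NodePort example, the application is reachable on any node at the static port
curl http://<NodeIP>:30007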

2. Ingress in Kubernetes

What is Ingress?

Ingress is an API object that manages external access to services within a Kubernetes cluster, typically HTTP or HTTPS traffic. Unlike a Service, which exposes a single group of pods behind a single IP, Ingress lets you define a set of rules for routing external HTTP(S) traffic to specific services within your cluster, based on hostnames, URL paths, and other criteria.

Why Use Ingress?

  • Load Balancing: Ingress can distribute traffic to multiple backend services based on the rules defined.

  • SSL Termination: Ingress can handle SSL/TLS termination, simplifying the process of securing your application.

  • Name-Based Virtual Hosting: Ingress allows routing based on the hostname, enabling multiple services to be exposed on the same IP address.

Key Components:

  • Ingress Controller:

    • Role: The Ingress Controller is responsible for fulfilling the Ingress API object by routing traffic according to the Ingress rules. It typically runs as a pod within the cluster and interacts with a cloud provider's load balancer or manages the load balancing itself.

    • Types: Common Ingress controllers include NGINX, HAProxy, and Traefik. Each controller may have specific features and limitations.

  • Ingress Resource:

    • Role: The Ingress Resource is an API object that defines the rules for routing external traffic to specific services. It includes specifications like hostnames, paths, backend services, and optional SSL configurations.

    • Example Use Case: Suppose you have two services, frontend and backend, running in your cluster. You can use Ingress to route traffic to these services based on the URL path. For example, requests to example.com/frontend can be routed to the frontend service, while requests to example.com/backend can be routed to the backend service. A path-based sketch of this layout follows the configuration example below.

Example of Ingress Configuration:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80
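
Building on the frontend/backend use case described earlier, the following is a hedged sketch of a path-based fanout. The service names frontend-service and backend-service are assumptions, and an Ingress controller (for example, the NGINX Ingress Controller, which minikube users can enable with minikube addons enable ingress) must be running for the rules to take effect:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /frontend
        pathType: Prefix
        backend:
          service:
            name: frontend-service   # assumed name of the frontend Service
            port:
              number: 80
      - path: /backend
        pathType: Prefix
        backend:
          service:
            name: backend-service    # assumed name of the backend Service
            port:
              number: 80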

Advanced Features:

  • Annotations: Ingress resources can be customized using annotations to configure specific behaviors, such as rate limiting, redirection, or custom error pages.

  • TLS Configuration: Ingress can manage SSL/TLS certificates for HTTPS, allowing secure communication with external clients (see the combined sketch after this list).

  • Rewrite Rules: Ingress can rewrite URL paths before forwarding them to the backend services, enabling flexible URL management.
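
As an illustration of the annotation and TLS features above, here is a hedged sketch that adds a rewrite annotation (using the NGINX Ingress Controller's annotation prefix; other controllers use different keys) and terminates TLS with a pre-existing secret. The secret name my-tls-secret is an assumption:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-ingress
  annotations:
    # Specific to the NGINX Ingress Controller; rewrites the request path to / before forwarding
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - myapp.example.com
    secretName: my-tls-secret   # assumed TLS secret created beforehand
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /app
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80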


3. Network Policies in Kubernetes

What are Network Policies?

Network Policies in Kubernetes are used to control the flow of traffic between pods within a cluster. By default, Kubernetes allows all traffic between pods, but Network Policies allow you to enforce restrictions, ensuring that only authorized traffic is permitted.
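
A common first policy is the default-deny pattern. The sketch below selects every pod in its namespace and defines no ingress rules, so all inbound pod traffic is blocked until other policies explicitly allow it:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  podSelector: {}   # an empty selector matches all pods in the namespace
  policyTypes:
  - Ingress         # no ingress rules follow, so no inbound traffic is allowed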

Why Use Network Policies?

  • Security: Network Policies provide fine-grained control over communication between pods, namespaces, and external endpoints, enhancing the security posture of your applications.

  • Isolation: You can isolate different components of your application by restricting traffic to specific pods or namespaces, preventing unauthorized access.

Key Concepts:

  • Pod Selector: Defines the pods to which the Network Policy applies, typically using labels.

  • Ingress and Egress Rules: Specify the allowed or denied traffic for incoming (ingress) and outgoing (egress) connections.

  • Namespaces: Network Policies can be applied within a specific namespace or across multiple namespaces.

Example of Network Policy Configuration:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-nginx
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 10.0.0.0/16  # Replace with your AWS VPC CIDR for ingress traffic
  egress:
  - to:
    - ipBlock:
        cidr: 172.31.0.0/16  # Replace with your AWS VPC CIDR for egress traffic

Advanced Features:

  • Layer 3/4 Filtering: Network Policies can filter traffic based on IP addresses and ports, providing control over which pods can communicate with each other.

  • Custom Selectors: Besides pod labels, you can select traffic peers by namespace labels or IP blocks; some CNI plugins (such as Calico and Cilium) extend this with their own policy resources that also support selectors like service accounts.

  • Policy Combinations: You can combine multiple Network Policies to enforce complex security rules, such as allowing ingress from specific pods while restricting egress to certain external endpoints.
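
As a hedged sketch of such a combination, the policy below admits ingress to nginx pods only from pods labeled app: frontend in namespaces labeled team: web (both labels are assumptions), and limits egress to TCP port 443:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: nginx-from-frontend
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          team: web            # assumed namespace label
      podSelector:
        matchLabels:
          app: frontend        # assumed pod label
    ports:
    - protocol: TCP
      port: 80
  egress:
  - ports:
    - protocol: TCP
      port: 443                # any destination, but only on port 443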


4. DNS in Kubernetes

Role of DNS in Kubernetes

DNS (Domain Name System) in Kubernetes plays a crucial role in service discovery and pod-to-service communication. Kubernetes includes a built-in DNS server that automatically assigns DNS names to services and pods, allowing them to communicate using human-readable names rather than IP addresses.

Key Features:

  • Service Discovery: Kubernetes DNS automatically assigns a DNS name to each service, which can be resolved within the cluster. This simplifies inter-service communication and ensures that services can be easily discovered by other pods.

  • Pod-to-Service Communication: Pods can easily connect to services using their DNS names without worrying about the underlying IP addresses, which can change as pods are rescheduled or scaled.

  • Cluster DNS: Kubernetes typically runs a DNS server (such as CoreDNS) as a cluster add-on, providing DNS resolution for services and pods within the cluster.

Example of DNS Naming:

For a service named my-service in the default namespace, the DNS name would be:

my-service.default.svc.cluster.local
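
A quick way to verify this resolution is to run a throwaway pod and query the name from inside the cluster; this is a minimal sketch that assumes the busybox image can be pulled:

kubectl run dns-test --image=busybox:1.36 --rm -it --restart=Never -- nslookup my-service.default.svc.cluster.local

From pods in the same namespace, the short name my-service resolves as well; the fully qualified form is mainly needed when crossing namespaces.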

Advanced DNS Features:

  • Custom DNS Entries: Kubernetes allows you to define custom DNS entries by editing the cluster DNS ConfigMap (for example, the CoreDNS Corefile) for more complex scenarios, such as overriding the default DNS resolution behavior.

  • DNS Policies: You can configure DNS policies to control how DNS queries are handled by pods, such as sending all queries to the cluster DNS or using an external DNS resolver.
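
As a hedged sketch of these DNS policy knobs, the pod below opts out of the cluster DNS entirely and supplies its own resolver; the nameserver address is a placeholder assumption:

apiVersion: v1
kind: Pod
metadata:
  name: custom-dns-pod
spec:
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "3600"]
  dnsPolicy: "None"            # ignore the cluster DNS configuration
  dnsConfig:
    nameservers:
    - 192.0.2.10               # placeholder external resolver
    searches:
    - example.com
    options:
    - name: ndots
      value: "2"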


5. Container Network Interface (CNI) Plugins

What is CNI?

CNI (Container Network Interface) is a specification that defines how network interfaces should be configured in Linux containers. Kubernetes uses CNI plugins to manage the networking of pods, enabling them to communicate with each other within the cluster and with external networks.
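
Concretely, the kubelet reads CNI configuration files from /etc/cni/net.d on each node and invokes the referenced plugin binaries to wire up each pod. As a rough sketch (the network name and subnet are assumptions), a minimal configuration using the reference bridge and host-local IPAM plugins looks like this:

{
  "cniVersion": "0.4.0",
  "name": "example-pod-network",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/16"
  }
}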

Why Use CNI Plugins?

  • Modularity: CNI plugins provide a modular way to extend Kubernetes networking capabilities. You can choose a plugin that best fits your use case, whether you need simple networking, advanced network policies, or integration with cloud providers.

  • Customization: CNI plugins allow you to customize the networking setup, such as choosing between different network topologies, implementing security policies, or optimizing network performance.

Popular CNI Plugins:

  • Calico:

    • Features: Provides advanced network policy enforcement, IP address management, and high-performance networking. Calico is often chosen for production environments due to its robustness and flexibility.

    • Use Case: Ideal for users who need fine-grained control over network policies and performance.

  • Flannel:

    • Features: A simple overlay network provider that is easy to set up and use. Flannel is lightweight and straightforward, making it a good choice for simpler networking needs.

    • Use Case: Suitable for smaller clusters or development environments where simplicity and ease of use are prioritized.

  • Weave:

    • Features: Offers a network mesh that doesn’t require external dependencies and supports multicast traffic. Weave also provides network policy enforcement and encryption for added security.

    • Use Case: Useful for environments where you need an easy-to-use network mesh with built-in security features.

  • Cilium:

    • Features: Focuses on security, using eBPF (extended Berkeley Packet Filter) to enforce network policies at the kernel level. Cilium is known for its high performance and scalability.

    • Use Case: Ideal for large-scale deployments where security and performance are critical.

Example of Installing Calico as a CNI Plugin:

kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
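
After applying the manifest, you can watch the Calico components start; exact pod names vary by version, so this is only a rough check:

kubectl get pods -n kube-system | grep calico
kubectl get nodes   # nodes should report Ready once pod networking is functional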

Advanced CNI Features:

  • Multi-Network Support: Some CNI plugins support multiple network interfaces per pod, enabling complex networking setups such as service chaining or dedicated management networks.

  • Custom IP Address Management: CNI plugins can be configured to use custom IP address ranges, allowing better integration with existing network infrastructure.

  • Integration with Network Policies: Many CNI plugins integrate with the Kubernetes Network Policy API, enforcing the policies you define directly on the pod network.




💡 If you need help or have any questions, just leave them in the comments! 📝 I would be happy to answer them!

💡 If you found this post useful, please give it a thumbs up 👍 and consider following for more helpful content. 😊

Thank you for taking the time to read! 💚
