Scenario Question #2: Troubleshooting Kubernetes NetworkPolicy—A Complete Step-by-Step Guide

Navya A

When working with Kubernetes, connectivity issues between services and pods are a common source of headaches, especially in environments that use NetworkPolicies for security. This detailed blog post guides you through a real-world scenario, similar to those you’ll face on the CKA exam or in production, where network access is broken by an overly strict NetworkPolicy.

The Scenario

Question:
A deployment named web in the namespace v8xoqe is exposed via a service, also named web. However, requests from the Internet do not reach the deployment's pods. You are not allowed to delete any resources.
You suspect a NetworkPolicy issue. How do you troubleshoot, diagnose, and resolve this while following best practices?

Why NetworkPolicies Can Break Connectivity

Kubernetes NetworkPolicies allow you to control ingress (incoming) and egress (outgoing) traffic at the pod level. While these are excellent for security, a misconfiguration or a policy that’s too restrictive can unintentionally block all traffic—even when your pods, services, and endpoints otherwise look correct.
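
For orientation, here is the general shape of a NetworkPolicy in the networking.k8s.io/v1 API; the name and label below are illustrative, not taken from the scenario:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-policy      # illustrative name
  namespace: v8xoqe
spec:
  podSelector:              # which pods this policy applies to
    matchLabels:
      app: example          # illustrative label
  policyTypes:              # which traffic directions the policy governs
    - Ingress
  ingress:                  # each entry is one allowed combination of sources and ports
    - {}                    # an empty rule allows all ingress; omitting the list entirely denies all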

Step 1: Establish the Baseline

Quickly verify that your basic setup is functional before blaming NetworkPolicies:

  • Check the Service type and endpoints:

      kubectl get svc web -n v8xoqe
      kubectl get endpoints web -n v8xoqe
    
    • Make sure the service type is LoadBalancer or NodePort and that endpoints are populated.
  • Check Pod status:

      kubectl get pods -n v8xoqe -o wide
    
    • Ensure the pods are Running and Ready.

If all of these are correct but connectivity is still broken, proceed to NetworkPolicy analysis. A quick in-cluster test like the sketch below can confirm the symptom first.
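
A minimal sketch, assuming the curlimages/curl image can be pulled by your cluster and that the web service listens on port 80 (both assumptions, not givens from the scenario):

  # One-off test pod that curls the web service from inside the cluster,
  # then is removed; a timeout here reproduces the reported symptom
  kubectl run tmp-curl -n v8xoqe --rm -it --restart=Never \
    --image=curlimages/curl --command -- \
    curl -sS --max-time 5 http://web.v8xoqe.svc.cluster.local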

Step 2: List and Examine NetworkPolicies

  • List all policies in the namespace:

      kubectl get networkpolicies -n v8xoqe
    
  • Describe any policies that are present:

      kubectl describe networkpolicy <policy-name> -n v8xoqe
    

    or for the full YAML:

      kubectl get networkpolicy <policy-name> -n v8xoqe -o yaml
    

What to look for:

  • The podSelector field (which pods are affected)

  • The policyTypes list and whether any ingress rules are defined (the one-liner below surfaces both at a glance)
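
To surface both fields for every policy in one pass, a one-liner such as this can help (the field paths follow the v1 NetworkPolicy schema):

  # Print name, pod selector, and policy types for each policy in the namespace
  kubectl get networkpolicies -n v8xoqe \
    -o custom-columns=NAME:.metadata.name,SELECTOR:.spec.podSelector,TYPES:.spec.policyTypes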

Step 3: Identify the Problematic NetworkPolicy

Here is an example of a wrong (problematic) policy you might find:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: network-policy
  namespace: v8xoqe
spec:
  podSelector: {}
  policyTypes:
    - Ingress

Explanation:

  • podSelector: {}: selects all pods in the namespace.

  • policyTypes: [Ingress] with no ingress rules means all inbound traffic is denied—this is a "default deny all ingress" policy.

  • This is great for security, but unless you add explicit allow rules, no client, internal or external, can connect to your pods.
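
Because the empty selector matches everything, you can see the blast radius directly; every pod listed here is affected:

  # Show all pods in the namespace together with their labels
  kubectl get pods -n v8xoqe --show-labels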

Step 4: Confirm Impact

  • All pods in v8xoqe are now blocking all ingress traffic unless another policy allows the traffic specifically.

  • Test from inside and outside the cluster. Pods are unreachable regardless of whether you use a NodePort, LoadBalancer, or direct cluster traffic.
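
For the outside-the-cluster check, a sketch along these lines works (substitute a reachable node IP and the service's actual NodePort; a timeout is the expected symptom while the deny policy stands alone):

  curl -sS --max-time 5 http://<node-ip>:<node-port>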

Step 5: Craft and Apply the Corrected NetworkPolicy

The solution is to explicitly allow ingress traffic to the required pods and ports (without deleting the default deny policy).

Sample Corrected Policy to Allow All Ingress to Web Pods:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-ingress
  namespace: v8xoqe
spec:
  podSelector:
    matchLabels:
      app: web            # Use the actual label key and value of your pods
  policyTypes:
    - Ingress
  ingress:
    - {}                 # Allows ALL ingress traffic

  • Save it as allow-web-ingress.yaml and apply:

      kubectl apply -f allow-web-ingress.yaml
    
  • Or, if you must change the existing policy in place with kubectl edit, add an ingress: section containing the empty object ({}) and save. Note that this turns the default deny into an allow-all for every pod it selects, so a separate, narrowly scoped allow policy is usually the cleaner fix.

Want more fine-grained security? Instead of an empty ingress, restrict to required ports (e.g., 80/443 for HTTP/HTTPS):

ingress:
  - ports:
      - protocol: TCP
        port: 80
      - protocol: TCP
        port: 443
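
You can also restrict who may connect, not just on which ports. A sketch assuming traffic should arrive only from an ingress controller in a namespace named ingress-nginx (the namespace name is an assumption; recent Kubernetes versions set the kubernetes.io/metadata.name label automatically):

ingress:
  - from:
      - namespaceSelector:
          matchLabels:
            kubernetes.io/metadata.name: ingress-nginx   # assumed namespace name
    ports:
      - protocol: TCP
        port: 80
      - protocol: TCP
        port: 443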

Step 6: Verify Restoration of Access

  • Retry access from outside the cluster.

  • Check that the endpoints are still populated and that the pods respond.

      kubectl get endpoints web -n v8xoqe
    
  • Use kubectl describe networkpolicy to confirm both policies are in place and inspect their combined effect:
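
    For example (both policy names are taken from this scenario):

      kubectl describe networkpolicy network-policy -n v8xoqe
      kubectl describe networkpolicy allow-web-ingress -n v8xoqe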

Step 7: Document and Review for CKA Success

  • Document what the original (wrong) policy did (blocked all ingress), and what your corrected policy achieves (explicitly allows ingress to web pods).

  • Remember: never delete a default deny policy in CKA or real production. Always supplement it with precise allow rules.

Wrong vs. Correct NetworkPolicy: Side-by-Side

  Field         Wrong Policy (Blocks All)             Correct Policy (Allows Ingress)
  podSelector   {} (all pods)                         { matchLabels: { app: web } } (only web pods)
  policyTypes   [Ingress]                             [Ingress]
  ingress       not present                           {} (allows all) or port-specific as needed
  Effect        No incoming traffic reaches any pod   Web pods accessible as required

Troubleshooting Checklist

  • Always start by validating service, endpoints, and pod health.

  • List and describe all NetworkPolicies in the namespace. Any with an empty pod selector and no ingress rules will block everything.

  • Add (don't remove) specific allow policies to restore required access, and test rigorously after each change.

  • Review selectors and ports for least-privilege configuration.

  • Write a summary of what was broken, how you found it, and what policy you added or changed to restore service.

Takeaways

  • NetworkPolicies are powerful, but a blank podSelector with no ingress rules is a "trapdoor" for connectivity.

  • Restore service securely by adding precise allow rules targeting only the necessary pods and ports—never by deleting security policies.

  • This workflow will both help you succeed in CKA and build real-world security troubleshooting confidence in Kubernetes environments.
