Service Boundaries: Kubernetes NetworkPolicy Basics


By default, Kubernetes is wide open. Of course you knew that already. Any pod can talk to any other pod, in any namespace, on any port. That makes life easy for anyone putting an app into prod, and just as easy for anyone who compromises one workload. Once they’re in, nothing stops them from laterally probing every service in the cluster.
So I went down the path of figuring out how to build meaningful guardrails. The answer is service boundaries. Sounds complicated, but it really comes down to network policies. I’d heard about them, and I’d messed with CNIs like Calico and Cilium while setting up clusters, but hadn’t gone deep on what those policies could actually enforce.
That led naturally to the core idea: you need policies that describe which pods should be talking to which other pods, and on what ports. Everything else gets dropped. The built-in tool for this is NetworkPolicy. With a few YAML manifests, you can flip a cluster from “anyone can connect to anything” into “deny by default, allow only what we mean.”
This is the start of a three-part series on service boundaries in Kubernetes:
- Part 1: Native NetworkPolicy for baseline L3/L4 segmentation.
- Part 2: Scaling boundaries with Calico’s global defaults and external allowlists.
- Part 3: Intent-aware controls with Cilium and Hubble for L7 enforcement.
But how does a NetworkPolicy really work? These guardrails operate at the network (L3) and transport (L4) layers. In practice that means you’re defining which pod groups (by label/namespace) can connect to which other pods (L3: IP/addressing), and on which ports and protocols (L4: TCP/UDP). It’s the foundation for segmentation, not yet looking inside the traffic itself, just deciding who’s allowed to talk and what ports they can use. Later in the series we’ll climb up the stack into application-aware (L7) controls, but this post is about getting the baseline right at L3/L4.
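To make that concrete, here’s the rough anatomy of a policy. This is an illustrative sketch only — the names in it are made up, not part of our demo app (those manifests come later). The selectors are the L3 half (who may connect), and the ports list is the L4 half (which protocol and port):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: anatomy-example        # hypothetical name, for illustration only
  namespace: some-namespace    # hypothetical namespace
spec:
  podSelector:                 # which pods this policy applies to
    matchLabels:
      app: some-app
  policyTypes: ["Ingress"]
  ingress:
    - from:                    # L3: who may connect (pods, namespaces, IP blocks)
        - namespaceSelector:
            matchLabels:
              team: trusted
      ports:                   # L4: on which ports and protocols
        - protocol: TCP
          port: 8080
```

Everything we build in this post is just variations on this shape: pick the pods to protect, name the peers allowed to talk to them, and pin down the ports.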
We’ll start here with the basics: a three-tier app (frontend → backend → database), a default-deny posture, and a small set of explicit allows to make it work. Along the way we’ll pick up lessons, gotchas, and maybe a few regrets from trial and error.
The Test App (Deploy + Baseline Verification)
Let's dive into our simple three-tier demo. Nothing fancy — just enough to show how service boundaries play out in practice:
- Frontend namespace: a web pod (nginx or a tiny app) labeled app=web
- Backend namespace: an API pod labeled app=api
- DB namespace: a PostgreSQL pod labeled app=postgres
Traffic flow:
frontend:web ---> backend:api ---> db:postgres
Goal boundaries:
- Frontend → Backend on 80/TCP only
- Backend → DB on 5432/TCP only
- DNS egress allowed everywhere
- Everything else: blocked
Deploy the demo app (single manifest)
This will give you everything you need: namespaces, workloads, and services. Save as test-app.yaml and apply with kubectl apply -f test-app.yaml.
apiVersion: v1
kind: Namespace
metadata:
  name: frontend
  labels:
    tier: frontend
---
apiVersion: v1
kind: Namespace
metadata:
  name: backend
  labels:
    tier: backend
---
apiVersion: v1
kind: Namespace
metadata:
  name: db
  labels:
    tier: db
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: frontend
  labels:
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.27-alpine
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: frontend
spec:
  selector:
    app: web
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: backend
  labels:
    app: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: nginx
          image: nginx:1.27-alpine
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: backend
spec:
  selector:
    app: api
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: db
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:15-alpine
          env:
            - name: POSTGRES_PASSWORD
              value: pass
          ports:
            - containerPort: 5432
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: db
spec:
  selector:
    app: postgres
  ports:
    - name: pg
      port: 5432
      targetPort: 5432
      protocol: TCP
After a minute or two, you should have:
- web.frontend.svc.cluster.local (HTTP 80)
- api.backend.svc.cluster.local (HTTP 80)
- postgres.db.svc.cluster.local (TCP 5432)
Test pods (netshoot) for quick verification
netshoot is a Docker networking troubleshooting Swiss Army container, so it's perfect for this exercise.
Let's see what the default behavior is. We want to make sure everything is connected, so we'll run a temporary shell in each namespace and test connectivity:
From frontend shell:
matt@controlplane:~/np$ kubectl run -n frontend test --image=nicolaka/netshoot -it --rm -- bash
If you don't see a command prompt, try pressing enter.
test:~# curl -sS http://api.backend.svc.cluster.local:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
test:~#
From backend shell:
matt@controlplane:~/np$ kubectl run -n backend test --image=nicolaka/netshoot -it --rm -- bash
If you don't see a command prompt, try pressing enter.
test:~# nc -vz postgres.db.svc.cluster.local 5432
Connection to postgres.db.svc.cluster.local (10.110.205.214) 5432 port [tcp/postgresql] succeeded!
test:~#
From anywhere:
test:~# dig +short google.com #google ip
142.251.46.206
test:~#
Cool. With no NetworkPolicies, these will all work. Of course, the goal is to not have everything work. Let's get that process going.
Default-Deny Everything
The first rule of any security posture: deny, deny, deny. Ok maybe those are three rules, but you get the point.
The first rule of network policy: flip the cluster from “allow all” to “deny by default.”
We've established that the Kubernetes default configuration allows every pod to talk to every other pod. To change that, we apply a very basic network policy with an empty podSelector (which matches all pods in the namespace) and no rules. That blocks all ingress and egress.
Here’s a default-deny you can drop into each namespace. Just save it in a single file called deny-policy.yaml.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: frontend
spec:
  podSelector: {}
  policyTypes: ["Ingress", "Egress"]
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: backend
spec:
  podSelector: {}
  policyTypes: ["Ingress", "Egress"]
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: db
spec:
  podSelector: {}
  policyTypes: ["Ingress", "Egress"]
Walkthrough
- Apply the file via kubectl apply -f deny-policy.yaml to create the relevant network policies.
- Every pod in those namespaces will now be isolated: no incoming or outgoing connections.
- DNS lookups will also break, since egress is blocked by default.
Quick Test
Spin up a netshoot pod in the frontend namespace and try some basics:
kubectl run -n frontend test --image=nicolaka/netshoot -it --rm -- bash
# Inside the pod:
curl http://api.backend.svc.cluster.local:80
dig google.com
Both of these should now fail. Sadly, we've gone from no segmentation to deny everything. Not exactly helpful. But from here, we’ll add back the minimum connections the app needs to function.
One thing we’re not doing here is deleting the default-deny policy. That remains our baseline. Every new rule we add (like our soon to come DNS carve-out) is layered on top of the default deny. Think of it as our safety blanket.
Allow DNS Egress
Once we flipped everything to default-deny, our first casualty was DNS: lookups stopped working. That’s expected, since every pod in frontend, backend, and db is now cut off from making any outbound connection, including the very boring-but-essential queries to the cluster DNS service. Even a simple dig google.com from your netshoot pods fails.
Why DNS matters to the app:
- Service discovery. Pods usually talk to each other by service names (api.backend.svc.cluster.local), not IPs. Without DNS, those names don’t resolve and your “frontend → backend” call breaks.
- External calls. If a pod talks to anything outside the cluster (API, S3, etc.), it resolves by name first. No DNS = instant failure.
- Certs & health checks. TLS handshakes and readiness probes often rely on hostnames. Break DNS and you’ll see flaky startups or cert errors.
So we explicitly allow egress only to the cluster DNS service (CoreDNS/kube-dns in kube-system) on UDP/TCP 53. This does not open general internet egress; it simply lets pods ask, “what IP is api.backend.svc.cluster.local?” and go back to being productive.
Here’s an allow-DNS policy set you can drop into each namespace. Just save it in a single file called dns-networkpolicy.yaml.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: frontend
spec:
  podSelector: {}
  policyTypes: ["Egress"]
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: backend
spec:
  podSelector: {}
  policyTypes: ["Egress"]
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress
  namespace: db
spec:
  podSelector: {}
  policyTypes: ["Egress"]
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
I am not 100% sure k8s-app=kube-dns always works; if DNS still fails for you, check the labels on your DNS pods with kubectl get pods -n kube-system --show-labels. It does work on my kubeadm cluster with Calico CNI.
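If that label doesn’t match on your cluster, a looser fallback is to drop the pod selector and allow port 53 to anything in kube-system. A sketch of that variant (shown for frontend only — repeat per namespace; the name allow-dns-egress-loose is mine, not part of the manifests above), trading precision for portability:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-egress-loose   # hypothetical fallback variant
  namespace: frontend
spec:
  podSelector: {}
  policyTypes: ["Egress"]
  egress:
    - to:
        # no podSelector: any pod in kube-system is reachable on port 53
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```

It’s a wider opening than strictly necessary, so prefer the label-scoped version when your DNS pods carry a stable label.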
Walkthrough
- Apply the file via kubectl apply -f dns-networkpolicy.yaml to create the relevant network policies.
- This doesn’t allow full internet egress, just DNS queries to kube-dns.
- Now you can run dig google.com from your netshoot pods and get a valid response again.
Quick Test
matt@controlplane:~/np$ kubectl run -n frontend test --image=nicolaka/netshoot -it --rm -- bash
If you don't see a command prompt, try pressing enter.
test:~# dig +short google.com
142.250.176.14
test:~#
Cool, works as we want. With DNS restored, your apps can resolve service names and external domains, but all other connections are still blocked. Next we’ll add back the actual service-to-service flows that make the three-tier app work.
Allow Service-to-Service Flows
With DNS back in place, pods can at least resolve names again, but traffic is still at a stop. That’s exactly what we want: default-deny baseline plus a single DNS carve-out. Now it’s time to add back the flows that actually make our three-tier app work.
Frontend → Backend
Our frontend pods need to call the backend API on TCP 80. That means we have to allow two directions:
- Egress from the frontend pods to the backend namespace on port 80.
- Ingress into the backend pods, but only from the frontend namespace and only on that port.
Here’s an allow frontend-to-backend pair, one policy per namespace. Just save both in a single file called front-to-back-networkpolicy.yaml.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-backend
  namespace: frontend
spec:
  podSelector: {} # or matchLabels: {app: web} if you want to scope to just the web pods
  policyTypes: ["Egress"]
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              tier: backend
          podSelector:
            matchLabels:
              app: api
      ports:
        - protocol: TCP
          port: 80
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: backend
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              tier: frontend
      ports:
        - protocol: TCP
          port: 80
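One refinement, as the inline comment in the egress policy hints: instead of letting every pod in frontend reach the API, you can scope the allowance to just the web pods. A sketch of that variant (the name allow-web-egress-to-backend is mine, not part of this series’ manifests), assuming the app=web label from our demo:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-web-egress-to-backend  # hypothetical, tighter variant
  namespace: frontend
spec:
  podSelector:
    matchLabels:
      app: web                       # only the web pods get this egress allowance
  policyTypes: ["Egress"]
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              tier: backend
          podSelector:
            matchLabels:
              app: api
      ports:
        - protocol: TCP
          port: 80
```

One gotcha if you go this route: an ad-hoc netshoot pod won’t carry the app=web label, so it loses the egress allowance and your quick tests will fail. Add the label when launching test pods, or stick with the namespace-wide selector while experimenting.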
Backend → Database
Next, backend pods need to talk to Postgres on TCP 5432. Just like with frontend → backend, that means two pieces:
- Egress from the backend pods to the db namespace on port 5432.
- Ingress into the db pods, but only from the backend namespace and only on that port.
Here’s an allow backend-to-db pair, one policy per namespace. Just save both in a single file called back-to-db-networkpolicy.yaml.
# Egress from backend → db
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-egress-to-db
  namespace: backend
spec:
  podSelector: {}
  policyTypes: ["Egress"]
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              tier: db
          podSelector:
            matchLabels:
              app: postgres
      ports:
        - protocol: TCP
          port: 5432
---
# Ingress into db from backend
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-backend-to-db
  namespace: db
spec:
  podSelector:
    matchLabels:
      app: postgres
  policyTypes: ["Ingress"]
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              tier: backend
      ports:
        - protocol: TCP
          port: 5432
Walkthrough
- Apply these policies on top of the default-deny and DNS rules.
- Frontend → Backend on port 80 should now succeed.
- Backend → DB on port 5432 should now succeed.
- Any other cross-namespace attempt (like frontend → db or db → backend) still fails.
Quick Test
From frontend shell:
matt@controlplane:~/np$ kubectl run -n frontend test --image=nicolaka/netshoot -it --rm -- bash
If you don't see a command prompt, try pressing enter.
test:~# curl -sS http://api.backend.svc.cluster.local:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
test:~#
From backend shell:
matt@controlplane:~/np$ kubectl run -n backend test --image=nicolaka/netshoot -it --rm -- bash
If you don't see a command prompt, try pressing enter.
test:~# nc -vz postgres.db.svc.cluster.local 5432
Connection to postgres.db.svc.cluster.local (10.110.205.214) 5432 port [tcp/postgresql] succeeded!
test:~#
At this point we’ve re-enabled just enough traffic for the app to function: frontend → backend → db, plus DNS everywhere. Everything else remains blocked. That’s baseline L3/L4 segmentation in action.
Now if you doubted me on the DNS thing, just delete that policy and try frontend → backend again. Good luck.
What We Just Built
Now let's step back for a second. We started with a cluster that was flat and wide open: every pod could talk to every other pod, in every namespace, on every port. That’s the default state of Kubernetes networking, convenient but quite insecure.
Now look at where we are:
- Default-deny baseline: nothing moves unless we say so.
- DNS carve-out: pods can still resolve service names and external hosts, but nothing else is open-ended.
- Frontend → Backend on :80: the app’s public entry point can reach the API tier, and that’s it.
- Backend → DB on :5432: the API tier can query the database, but it’s walled off from everything else.
- Everything else blocked: no random cross-namespace chatter, no sneaky egress to the internet.
What we’ve really built here is a 3-hop app chain: frontend → backend → database, with DNS as the plumbing. Instead of a spaghetti mess of possible connections, the graph collapses down to just the flows the app is supposed to have.
This is least privilege at L3/L4. And it is dead simple, no service mesh required. Just a handful of manifests that take Kubernetes from “anyone can talk to anyone” to “only these three things can talk, on these two ports.” Not bad.
Lateral Movement, Blocked
So we get a nice win. Without policies, landing in the frontend gives an attacker the run of the cluster: curl into the backend, hop into the database, and keep poking at other namespaces until something breaks. That's on us, not Kubernetes.
With our policies in place, the world just got a lot smaller:
- In frontend, you can only send traffic to backend’s API service on port 80. No database, no random namespaces, no internet egress.
- In backend, you can only reach Postgres on port 5432. No shortcut to frontend, no talking to other services.
- The db tier is a walled garden. It only listens to backend, and that’s it.
Every other path is cut off. We’ve shrunk the surface area from “everything-to-everything” down to a single three-hop chain. Peace out, lateral movement.
What’s Next (Scaling with Calico)
That’s the baseline: Kubernetes NetworkPolicy gave us simple, effective service boundaries at the L3/L4 level. It works. But what happens when you’re running dozens of namespaces? How do you enforce organization-wide defaults without copy-pasting YAML everywhere? An admission controller could do it, sure (oh yeah, I wrote about that), but we shouldn’t need one for everything in Kubernetes.
That’s where Calico comes in. In Part 2, we’ll take this same model and scale it with Calico’s GlobalNetworkPolicies, NetworkSets, and built-in flow logs. It’s the same idea of least privilege, but with tools designed to handle more than a three-tier demo app.
Stay tuned, loyal reader.
Written by Matt Brown
Working as a solutions architect while going deep on Kubernetes security — prevention-first thinking, open source tooling, and a daily rabbit hole of hands-on learning. I make the mistakes, then figure out how to fix them (eventually).