Control Issues: Tales of Kubernetes Admission

Matt Brown

You pop open your detection dashboard and see a pod spawning /bin/bash, reading /etc/shadow, maybe even curling a crypto miner for good measure. Runtime security caught it. Crisis averted? Except... why did that pod get scheduled in the first place?

Kubernetes didn’t flag it. The control plane didn’t stop it. Because unless you tell it not to, Kubernetes will happily let anything in: pods that run as root, mount the host filesystem, skip seccomp, talk to the Kubernetes API... whatever. It’s a permissive API that trusts its users, which is great for everyone except security.

This series is about using Kubernetes’ own capabilities to stop the bad stuff from ever starting and using Kyverno to make it dead simple. Admission control isn’t glamorous, but it’s one of the coolest layers of Kubernetes security.


Kubernetes Defaults Are Not Your Friend

Start with a vulnerable image. Maybe it’s something you built quick, maybe it’s ubuntu:latest with a few tools thrown in. It’s running as root, no entrypoint hardening, no seccomp profile, etc.

Kubernetes will take that image and run it exactly as-is. No warnings. No checks. Just a pod in a healthy, Running state.

Then you write a spec. It includes:

  • runAsUser: 0, or nothing at all
  • hostNetwork: true
  • A hostPath mount to /var/run/docker.sock
  • Maybe even privileged: true, because that fixed something once

Just kidding, no one would write that. But if someone did, Kubernetes would shrug. It doesn’t care, and that indifference is the proverbial double-edged sword: speed versus security.
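For the record, here’s roughly what that nobody-would-write-this spec looks like as a full manifest (the name is made up, the danger is not):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: surely-fine
spec:
  hostNetwork: true
  containers:
    - name: app
      image: ubuntu:latest
      command: ["sleep", "infinity"]
      securityContext:
        runAsUser: 0
        privileged: true
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
```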

And by the time your tools detect something sketchy, you’ve already granted access. The pod was built to be dangerous. The spec let it happen. There was never a checkpoint. No pre-flight safety net. Because by default, Kubernetes assumes you meant to do that.

That’s what admission control is for. Not to catch mistakes after they cause problems, but to stop them before they land in the cluster at all.


What Actually Happens When You Apply a Pod

Running kubectl apply has always made me feel like Kubernetes just takes your YAML and makes it happen. But under the hood, of course, the API server calls the shots. And it doesn’t trust your YAML blindly. Not entirely.

Instead, every object you submit passes through a set of admission controllers: little gatekeeper modules that can inspect, reject, or even modify the object before it's persisted. It's actually not that complicated: they sit between the authentication/authorization stage and the cluster.

Kubernetes has two kinds of dynamic admission controllers we'll cover:

  • Mutating admission webhooks — These can rewrite objects before they’re stored (e.g., add labels).
  • Validating admission webhooks — These can block an object from being accepted (e.g., reject pods running as root).

These custom webhooks use a shared format: an AdmissionReview request/response payload. Your webhook receives the Kubernetes object (like a Pod), makes a decision, and responds with allowed: true or false.
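Trimmed to the interesting fields (a real payload carries a lot more), the request your webhook receives looks roughly like this:

```json
{
  "apiVersion": "admission.k8s.io/v1",
  "kind": "AdmissionReview",
  "request": {
    "uid": "example-uid",
    "operation": "CREATE",
    "userInfo": { "username": "kubernetes-admin" },
    "object": {
      "apiVersion": "v1",
      "kind": "Pod",
      "metadata": { "name": "badpod-test" }
    }
  }
}
```

Your response echoes request.uid back as response.uid, sets response.allowed, and can attach a status.message explaining a denial.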

That is simple, but it unlocks serious power: you can define your own policies using any language, framework, or logic you want. Want to reject any container with “bad” in the name? You can do that. Want to deny hostPath volumes unless they mount /tmp? Go for it.
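To make those two examples concrete, here’s a sketch of both checks as plain functions over the pod object pulled from the AdmissionReview request. The function names are mine, not any Kubernetes API:

```python
def allow_container_names(pod: dict) -> bool:
    """Reject the pod if any container has 'bad' in its name."""
    containers = pod.get("spec", {}).get("containers", [])
    return all("bad" not in c.get("name", "") for c in containers)

def allow_host_paths(pod: dict, allowed_prefix: str = "/tmp") -> bool:
    """Deny hostPath volumes unless they mount somewhere under /tmp."""
    for vol in pod.get("spec", {}).get("volumes", []):
        path = vol.get("hostPath", {}).get("path")
        if path is not None and not path.startswith(allowed_prefix):
            return False
    return True

# A pod mounting the Docker socket fails the hostPath check:
docker_sock_pod = {
    "spec": {
        "containers": [{"name": "app"}],
        "volumes": [{"name": "d", "hostPath": {"path": "/var/run/docker.sock"}}],
    }
}
print(allow_host_paths(docker_sock_pod))  # False
```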

But there's a catch: these webhooks need to be deployed as actual services, reachable over HTTPS, and registered with a ValidatingWebhookConfiguration or MutatingWebhookConfiguration. So while it's flexible, it's not exactly frictionless.

Here’s roughly how the flow works. Not elegant, but mostly right, I think:

kubectl → API Server → [Authentication → Authorization]
                                ↓
               [Mutating Admission Webhooks (optional)]
                                ↓
               [Validating Admission Webhooks (optional)]
                                ↓
                       Scheduler / Controllers

And if you don’t register any admission webhooks? Kubernetes just shrugs and lets your pod through. Sounds about like my cats watching an intruder.


Unnecessarily Writing a Validating Admission Webhook by Hand

Let’s cut to it: this is the part where you build the thing Kubernetes will call when someone tries to create a Pod. Another unnecessary but useful DIY project. That means:

  • Listening on /validate
  • Accepting a specific JSON format (AdmissionReview)
  • Returning a valid JSON response (allowed: true or false)
  • Doing it all over HTTPS

We’re keeping it lightweight: Python + Flask + self-signed certs. Don’t worry, we can look at Cert-Manager in a later post. But hey, at least I avoided baking them into the image.

You can also just see it all in the repo. I cannot guarantee it will work for you, nor guarantee it will work for me repeatably.

Step 1: Your Flask Server

Here’s a minimal webhook server that does the following:

  • Accepts HTTPS POST requests from the Kubernetes API server.
  • Parses AdmissionReview payloads.
  • Denies pods with suspicious names like 'badpod'.
  • Returns valid AdmissionReview responses.

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/validate', methods=['POST'])
def validate():
    request_info = request.get_json()
    pod_name = request_info['request']['object']['metadata']['name']

    if 'badpod' in pod_name:
        return jsonify({
            "response": {
                "uid": request_info['request']['uid'],
                "allowed": False,
                "status": {
                    "message": f"Pod name '{pod_name}' is not allowed."
                }
            }
        })
    else:
        return jsonify({
            "response": {
                "uid": request_info['request']['uid'],
                "allowed": True
            }
        })

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=443, ssl_context=('certs/cert.pem', 'certs/key.pem'))

If you are interested in a brief walkthrough:

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/validate', methods=['POST'])
def validate():
    request_info = request.get_json()
    pod_name = request_info['request']['object']['metadata']['name']
  • Creates a basic Flask web application.
  • The /validate route is defined to handle POST requests (as expected by Kubernetes).
  • Parses the incoming AdmissionReview JSON payload.
  • Extracts the pod name from the incoming spec.
    if 'badpod' in pod_name:
        return jsonify({
            "response": {
                "uid": request_info['request']['uid'],
                "allowed": False,
                "status": {
                    "message": f"Pod name '{pod_name}' is not allowed."
                }
            }
        })
    else:
        return jsonify({
            "response": {
                "uid": request_info['request']['uid'],
                "allowed": True
            }
        })
  • If the pod name includes 'badpod', the webhook denies the request and returns a message explaining why.
  • If the name does not include 'badpod', the webhook allows the request.
if __name__ == '__main__':
    app.run(host='0.0.0.0', port=443, ssl_context=('certs/cert.pem', 'certs/key.pem'))
  • Starts the Flask server with TLS enabled since Kubernetes admission webhooks must be served over HTTPS.
  • Uses local self-signed certs for simplicity.
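Before deploying anything, you can exercise the handler with Flask's built-in test client: no TLS, no cluster, no webhook registration needed. A sketch that mirrors the server above; fake_review is a hypothetical helper building a minimal AdmissionReview request:

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route('/validate', methods=['POST'])
def validate():
    # Same decision logic as the webhook server above.
    review = request.get_json()
    pod_name = review['request']['object']['metadata']['name']
    allowed = 'badpod' not in pod_name
    response = {"uid": review['request']['uid'], "allowed": allowed}
    if not allowed:
        response["status"] = {"message": f"Pod name '{pod_name}' is not allowed."}
    return jsonify({"response": response})

def fake_review(name):
    # Minimal stand-in for what the API server would POST.
    return {"request": {"uid": "test-uid",
                        "object": {"metadata": {"name": name}}}}

client = app.test_client()
denied = client.post('/validate', json=fake_review('badpod-test')).get_json()
ok = client.post('/validate', json=fake_review('nginx')).get_json()
print(denied['response']['allowed'], ok['response']['allowed'])  # False True
```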

Step 2: Build and Push the Image

Use this Dockerfile:

FROM python:3.11-slim

WORKDIR /app

COPY server/requirements.txt .
RUN pip install -r requirements.txt

COPY server .

CMD ["python", "app.py"]

And your requirements.txt is just:

flask

Build and push it, swapping in your preferred repo of course. Or keep it totally local, which is always a pain for me.

docker build -t docker.io/your-dockerhub-username/webhook-server:latest .
docker push docker.io/your-dockerhub-username/webhook-server:latest

Step 3: Generate Self-Signed Certs

You can use a simple generate-certs.sh script to:

  • Create a CA
  • Create a server cert with proper DNS entries
  • Output the base64-encoded CA cert for the webhook configuration

See the script in the repo.
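For reference, here is a minimal sketch of what such a script can do with plain openssl. The DNS name must match the in-cluster Service address (webhook.default.svc here, matching the Service we deploy later); file names follow the repo layout assumed above:

```shell
#!/bin/sh
set -e
mkdir -p certs

# 1. Create a throwaway CA.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout certs/ca.key -out certs/ca.pem -subj "/CN=webhook-ca"

# 2. Server key and CSR for the Service's in-cluster DNS name.
openssl req -newkey rsa:2048 -nodes \
  -keyout certs/key.pem -out certs/server.csr \
  -subj "/CN=webhook.default.svc"

# 3. Sign the server cert, adding the DNS name as a SAN
#    (modern clients ignore the CN).
printf "subjectAltName=DNS:webhook.default.svc\n" > certs/san.ext
openssl x509 -req -in certs/server.csr -days 365 \
  -CA certs/ca.pem -CAkey certs/ca.key -CAcreateserial \
  -extfile certs/san.ext -out certs/cert.pem

# 4. Base64-encode the CA cert for the webhook's caBundle field.
base64 certs/ca.pem | tr -d '\n' > certs/ca-bundle.b64
```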

Step 4: Create Secret

Create the secret using our created cert:

kubectl create secret generic webhook-certs \
  --from-file=cert.pem=server/cert.pem \
  --from-file=key.pem=server/key.pem \
  -n default

Step 5: Deploy to Your Cluster

You’ll need:

  1. A Kubernetes Deployment that mounts your TLS certs from a Secret
  2. A Service that fronts the webhook on port 443
  3. A ValidatingWebhookConfiguration that:
    • Targets pods on CREATE
    • Points to your service
    • Includes your base64-encoded CA cert
    • Uses admissionReviewVersions: ["v1"]

If you skip any of these? You’ll likely hit InternalError, expected AdmissionReview, or no webhook hits at all.

Here it is all together:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webhook
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: webhook
  template:
    metadata:
      labels:
        app: webhook
    spec:
      containers:
        - name: webhook
          image: <your-username>/webhook-server:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 443
          volumeMounts:
            - name: tls-certs
              mountPath: /app/certs
              readOnly: true
      volumes:
        - name: tls-certs
          secret:
            secretName: webhook-certs

---

apiVersion: v1
kind: Service
metadata:
  name: webhook
  namespace: default
spec:
  selector:
    app: webhook
  ports:
    - port: 443
      targetPort: 443

---

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-badpod-webhook
webhooks:
  - name: deny.badpod.webhook.dev
    rules:
      - apiGroups: [""]
        apiVersions: ["v1"]
        operations: ["CREATE"]
        resources: ["pods"]
    clientConfig:
      service:
        name: webhook
        namespace: default
        path: /validate
        port: 443
      caBundle: <REPLACE_WITH_BASE64_CA>
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Fail

Step 6: Test a Bad Pod

Try to create a pod via bad-pod.yaml:

apiVersion: v1
kind: Pod
metadata:
  name: badpod-test
spec:
  containers:
    - name: main
      image: busybox
      command: ["sleep", "3600"]

And you should see:

kubectl apply -f bad-pod.yaml
Error from server: error when creating "bad-pod.yaml": admission webhook "deny.badpod.webhook.dev" denied the request: Pod name 'badpod-test' is not allowed.

Manual admission webhooks aren’t exactly elegant, but they’re a powerful reminder of how much control Kubernetes gives you if you’re willing to build it yourself. We took the long way here: generating certs, writing a validating webhook from scratch, and configuring it all manually. But now you’ve seen exactly how Kubernetes evaluates incoming requests and what an actual “deny” looks like in practice.

This isn’t how I would run things in production, but hey, at least it gives you the context to understand tools like Kyverno, Gatekeeper, and even Pod Security Admission. They’re all built on this same underlying capability.


Pod Security Admission: At Least It's Something

Speaking of Pod Security Admission: Kubernetes ships with Pod Security Admission (PSA), a built-in admission controller that evaluates pod specs against predefined Pod Security Standards (privileged, baseline, restricted).

It’s not perfect, but it’s better than nothing. And definitely better than our DIY project. And it doesn’t require you to run your own admission server. You can enable it cluster-wide or per-namespace. If you're just trying to avoid common pitfalls like privileged pods, hostPath mounts, or running as root, it’s a good place to start.

But one of its main problems is rigidity. You won’t get custom logic like our Flask webhook example; you apply the set-in-stone Pod Security Standards as-is or not at all. Its exemption capabilities are also very weak: just usernames, runtime classes, and namespaces.

A cluster-wide default configuration is not set in a kubeadm cluster, so let’s go ahead and add one.

PSA Setup and Test

Step 1: Create an admission control config file

Create admission-control.yaml and place it in /etc/kubernetes/audit. Using privileged as the default means nothing is affected, since the privileged level imposes no restrictions.

# /etc/kubernetes/audit/admission-control.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
  - name: PodSecurity
    configuration:
      apiVersion: pod-security.admission.config.k8s.io/v1
      kind: PodSecurityConfiguration
      defaults:
        enforce: "privileged"
        enforce-version: "latest"
        audit: "privileged"
        audit-version: "latest"
        warn: "privileged"
        warn-version: "latest"
      exemptions:
        usernames: []
        runtimeClasses: []
        namespaces: []

I found that when I created it in /etc/kubernetes it would fail. Luckily it didn't take me too long to put together that the mounts were missing for that directory. So I just stuffed it into the existing audit folder. This is good enough for a test, but wouldn't work well for an actual, live cluster.

Step 2: Mount this file into your API server pod

In kubeadm, this means updating the static pod manifest. Edit /etc/kubernetes/manifests/kube-apiserver.yaml and add this at the bottom of the flags. As soon as you save, the API server pod will restart. You have admission control!

- --admission-control-config-file=/etc/kubernetes/audit/admission-control.yaml
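If you prefer a dedicated directory over reusing the audit folder, the file also needs a hostPath mount in the same manifest, since the API server runs as a static pod. A sketch with illustrative paths:

```yaml
# Under the kube-apiserver container's volumeMounts:
- name: admission-config
  mountPath: /etc/kubernetes/admission
  readOnly: true
# Under the pod's volumes:
- name: admission-config
  hostPath:
    path: /etc/kubernetes/admission
    type: DirectoryOrCreate
```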

Step 3: Create restricted namespace

Create a namespace for testing by running the following:

kubectl create ns psa-restricted
kubectl label ns psa-restricted pod-security.kubernetes.io/enforce=restricted
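The same labeling, declaratively, with PSA's other two modes layered on. warn surfaces violations as kubectl warnings and audit records them in the audit log, without blocking anything:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: psa-restricted
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
    pod-security.kubernetes.io/audit: restricted
```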

Because the namespace enforces restricted, a pod that doesn't set runAsNonRoot: true (among other hardening fields) will fail.

Step 4: Create Pod (or try)

Ok now give it a test. Create bad-pod.yaml that has nothing security related:

apiVersion: v1
kind: Pod
metadata:
  name: bad-pod
spec:
  containers:
    - name: nginx
      image: nginx

Now run it and you should see it denied, with a detailed message explaining what is wrong.

```bash
matt@controlplane:~$ kubectl apply -f bad-pod.yaml -n psa-restricted
Error from server (Forbidden): error when creating "bad-pod.yaml": pods "bad-pod" is forbidden: violates PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "nginx" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "nginx" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "nginx" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "nginx" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
```
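
For contrast, a pod that satisfies every complaint in that message gets through admission. A sketch; note that PSA only checks the spec, so whether the container actually starts then depends on the image running as a non-root user (stock nginx does not, so you may need an unprivileged variant or an explicit runAsUser):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: good-pod
spec:
  containers:
    - name: nginx
      image: nginx
      securityContext:
        runAsNonRoot: true
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
        seccompProfile:
          type: RuntimeDefault
```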

Brief Thought on PSA

So what did we get?

  • A fully working built-in admission controller.
  • No extra components or sidecars.
  • A taste of the default security levels baked into Kubernetes.

But we also hit its limitations:

  • You can't customize the policies.
  • You're stuck with privileged, baseline, or restricted.
  • Exemptions are very coarse and limited.

Still, if you're looking to put a gate in front of obviously risky pods with zero added complexity, PSA delivers. Just don't expect much more than that.


Wrap Up

We started with a DIY admission controller, a Flask app that let us define arbitrary logic and inspect pod specs however we wanted. It worked, but came with tradeoffs: we had to write and maintain custom code, stand up an HTTPS server, and deal with the fragility of self-hosted admission logic. Powerful, but not exactly plug-and-play.

Then we pivoted to something simpler: Kubernetes' Pod Security Admission. No code. No sidecars. Just built-in labels and baked-in standards. We set up enforcement for the restricted level and tested it with a clearly noncompliant pod.

Each approach has its place:

  • The custom webhook gives you full control, ideal for specialized logic or environments with unique enforcement needs, but hell to implement.
  • PSA is great if you're trying to raise the floor on security with minimal effort, but it's rigid and limited.

The real takeaway? You need a more complete tool. Something that is customizable and easy. And that’s where we’re headed next: a closer look at how Kyverno can bridge the gap between simplicity and flexibility, without making you host your own Python server.


Written by

Matt Brown

Working as a solutions architect while going deep on Kubernetes security — prevention-first thinking, open source tooling, and a daily rabbit hole of hands-on learning. I make the mistakes, then figure out how to fix them (eventually).