kube-bench: The Posture Check That Time Forgot

Matt Brown

I heard someone mention kube-bench the other day, and I had to double-check it still existed.

Turns out it does, though it seems to be in a maintenance phase. And it’s doing exactly what I remembered (which I used to think was pretty awesome):
running through the CIS Kubernetes Benchmarks and telling you whether your cluster’s control plane and node config match the hardening guides.

  • No nice operator model. Not even a basic DaemonSet option.
  • No fancy dashboards (maybe that’s a good thing?).
  • And no support for most of the other posture frameworks floating around these days.

Just a local config check run from a container or a Job. And to see the results, you read the logs: a blunt little report telling you which checks you failed.

But people still use it. Because sometimes, checking boxes is the job, especially when compliance or audit is driving the timeline.

In this post, I’ll break down what good old kube-bench actually does, what it doesn’t, and how it fits into the broader (and more complicated) world of Kubernetes posture management. I’ll also show how to run it in a real cluster, using a CronJob and structured output, so it’s more than just a one-off CLI tool buried in job logs.

Now before we spiral into a debate about whether this belongs in the Trivy Operator or whether that’s better or worse than Kubescape, let’s stay focused. We’re here to talk about kube-bench.


What kube-bench Actually Does

kube-bench is a CLI tool that runs a set of checks based on the various CIS Kubernetes Benchmarks.

So what does it do in practice? It doesn't guess, learn, or interpret. It just reads your:

  • Component flags
  • Config files
  • File permissions
  • Etc.

Then it compares those values to the CIS benchmark for your Kubernetes version and prints out something like:

[INFO] 4 Worker Node Security Configuration
[INFO] 4.1 Worker Node Configuration Files
[FAIL] 4.1.1 Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)
[PASS] 4.1.2 Ensure that the kubelet service file ownership is set to root:root (Automated)
[WARN] 4.1.3 If proxy kubeconfig file exists ensure permissions are set to 600 or more restrictive (Manual)
[WARN] 4.1.4 If proxy kubeconfig file exists ensure ownership is set to root:root (Manual)

That’s it.

It supports the expected major components:

  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler
  • etcd
  • kubelet
  • Node-level system config

The checks are defined in YAML files per Kubernetes version, and kube-bench parses them to decide what to look for and how to grade it.
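
For a feel of the format, here’s a simplified sketch of a node-role check definition, modeled on the 4.1.1 check above (abridged; the real files in cfg/ carry a few more fields):

controls:
id: 4
text: "Worker Node Security Configuration"
type: "node"
groups:
  - id: 4.1
    text: "Worker Node Configuration Files"
    checks:
      - id: 4.1.1
        text: "Ensure that the kubelet service file permissions are set to 600 or more restrictive (Automated)"
        # the audit command shells out on the host; $kubeletsvc is substituted from per-platform config
        audit: '/bin/sh -c "if test -e $kubeletsvc; then stat -c permissions=%a $kubeletsvc; fi"'
        tests:
          test_items:
            - flag: "permissions"
              compare:
                op: bitmask
                value: "600"
        remediation: "Run: chmod 600 $kubeletsvc"
        scored: true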


Running kube-bench in a Real Cluster

You can run kube-bench inside a Kubernetes cluster with minimal effort. This section walks through two practical approaches:

  • Ad hoc usage with a Kubernetes Job
  • Recurring scans using a CronJob with per-node targeting

Option 1: Ad Hoc Run as a Job

If you just want to scan one node and see what kube-bench says, a plain Kubernetes Job will do. Note that if the Job doesn't land on a master (control plane) node, you'll get fewer results, since the majority of checks only apply to control-plane components.
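
If you do want the full control-plane checks, schedule the pod onto a control-plane node. On kubeadm-style clusters that's typically a nodeSelector plus a toleration added to the Job's pod spec below (label and taint names vary by distro; these are the upstream defaults):

# add to the Job's pod spec to land on a control-plane node
nodeSelector:
  node-role.kubernetes.io/control-plane: ""
tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule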

Minimal kube-bench Job Example

apiVersion: batch/v1
kind: Job
metadata:
  name: kube-bench
spec:
  template:
    metadata:
      labels:
        app: kube-bench
    spec:
      containers:
        - command: ["kube-bench"]
          image: docker.io/aquasec/kube-bench:v0.11.1
          name: kube-bench
          volumeMounts:
            - name: var-lib-cni
              mountPath: /var/lib/cni
              readOnly: true
            - mountPath: /var/lib/etcd
              name: var-lib-etcd
              readOnly: true
            - mountPath: /var/lib/kubelet
              name: var-lib-kubelet
              readOnly: true
            - mountPath: /var/lib/kube-scheduler
              name: var-lib-kube-scheduler
              readOnly: true
            - mountPath: /var/lib/kube-controller-manager
              name: var-lib-kube-controller-manager
              readOnly: true
            - mountPath: /etc/systemd
              name: etc-systemd
              readOnly: true
            - mountPath: /lib/systemd/
              name: lib-systemd
              readOnly: true
            - mountPath: /srv/kubernetes/
              name: srv-kubernetes
              readOnly: true
            - mountPath: /etc/kubernetes
              name: etc-kubernetes
              readOnly: true
            - mountPath: /usr/local/mount-from-host/bin
              name: usr-bin
              readOnly: true
            - mountPath: /etc/cni/net.d/
              name: etc-cni-netd
              readOnly: true
            - mountPath: /opt/cni/bin/
              name: opt-cni-bin
              readOnly: true
      hostPID: true
      restartPolicy: Never
      volumes:
        - name: var-lib-cni
          hostPath:
            path: /var/lib/cni
        - hostPath:
            path: /var/lib/etcd
          name: var-lib-etcd
        - hostPath:
            path: /var/lib/kubelet
          name: var-lib-kubelet
        - hostPath:
            path: /var/lib/kube-scheduler
          name: var-lib-kube-scheduler
        - hostPath:
            path: /var/lib/kube-controller-manager
          name: var-lib-kube-controller-manager
        - hostPath:
            path: /etc/systemd
          name: etc-systemd
        - hostPath:
            path: /lib/systemd
          name: lib-systemd
        - hostPath:
            path: /srv/kubernetes
          name: srv-kubernetes
        - hostPath:
            path: /etc/kubernetes
          name: etc-kubernetes
        - hostPath:
            path: /usr/bin
          name: usr-bin
        - hostPath:
            path: /etc/cni/net.d/
          name: etc-cni-netd
        - hostPath:
            path: /opt/cni/bin/
          name: opt-cni-bin

Run it via:

kubectl apply -f job.yaml

To view the output:

kubectl logs job/kube-bench

Voila. It's useful for a quick compliance check, but it's ephemeral and limited to a single node.

Why hostPID: true and hostPath mounts?

kube-bench needs to read both the running process flags and configuration files of your Kubernetes components to perform many CIS benchmark checks.

hostPID: true lets it inspect host-level processes (e.g., kube-apiserver, etcd, kubelet) and validate how they were started.

hostPath volumes let it access local config directories like /etc/kubernetes and /var/lib/kubelet. Without these, kube-bench can’t do some of the most useful checks.
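
For example, many of the control-plane checks shell out and read the live process table, which only works when the pod shares the host's PID namespace:

# roughly what an audit command does under hostPID: read the apiserver's startup flags
ps -ef | grep kube-apiserver | grep -v grep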

Option 2: Recurring Scans with a CronJob

If you want regular posture scans (daily, weekly, etc.), a Kubernetes CronJob makes it easy. This example schedules kube-bench to run at a defined interval and writes JSON output to /var/kube-bench-results/${NODE_NAME}.json on the host, where ${NODE_NAME} is the node the job ran on.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: kube-bench
spec:
  schedule: "0 3 * * *"  # Run daily at 3am
  jobTemplate:
    spec:
      template:
        metadata:
          labels:
            app: kube-bench
        spec:
          containers:
            - command: ["/bin/sh", "-c"]
              args:
                - "kube-bench --json > /output/${NODE_NAME}.json"
              env:
                - name: NODE_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName
              image: docker.io/aquasec/kube-bench:v0.11.1  # pin a version rather than :latest for repeatable scans
              name: kube-bench
              volumeMounts:
                - name: var-lib-cni
                  mountPath: /var/lib/cni
                  readOnly: true
                - mountPath: /var/lib/etcd
                  name: var-lib-etcd
                  readOnly: true
                - mountPath: /var/lib/kubelet
                  name: var-lib-kubelet
                  readOnly: true
                - mountPath: /var/lib/kube-scheduler
                  name: var-lib-kube-scheduler
                  readOnly: true
                - mountPath: /var/lib/kube-controller-manager
                  name: var-lib-kube-controller-manager
                  readOnly: true
                - mountPath: /etc/systemd
                  name: etc-systemd
                  readOnly: true
                - mountPath: /lib/systemd/
                  name: lib-systemd
                  readOnly: true
                - mountPath: /srv/kubernetes/
                  name: srv-kubernetes
                  readOnly: true
                - mountPath: /etc/kubernetes
                  name: etc-kubernetes
                  readOnly: true
                - mountPath: /usr/local/mount-from-host/bin
                  name: usr-bin
                  readOnly: true
                - mountPath: /etc/cni/net.d/
                  name: etc-cni-netd
                  readOnly: true
                - mountPath: /opt/cni/bin/
                  name: opt-cni-bin
                  readOnly: true
                - name: results
                  mountPath: /output
          hostPID: true
          restartPolicy: Never
          volumes:
            - name: var-lib-cni
              hostPath:
                path: /var/lib/cni
            - hostPath:
                path: /var/lib/etcd
              name: var-lib-etcd
            - hostPath:
                path: /var/lib/kubelet
              name: var-lib-kubelet
            - hostPath:
                path: /var/lib/kube-scheduler
              name: var-lib-kube-scheduler
            - hostPath:
                path: /var/lib/kube-controller-manager
              name: var-lib-kube-controller-manager
            - hostPath:
                path: /etc/systemd
              name: etc-systemd
            - hostPath:
                path: /lib/systemd
              name: lib-systemd
            - hostPath:
                path: /srv/kubernetes
              name: srv-kubernetes
            - hostPath:
                path: /etc/kubernetes
              name: etc-kubernetes
            - hostPath:
                path: /usr/bin
              name: usr-bin
            - hostPath:
                path: /etc/cni/net.d/
              name: etc-cni-netd
            - hostPath:
                path: /opt/cni/bin/
              name: opt-cni-bin
            - hostPath:
                path: /var/kube-bench-results
                type: DirectoryOrCreate
              name: results

Here's how it works.

            - command: ["/bin/sh", "-c"]
              args:
                - "kube-bench --json > /output/${NODE_NAME}.json"
              env:
                - name: NODE_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName

This runs kube-bench with the --json flag and redirects the report to ${NODE_NAME}.json, with NODE_NAME injected from the downward API via an environment variable. The file lands in the /output folder.

            - name: results
              mountPath: /output

A basic volume mount on the container points its /output folder at the results volume.

            - hostPath:
                path: /var/kube-bench-results
                type: DirectoryOrCreate
              name: results

The results volume is a hostPath mount to /var/kube-bench-results on the node, and DirectoryOrCreate creates the directory if it isn't there.

Since you're not keeping historical posture results, each run just overwrites the last one, effectively giving you a rolling snapshot of your cluster's config state. Not too bad. No log trawling now.
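
Since the output is JSON, pulling just the failures out of a node's report is a one-liner. A sketch, assuming jq is available on the host and kube-bench's Controls/tests/results JSON layout (my-node.json is a placeholder for whatever node the job ran on):

# list failed checks from a node's JSON report
jq -r '.Controls[].tests[].results[] | select(.status == "FAIL") | "\(.test_number) \(.test_desc)"' \
  /var/kube-bench-results/my-node.json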


A Quick Peek Under the Hood

kube-bench has an easily readable codebase, which makes it a great way to understand how it decides what to check and why.

Here’s a breakdown of what it does behind the scenes.

Where the Real Work Happens

Inside the repo are a few interesting finds:

  • cfg/: All benchmark definitions (by version)
  • cfg/config.yaml: The mapping of K8s version to benchmark
  • cfg/cis-x/: YAML files like master.yaml and node.yaml for control plane, workers, etc.
  • cmd/kubernetes_version.go: Auto-detection of the K8s version

Deciding What to Check

  1. Detect your Kubernetes version

    • Queries the API server
  2. Map to a CIS benchmark directory

    • Example: v1.31.x maps to cfg/cis-1.10/
  3. Load target roles

    • --targets master,node,etcd → loads master.yaml, node.yaml, etcd.yaml
    • These YAML files define the checks, descriptions, file paths, expected flags, and evaluation logic
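
The version-to-benchmark mapping lives in cfg/config.yaml. A rough, abridged sketch of what it contains:

version_mapping:
  "1.15": "cis-1.5"
  "1.16": "cis-1.6"
  "1.31": "cis-1.10"

target_mapping:
  "cis-1.10":
    - "master"
    - "node"
    - "controlplane"
    - "etcd"
    - "policies"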

Want to Force a Version?

There are quite a few flags. One of the more interesting ones is --benchmark. If you want to run a specific version (like cis-1.7) regardless of what kube-bench detects:

kube-bench --benchmark cis-1.7

This could be useful if:

  • Your cluster version is newer than the latest supported benchmark
  • You want consistent output across mixed-version clusters

I chose 1.7 randomly, don't ask.

Practical Walkthrough on a 1.31 Cluster

There’s more going on under the hood, but these high-level steps summarize how kube-bench figures out what to run on a 1.31 Kubernetes cluster:

  1. Query the API server and detect v1.31.x
  2. Map that version to cfg/cis-1.10/ via cfg/config.yaml
  3. Load the YAML rule files for the requested targets (master.yaml, node.yaml, etcd.yaml, etc.)
  4. Run each check’s audit command and grade the result as PASS, FAIL, or WARN

Onwards and upwards.


The Benchmarks kube-bench Does (and Doesn’t) Cover

Now that we’ve looked at how kube-bench runs and what powers it under the hood, let’s step back and examine what it actually checks for, and what it doesn’t even try to.

Benchmarks kube-bench Does Support

kube-bench is focused entirely on CIS Kubernetes Benchmarks.

As of now, kube-bench includes benchmark definitions for:

  • Plain Kubernetes benchmarks cis-1.5.1 through cis-1.10, which cover Kubernetes 1.15 through 1.31
  • Public cloud Kubernetes benchmarks (GKE, EKS, AKS)
  • Others (k3s, Rancher, OpenShift, Tanzu)

Each benchmark version lives in a versioned folder containing YAML rules scoped by node role as you've seen.

What the CIS Benchmarks Actually Check

The CIS Kubernetes Benchmarks generally focus on low-level, host- and Kubernetes-centric security configurations that reduce risk in a cluster. That includes checks like:

  • Permissions and ownership of sensitive files (e.g. kubelet.conf, etcd.conf)
  • Use of secure API server flags (--anonymous-auth=false, --audit-log-path)
  • Disabling insecure features (basic auth, profiling, always-allow admission)
  • Proper certificate setup and kubelet authentication
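
Most of these boil down to commands you could run by hand. The file-permission checks, for instance, are essentially this (path assumes a kubeadm-style node):

# what the permission/ownership checks look for
stat -c '%a %U:%G' /etc/kubernetes/kubelet.conf
# expect 600 (or tighter) and root:root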

It’s all about making sure your cluster has the basics right.

What kube-bench Doesn’t Cover

Despite being useful for CIS compliance, kube-bench doesn’t aim to be a comprehensive posture scanner. It leaves out a lot of security domains you might care about in production.

Not covered:

  • PodSecurity Standards (restricted / baseline / privileged)
  • NSA / CISA Kubernetes Hardening Guidelines
  • OWASP Kubernetes Top Ten
  • Generic Standards (e.g. NIST CSF, SOC2, PCI)
  • Custom misconfigurations

If your goal is full-cluster posture analysis, CIS benchmarks are just one piece of the puzzle. kube-bench gives you that piece. No more, no less.


When kube-bench Actually Makes Sense

It should be obvious that kube-bench isn’t a platform. It’s not going to save your cluster. But it can be useful if you know exactly what you're getting.

When It’s Helpful

  • Baseline posture snapshot during cluster setup
  • Compliance checkboxing
  • “We did something!”

When It’s Not Enough

  • You want continuous posture drift detection
  • You care about non-CIS policies
  • You need real remediation or GitOps workflows
  • You’re securing cloud-managed services (IAM, network policies, etc.)

kube-bench gives you a quick gut check. It's not a posture platform, and it won’t replace a full KSPM tool, but it is good enough to show you tried.


Bonus: The DaemonSet Hack

kube-bench doesn’t ship as a DaemonSet. But you can simulate one by creating a CronJob per node using some bash and node labels.

Here’s the general idea:

  1. Label each node you want to scan:
kubectl label node <node-name> kube-bench=true
  2. Loop over each labeled node and create the earlier CronJob scoped to it:
for node in $(kubectl get nodes -l kube-bench=true -o jsonpath='{.items[*].metadata.name}'); do
  kubectl apply -f <(sed "s/NODE_NAME_PLACEHOLDER/$node/" kube-bench-cron.yaml)
done
  3. Your CronJob template should be updated like this:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: kube-bench-NODE_NAME_PLACEHOLDER
spec:
  schedule: "0 3 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          nodeSelector:
            kubernetes.io/hostname: NODE_NAME_PLACEHOLDER
          containers:
            - name: kube-bench
              image: docker.io/aquasec/kube-bench:v0.11.1
              env:
                - name: NODE_NAME
                  valueFrom:
                    fieldRef:
                      fieldPath: spec.nodeName
              command: ["/bin/sh", "-c"]
              args:
                - "kube-bench --json > /output/${NODE_NAME}.json"
              volumeMounts:
                - name: var-lib-cni
                  mountPath: /var/lib/cni
                  readOnly: true
                - mountPath: /var/lib/etcd
                  name: var-lib-etcd
                  readOnly: true
                - mountPath: /var/lib/kubelet
                  name: var-lib-kubelet
                  readOnly: true
                - mountPath: /var/lib/kube-scheduler
                  name: var-lib-kube-scheduler
                  readOnly: true
                - mountPath: /var/lib/kube-controller-manager
                  name: var-lib-kube-controller-manager
                  readOnly: true
                - mountPath: /etc/systemd
                  name: etc-systemd
                  readOnly: true
                - mountPath: /lib/systemd/
                  name: lib-systemd
                  readOnly: true
                - mountPath: /srv/kubernetes/
                  name: srv-kubernetes
                  readOnly: true
                - mountPath: /etc/kubernetes
                  name: etc-kubernetes
                  readOnly: true
                - mountPath: /usr/local/mount-from-host/bin
                  name: usr-bin
                  readOnly: true
                - mountPath: /etc/cni/net.d/
                  name: etc-cni-netd
                  readOnly: true
                - mountPath: /opt/cni/bin/
                  name: opt-cni-bin
                  readOnly: true
                - name: results
                  mountPath: /output
          hostPID: true
          restartPolicy: Never
          volumes:
            - name: var-lib-cni
              hostPath:
                path: /var/lib/cni
            - hostPath:
                path: /var/lib/etcd
              name: var-lib-etcd
            - hostPath:
                path: /var/lib/kubelet
              name: var-lib-kubelet
            - hostPath:
                path: /var/lib/kube-scheduler
              name: var-lib-kube-scheduler
            - hostPath:
                path: /var/lib/kube-controller-manager
              name: var-lib-kube-controller-manager
            - hostPath:
                path: /etc/systemd
              name: etc-systemd
            - hostPath:
                path: /lib/systemd
              name: lib-systemd
            - hostPath:
                path: /srv/kubernetes
              name: srv-kubernetes
            - hostPath:
                path: /etc/kubernetes
              name: etc-kubernetes
            - hostPath:
                path: /usr/bin
              name: usr-bin
            - hostPath:
                path: /etc/cni/net.d/
              name: etc-cni-netd
            - hostPath:
                path: /opt/cni/bin/
              name: opt-cni-bin
            - hostPath:
                path: /var/kube-bench-results
                type: DirectoryOrCreate
              name: results

This hack gets you kube-bench on every node, running on a schedule, with results written cleanly to the host. No operator required. Could be worse.
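
Cleanup is the same loop in reverse, using the names the template generates:

# remove the per-node CronJobs created above
for node in $(kubectl get nodes -l kube-bench=true -o jsonpath='{.items[*].metadata.name}'); do
  kubectl delete cronjob "kube-bench-$node"
done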


Wrap-Up

If you're looking for a lightweight, no-frills way to check your Kubernetes cluster against the CIS benchmarks, kube-bench still holds up. It's not a Swiss Army knife, which is exactly what makes it easy to adopt.

No dashboards. No complex setup. No tool doing too much. Just a CLI tool (and maybe a CronJob) that says, “Hey, maybe don’t expose your control plane to the world.”

Use it to baseline a new cluster. Add it to a cluster that has never had a security tool. Use it to close that compliance ticket. Just don’t expect it to be your posture source of truth.

Written by Matt Brown

Working as a solutions architect while going deep on Kubernetes security — prevention-first thinking, open source tooling, and a daily rabbit hole of hands-on learning. I make the mistakes, then figure out how to fix them (eventually).