Control Issues: Real Policies in Minutes with Kyverno


Kubernetes has plenty of admission control options on paper, but let’s be real: the list of serious contenders isn’t long, and most of it feels like a trade-off.
There’s Gatekeeper, which is powerful and tightly integrated with Open Policy Agent, but it speaks Rego, a language most folks don’t actually want to learn... unless they’re being paid to.
There’s Kubewarden, which lets you write policies in languages like Rust or Go, assuming you want to compile and manage WASM modules as part of your security workflow.
And then there’s jsPolicy, which brings in JavaScript-based policies. It’s clever, and probably worth a later look, but not exactly mainstream.
Finally, we have Kyverno, the tool I’ll be focusing on in this post and the one I personally reach for first.
Why? Because Kyverno keeps it simple.
- It speaks YAML, not a policy language with a steep learning curve.
- It feels native to Kubernetes, not bolted on.
- It’s easy to install, easy to reason about, and unlike PSA, it actually lets you write your own rules without jumping through hoops.
So in Part 2 of this series, we’ll take a closer look at Kyverno: what it is, how it works, and why it strikes the right balance between power and usability — especially for the people who have to live with these policies day-to-day.
What Is Kyverno, Really?
Kyverno is an admission controller and policy engine purpose-built for Kubernetes. No extra DSLs, no compiled policies, just clean YAML and direct integration with Kubernetes-native resources. Simply put, quite lovely. It runs as an admission controller in your cluster — validating, mutating, or generating resources based on policies you define. If you’ve managed standard resources like Deployments or Services via YAML, Kyverno will feel familiar: policies are written and managed the same way.
Key Things Kyverno Does
- Validation: Block or allow based on conditions
- Mutation: Automatically add defaults or enforce settings
- Generation: Create companion resources when something new is deployed
- Verification: Enforce image provenance by requiring signed containers
Kyverno policies are just Kubernetes resources (`Policy` or `ClusterPolicy`), so you can version, audit, and deploy them using your normal GitOps pipeline or CI tooling. The CRDs do the heavy lifting for you.
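To make that concrete, here’s the general shape of a minimal `ClusterPolicy`. The rule itself is a hypothetical illustration (an invented team-label requirement), but the structure is exactly what you’ll see in the real examples later in this post:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-team-label   # hypothetical policy, for illustration only
spec:
  validationFailureAction: Audit   # report violations instead of blocking them
  rules:
  - name: check-team-label
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Pods must carry a team label."
      pattern:
        metadata:
          labels:
            team: "?*"   # wildcard: any non-empty value
Version it, review it, kubectl apply it: the same workflow as a Deployment or Service.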
It feels like Kyverno was designed for platform and security teams alike. It’s the kind of tool you can hand to a dev, a security engineer, or me, and actually expect them to read, understand, and modify it. That sounds useful to me.
Installing Kyverno
With a basic understanding of Kyverno, it’s time to actually get into it. Of course, that means installing it. We’ll install Kyverno using Helm: it’s fast, reliable, and the preferred method for most setups. The steps here work perfectly on a kubeadm cluster.
There can be issues with Kubernetes version mismatches, as seen in the compatibility matrix. Using Kubernetes 1.31 and Helm gave me no issues.
Add the Kyverno Helm repo:
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
Now install it into its own namespace:
helm install kyverno kyverno/kyverno -n kyverno --create-namespace
That’s it, the admission controller and all other resources will spin up automatically!
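The defaults are fine for getting started, but the chart exposes per-controller settings if you want to tune things. Here’s a hedged values.yaml sketch; the key names follow the Kyverno 3.x chart layout, but verify them for your chart version with helm show values kyverno/kyverno:
# values.yaml: a sketch, not gospel; confirm the keys against your chart version
admissionController:
  replicas: 3          # HA for the webhook path, assuming you have the nodes for it
backgroundController:
  enabled: true        # set to false if you don't want retroactive scanning
cleanupController:
  enabled: true        # only needed if you'll use cleanup policies
reportsController:
  enabled: true        # only needed if you want PolicyReports
Pass it along with helm install kyverno kyverno/kyverno -n kyverno --create-namespace -f values.yaml.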
You can verify everything’s running:
kubectl get po -n kyverno
And you should see something like this (your AGE column will differ):
NAME READY STATUS RESTARTS AGE
kyverno-admission-controller-7c69f7b978-cj7rr 1/1 Running 0 5m8s
kyverno-background-controller-5cd77899f6-svnr4 1/1 Running 0 6m20s
kyverno-cleanup-controller-6796bbffff-q77m4 1/1 Running 0 6m20s
kyverno-reports-controller-75c89fdb99-7gqj4 1/1 Running 0 6m23s
Each pod is backed by its own Deployment, so you’ll see four Deployments, each running a single replica.
Once that’s done, Kyverno is ready to start evaluating policies. No CRDs to apply manually, no webhooks to register; it’s all included. And there are no DaemonSets, so this is not only easy but also lightweight.
Next up, let’s explore what has been created.
Kyverno Component Breakdown
When you install Kyverno, you’re not just getting a single controller, you’re deploying a whole suite of functionality, broken into distinct services.
Admission Controller
Pod: kyverno-admission-controller
- What it does: Core admission webhook that validates, mutates, and generates resources during admission requests (`CREATE`, `UPDATE`, `DELETE`).
- Why it matters: This is where policies are applied in real time as objects enter the cluster.
- Watch out: If it’s not ready, your workloads might get blocked. And if your policies are a hot mess, well...
Background Controller (optional)
Pod: kyverno-background-controller
- What it does: Evaluates existing cluster resources against policies on a schedule.
- Why it matters: This gives Kyverno the ability to enforce compliance retroactively, not just on new resources.
Cleanup Controller (optional)
Pod: kyverno-cleanup-controller
- What it does: Executes cleanup policies to remove stale or expired resources like Jobs or ConfigMaps (see the sketch below).
- Why it matters: Great for enforcing TTLs, auto-cleaning resources after use, or lifecycle management.
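As a taste, here’s a hedged sketch of a cluster-wide cleanup policy that deletes succeeded Jobs in a demo namespace on a schedule. The schedule/match/conditions layout follows the CleanupPolicy CRD, but double-check the apiVersion and the condition expression against your Kyverno version:
# Sketch: hourly, delete Jobs in the demo namespace once they have succeeded.
# Older releases use apiVersion kyverno.io/v2beta1 for cleanup policies.
apiVersion: kyverno.io/v2
kind: ClusterCleanupPolicy
metadata:
  name: cleanup-completed-jobs
spec:
  schedule: "0 * * * *"   # standard cron syntax
  match:
    any:
    - resources:
        kinds:
        - Job
        namespaces:
        - demo
  conditions:
    any:
    - key: "{{ target.status.succeeded || `0` }}"   # count of succeeded pods, defaulting to 0
      operator: GreaterThan
      value: 0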
Reports Controller (optional)
Pod: kyverno-reports-controller
- What it does: Generates `PolicyReport` and `ClusterPolicyReport` resources.
- Why it matters: These reports feed into dashboards, `kubectl`, and alerting systems so you can track policy violations.
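To give you a feel for their shape, here’s a trimmed, invented PolicyReport. The field names follow the wgpolicyk8s.io schema Kyverno uses, but the names and numbers here are made up:
# Invented example; Kyverno generates and names these resources itself
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
  name: cpol-block-hostpath   # hypothetical generated name
  namespace: default
summary:
  pass: 4
  fail: 1
  warn: 0
  error: 0
  skip: 0
results:
- policy: block-hostpath
  rule: disallow-hostpath
  result: fail
  message: "validation error: hostPath volumes are not allowed."
  resources:
  - apiVersion: v1
    kind: Pod
    name: bad-pod
    namespace: default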
Services
| Service Name | Purpose | Port(s) |
| --- | --- | --- |
| kyverno-svc | Admission webhook target | 443 |
| kyverno-svc-metrics | Prometheus metrics for webhook | 8000 |
| *-controller-metrics | Metrics endpoints for all controllers | 8000 |
| kyverno-cleanup-controller | Internal access to cleanup features | 443 |
CRDs
And of course you get a load of Custom Resource Definitions (CRDs). You can see them by listing CRDs and grepping for kyverno. These CRDs are what make policies easy to write and manage.
matt@controlplane:~$ kubectl get crd | grep kyverno
cleanuppolicies.kyverno.io 2025-05-21T18:53:21Z
clustercleanuppolicies.kyverno.io 2025-05-21T18:53:21Z
clusterephemeralreports.reports.kyverno.io 2025-05-21T18:53:21Z
clusterpolicies.kyverno.io 2025-05-21T18:53:21Z
ephemeralreports.reports.kyverno.io 2025-05-21T18:53:21Z
globalcontextentries.kyverno.io 2025-05-21T18:53:21Z
imagevalidatingpolicies.policies.kyverno.io 2025-05-21T18:53:21Z
policies.kyverno.io 2025-05-21T18:53:21Z
policyexceptions.kyverno.io 2025-05-21T18:53:21Z
policyexceptions.policies.kyverno.io 2025-05-21T18:53:21Z
updaterequests.kyverno.io 2025-05-21T18:53:21Z
validatingpolicies.policies.kyverno.io 2025-05-21T18:53:21Z
Summary Table
| Component | Functionality |
| --- | --- |
| Admission Controller | Admission webhook — validate, mutate, generate |
| Background Controller | Scans existing resources for compliance |
| Cleanup Controller | Deletes stale resources |
| Reports Controller | Creates PolicyReports for visibility |
| Metrics Services | Prometheus integration for observability |
Now that you know what Kyverno gives you, you can troubleshoot smarter and decide which features you want to rely on!
What Kyverno Actually Does
Kyverno isn’t just an admission controller, it’s a full-blown policy engine that hooks into Kubernetes in a way that actually feels native. Let’s dig into the four core things it can do (this might be a bit remedial, but it’s worth a look):
Validation
You can define rules that block resources based on specific conditions. For example:
- Block any Pod that uses `hostPath` volumes.
- Require all containers to drop Linux capabilities like `NET_ADMIN`.
- Enforce that Pods must include a `team` label with a valid value.
Validation policies are great for catching risky configs before they hit your cluster. They run during admission and reject non-compliant resources outright. Always there for your trusty `runAsNonRoot: true`.
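Since I brought it up, here’s what that trusty check can look like as a full policy. Treat it as a simplified sketch: the official Kyverno sample is more lenient (it accepts the setting at either the pod or container level via anyPattern), while this version demands it on every container:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-run-as-non-root   # simplified sketch, stricter than the official sample
spec:
  validationFailureAction: Audit   # start in Audit before flipping to Enforce
  rules:
  - name: check-containers
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Containers must set runAsNonRoot to true."
      pattern:
        spec:
          containers:
          - securityContext:
              runAsNonRoot: true   # required on every container in this sketch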
Mutation
Mutation policies modify resources before they’re admitted. This is ideal for setting sane defaults or cleaning up sloppiness:
- Automatically inject a `seccompProfile: RuntimeDefault` if it’s missing.
- Add namespace-wide labels like `owner: platform-team`.
- Force the image pull policy to `IfNotPresent`.
These kinds of policies help keep resources and labels consistent without constant manual review.
Generation
Want to auto-provision supporting resources? Generation policies make that possible:
- Automatically create a `NetworkPolicy` every time a new `Namespace` is created.
- Add default `RoleBinding` objects for dev teams when new environments spin up.
- Generate a `LimitRange` or `ResourceQuota` in each namespace for basic resource governance.
Generation saves time and ensures standard scaffolding is always in place.
Verification
This is Kyverno’s take on supply chain security: block or allow container images based on cryptographic signatures.
- Require that all images in a `prod` namespace are signed.
- Block unsigned images entirely in high-trust workloads.
This feature integrates with Cosign and Notary and can be used for real enforcement. It’s a really cool feature, and definitely not something I initially expected.
Each of these categories is pretty cool. Now onto actual policies.
Kyverno Policy Examples (with ClusterPolicy)
Kyverno policies come in two forms: `Policy` (namespace-scoped) and `ClusterPolicy` (cluster-scoped). There is zero difference in the formatting; it’s just `kind: Policy` or `kind: ClusterPolicy`. For our purposes we’ll assume we want these to be centralized rather than team-specific. For policies like blocking privileged containers, enforcing image signing, or auto-injecting defaults, `ClusterPolicy` is your go-to.
Let's dive into different Kyverno policies covering the gamut of types.
For each of these, once you’ve created and saved the policy, you just run the usual `kubectl create -f policy.yaml`.
1. Validation Policy: Block hostPath Volumes
Use case: Prevent users from mounting sensitive host paths.
- Highlights the `match` and `validate` blocks
- Example: `spec.validate.pattern` to reject any Pod with a `hostPath` volume
- Blocks rather than audits, via `validationFailureAction: Enforce`
- Shows how Kyverno blocks the Pod with a clear admission message
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: block-hostpath
spec:
  validationFailureAction: Enforce
  rules:
  - name: disallow-hostpath
    match:
      resources:
        kinds:
        - Pod
        selector:
          matchLabels:
            app: kyverno-demo
    validate:
      message: "hostPath volumes are not allowed."
      pattern:
        spec:
          =(volumes):              # =( ) anchor: only checked when volumes is present
          - X(hostPath): "null"    # X( ) anchor: hostPath must not be present
The added selector scopes the policy so we don’t have to worry about affecting every workload; it gets annoying when you forget that and the policy hangs around. We can test it with the following pod spec (note the `app: kyverno-demo` label, which the policy’s selector needs in order to match):
apiVersion: v1
kind: Pod
metadata:
  name: bad-pod
  labels:
    app: kyverno-demo   # matches the policy's selector
spec:
  containers:
  - name: nginx
    image: nginx
  volumes:
  - name: host
    hostPath:
      path: /etc
Upon applying you should see the following:
matt@controlplane:~$ kubectl apply -f bad-pod-hostpath.yaml
Error from server: error when creating "bad-pod-hostpath.yaml": admission webhook "validate.kyverno.svc-fail" denied the request:
resource Pod/default/bad-pod was blocked due to the following policies
block-hostpath:
disallow-hostpath: 'validation error: hostPath volumes are not allowed. rule disallow-hostpath
failed at path /spec/volumes/0/hostPath/'
Cool.
2. Mutation Policy: Inject a seccompProfile
Use case: Ensure all Pods default to `RuntimeDefault` if it isn’t explicitly set.
- Shows how Kyverno can inject security settings automatically, no PR required
- A hands-off way to harden runtime configs using `mutate` logic and smart defaults
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-seccomp
spec:
  rules:
  - name: default-seccomp
    match:
      resources:
        kinds:
        - Pod
        selector:
          matchLabels:
            app: kyverno-demo
    mutate:
      patchStrategicMerge:
        spec:
          securityContext:
            seccompProfile:
              type: RuntimeDefault
We’ll use a fresh nginx pod this time: the same idea as the first example, but without the hostPath volume and with no `seccompProfile` set. After applying the policy and the pod, we should see the pod updated.
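A pod spec along these lines does the trick; the name matches the output below, and the `app: kyverno-demo` label is what the mutate rule’s selector keys on:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-test
  labels:
    app: kyverno-demo   # required so the mutate rule matches
spec:
  containers:
  - name: nginx
    image: nginx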
matt@controlplane:~$ kubectl get pod nginx-test -o yaml | grep -A2 seccomp
seccompProfile:
type: RuntimeDefault
serviceAccount: default
3. Generation Policy: Create a NetworkPolicy per Namespace
Use case: Enforce network isolation by default when a Namespace is created.
- Demonstrates the `generate` block and `synchronize: true` behavior for keeping resources in sync
- Automatically applies a default-deny `NetworkPolicy` to new namespaces, effective only if your CNI plugin supports enforcement
This works fine in my cluster with Calico. If you're not using Calico or Cilium, double-check that your CNI actually enforces NetworkPolicy.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: generate-network-policy
spec:
  rules:
  - name: default-deny
    match:
      resources:
        kinds:
        - Namespace
        name: secure-ns   # scopes the demo to one namespace; drop this to cover all
    generate:
      apiVersion: networking.k8s.io/v1
      kind: NetworkPolicy
      name: default-deny
      namespace: "{{request.object.metadata.name}}"
      synchronize: true   # keep the generated resource in sync with the policy
      data:
        spec:
          podSelector: {}   # selects every pod in the namespace
          policyTypes:
          - Ingress
          - Egress
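To trigger the rule, create the matching namespace:
apiVersion: v1
kind: Namespace
metadata:
  name: secure-ns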
Once the namespace is created, we should see the generated policy inside it:
matt@controlplane:~$ kubectl get networkpolicy -n secure-ns
NAME POD-SELECTOR AGE
default-deny <none> 28s
4. Verification Policy: Require Signed Images with Cosign
Use case: Block unsigned images unless they’re verified by a public key.
- Shows the `verifyImages` block
- Can tie into GitHub Actions workflows, Cosign keys, etc.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-signed-images
spec:
  validationFailureAction: Enforce
  rules:
  - name: verify-prod-images
    match:
      resources:
        kinds:
        - Pod
    verifyImages:
    - imageReferences:
      - "ghcr.io/my-org/*"
      attestors:
      - entries:
        - keyless:   # Sigstore keyless: verify by OIDC identity, not a static key
            subject: "my-gh-oidc-issuer@example.com"
            issuer: "https://token.actions.githubusercontent.com"
I’ll leave it at the YAML for this one, since verifying it end-to-end requires a signing pipeline. But this is the shape the policy takes, and I expect it will work.
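For completeness: if you sign with a static Cosign key pair instead of keyless, the attestor entry swaps `keyless` for `keys`. A hedged fragment, with placeholder key material:
    verifyImages:
    - imageReferences:
      - "ghcr.io/my-org/*"
      attestors:
      - entries:
        - keys:
            publicKeys: |-
              -----BEGIN PUBLIC KEY-----
              <contents of your cosign.pub go here>
              -----END PUBLIC KEY-----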
Under the Hood: How Kyverno Validation Actually Works
So what happens when you try to deploy a Pod that violates a `ClusterPolicy`? Let’s break it down. Yes, another numbered list, but stay with me, loyal reader.
1. Admission Request Hits the API Server
When you run:
kubectl apply -f pod-with-hostpath.yaml
The API server checks if any validating webhooks are registered. Since Kyverno installs a webhook, the API server pauses and sends the Pod spec to Kyverno for review.
2. Kyverno Matches the Policy
Kyverno checks its cache of active `ClusterPolicy` and `Policy` resources.
For each validation rule, it asks:
- Does this rule apply to a `Pod`?
- Does the `match` block match the labels, namespace, etc.?
- If so, does the `validate` pattern match the resource?
If the resource fails the pattern check, Kyverno returns an error message and rejects the request.
3. API Server Blocks the Resource
If `validationFailureAction: Enforce` is set and the pattern fails, the webhook responds with something like:
{
  "allowed": false,
  "status": {
    "message": "hostPath volumes are not allowed."
  }
}
The Pod never makes it to the scheduler. Way simpler than writing your own webhook.
Testing Policies Locally with kyverno apply
Before enforcing a policy in your cluster, it's a good idea to test it locally, just like you'd run `terraform plan` or `kustomize build`.
The Kyverno CLI lets you dry-run a policy against Kubernetes resources to see if they'd pass or be blocked. It's perfect for quick feedback loops and can easily be added to CI pipelines for pre-merge checks.
Install Kyverno CLI (macOS)
If you're on macOS and using Homebrew:
brew install kyverno
Then verify the install:
kyverno version
You should see something like this:
Version: 1.14.4
Time: ---
Git commit ID: ---
Run a Dry-Run Test
Here’s how to test your `block-hostpath.yaml` policy against a sample Pod spec:
kyverno apply block-hostpath.yaml -r pod-with-hostpath.yaml
You should see:
Applying policy block-hostpath.yaml to resource pod-with-hostpath.yaml...
rule disallow-hostpath[validation] failed. Resource pod-with-hostpath.yaml was blocked.
hostPath volumes are not allowed.
This confirms the policy works as expected — and you can catch misconfigurations or rule logic issues before they hit the cluster. Adding it to a GitHub Action is a smooth way to validate changes before they ever reach the cluster; a sketch follows below.
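A hedged sketch of such a workflow. The CLI version, release asset name, and repo paths (policies/, manifests/) are assumptions, so check them against the Kyverno releases page and your own layout:
# .github/workflows/policy-check.yaml (hypothetical)
name: policy-check
on: [pull_request]
jobs:
  kyverno-apply:
    runs-on: ubuntu-latest
    steps:
    - uses: actions/checkout@v4
    - name: Install Kyverno CLI
      run: |
        curl -sSLO https://github.com/kyverno/kyverno/releases/download/v1.14.4/kyverno-cli_v1.14.4_linux_x86_64.tar.gz
        tar -xzf kyverno-cli_v1.14.4_linux_x86_64.tar.gz
        sudo mv kyverno /usr/local/bin/kyverno
    - name: Dry-run policies against manifests
      run: kyverno apply policies/ --resource manifests/   # assumed repo layout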
Wrap Up
You’ve now seen what Kyverno can actually do and how to use it:
- Block risky configs like `hostPath` with validation
- Inject runtime defaults like `seccompProfile` automatically
- Auto-generate companion resources like `NetworkPolicy`
- Enforce signed container images (or at least write the policy)
- Test your policies locally with the CLI
Kyverno is a breeze to get started with. Installing it via Helm and writing your first validation policy can take just a few minutes. But setup is only the beginning — we haven’t yet touched on what happens once it’s running in a real environment. Up next, we’ll dive into monitoring, reporting, and ways to maximize Kyverno’s usefulness in day-to-day operations. We’ll also explore how it can amplify other security tools in your stack. Onwards and upwards.