Kyverno 1.15: New Policy Types, CEL-Powered


Introduction
Kyverno's v1.15.0-alpha release brings a major update that's going to make writing policies much easier and more intuitive. In this post, I'll give you a quick overview of each new policy type and what makes them better than the previous all-in-one ClusterPolicy.
Why This Matters
ClusterPolicy has been great. It's powerful, well-structured, and supports advanced features like preconditions, context, and API calls. But it relies on JMESPath, which can be tricky to write and debug, especially when you start nesting logic or dealing with strict syntax (yes, those parentheses).
With the new policy types in 1.15, Kyverno introduces separate CRDs for each type of policy, and the biggest improvement is that they're fully CEL-powered. That means:
- Everything (match conditions, validations, mutations) can now be written in CEL
- You get simpler syntax and better error handling
- Easier debugging and lighter policy definitions
- Structure is closer to native Kubernetes ValidatingAdmissionPolicy (VAP) and MutatingAdmissionPolicy (MAP), but more feature-rich and easier to adopt
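Here's a quick taste of that difference: the same "prod label" check written both ways. The ClusterPolicy fragment is a rough paraphrase in the classic JMESPath preconditions style (treat it as a sketch); the CEL fragment is lifted from the ValidatingPolicy example later in this post.
# ClusterPolicy (JMESPath) precondition:
preconditions:
  all:
  - key: "{{ request.object.metadata.labels.prod || '' }}"
    operator: Equals
    value: "true"
# New policy types: the same check as a CEL matchCondition:
matchConditions:
- name: check-prod-label
  expression: >-
    has(object.metadata.labels) && has(object.metadata.labels.prod) &&
    object.metadata.labels.prod == 'true'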
What's New?
Kyverno now supports the following distinct policy types:
- ValidatingPolicy
- MutatingPolicy
- ImageValidatingPolicy
- GeneratingPolicy
- DeletingPolicy
- PolicyException
Each policy type has its own controller and CRD, following the principle of separation of concerns. This not only improves maintainability but also gives better control and clarity over how each type behaves.
And don't worry, it's a smooth upgrade. All existing features are supported in the new policy types too: autogen (policy generation for Pod controllers), evaluation mode, YAML/JSON handling, GlobalContext, PolicyException, PolicyReports, and CLI support.
Up Next
In the rest of this blog, I'll walk you through each policy type with a short description and what makes it useful. You can also try them live on the Kyverno Playground.
You can explore more capabilities and expressions here:
https://kyverno.io/docs/policy-types/validating-policy/#kyverno-cel-libraries
Want to explore the full API structure for the new policies? Here's the complete CRD reference, easy to browse and understand: the Kyverno CRD API Viewer.
Let's dive in!
ValidatingPolicy
This policy validates resources at admission time or in background mode using the ValidatingPolicy kind.
In this example, the policy checks for Pods with the label prod=true and ensures that privilege escalation is explicitly disallowed for all containers.
Learn more about ValidatingPolicy here:
https://kyverno.io/docs/policy-types/validating-policy/
apiVersion: policies.kyverno.io/v1alpha1
kind: ValidatingPolicy
metadata:
  name: disallow-privilege-escalation
spec:
  autogen:
    podControllers:
      controllers:
      - deployments
      - cronjobs
    validatingAdmissionPolicy:
      enabled: true
  validationActions:
  - Audit
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: [v1]
      operations: [CREATE, UPDATE]
      resources: ["pods"]
  matchConditions:
  - name: "check-prod-label"
    expression: >-
      has(object.metadata.labels) && has(object.metadata.labels.prod) && object.metadata.labels.prod == 'true'
  validations:
  - expression: >-
      object.spec.containers.all(container, has(container.securityContext) &&
      has(container.securityContext.allowPrivilegeEscalation) &&
      container.securityContext.allowPrivilegeEscalation == false)
    message: >-
      Privilege escalation is disallowed. The field
      spec.containers[*].securityContext.allowPrivilegeEscalation must be set to `false`.
As you can see, it's simpler than ClusterPolicy and easier to work with than Kubernetes VAP: there's no need to manage bindings manually, since Kyverno handles that for you. You can focus entirely on writing great policies.
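If you want to sanity-check the policy, here's a minimal Pod (a hypothetical resource, purely for illustration) that would pass it: it carries the prod=true label the match condition looks for, and its only container explicitly disables privilege escalation. You can paste the policy and this Pod into the Kyverno Playground to watch the Audit result change as you flip the field.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
  labels:
    prod: "true"
spec:
  containers:
  - name: app
    image: nginx
    securityContext:
      allowPrivilegeEscalation: false   # flip to true and the validation fails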
MutatingPolicy
This MutatingPolicy ensures that Pods created in the autogen-applyconfiguration namespace have the allowPrivilegeEscalation field set to false for all containers.
It uses the ApplyConfiguration patch type to declaratively set the security context without overriding unrelated fields.
Kyverno automatically handles controller-specific autogeneration; in this case, it's enabled for Deployments.
This policy helps enforce container hardening in a specific namespace using a clean, declarative approach.
Learn more about MutatingPolicy here:
Docs coming soon
apiVersion: policies.kyverno.io/v1alpha1
kind: MutatingPolicy
metadata:
  name: test-mpol-applyconfiguration-autogen
spec:
  failurePolicy: Fail
  autogen:
    podControllers:
      controllers:
      - deployments
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["pods"]
  matchConditions:
  - name: is-applyconfiguration-namespace
    expression: object.metadata.namespace == 'autogen-applyconfiguration'
  mutations:
  - patchType: ApplyConfiguration
    applyConfiguration:
      expression: >
        Object{
          spec: Object.spec{
            containers: object.spec.containers.map(container, Object.spec.containers{
              name: container.name,
              securityContext: Object.spec.containers.securityContext{
                allowPrivilegeEscalation: false
              }
            })
          }
        }
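To make the merge behavior concrete, here's a sketch of what the patch does to an incoming container list (the Pod is hypothetical). Because ApplyConfiguration follows server-side-apply semantics and containers are merged by name, only securityContext.allowPrivilegeEscalation is set; unrelated fields like image and ports survive untouched.
# Incoming container (hypothetical):
containers:
- name: app
  image: nginx
  ports:
  - containerPort: 80
# After the ApplyConfiguration patch:
containers:
- name: app
  image: nginx                        # untouched
  ports:
  - containerPort: 80                 # untouched
  securityContext:
    allowPrivilegeEscalation: false   # set by the policy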
GeneratingPolicy
This GeneratingPolicy automatically creates a ConfigMap named zk-kafka-address whenever a Namespace is created or updated.
The generated ConfigMap includes predefined KAFKA_ADDRESS and ZK_ADDRESS entries and is placed in the same namespace as the triggering resource. It uses CEL expressions to dynamically capture the namespace name and generate the resource.
This is useful for injecting cluster-wide service connection details (like ZooKeeper and Kafka endpoints) into every new namespace automatically.
apiVersion: policies.kyverno.io/v1alpha1
kind: GeneratingPolicy
metadata:
  name: zk-kafka-address
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["namespaces"]
  variables:
  - name: nsName
    expression: "object.metadata.name"
  - name: configmap
    expression: >-
      [
        {
          "kind": dyn("ConfigMap"),
          "apiVersion": dyn("v1"),
          "metadata": dyn({
            "name": "zk-kafka-address",
            "namespace": string(variables.nsName),
          }),
          "data": dyn({
            "KAFKA_ADDRESS": "192.168.10.13:9092,192.168.10.14:9092,192.168.10.15:9092",
            "ZK_ADDRESS": "192.168.10.10:2181,192.168.10.11:2181,192.168.10.12:2181"
          })
        }
      ]
  generate:
  - expression: generator.Apply(variables.nsName, variables.configmap)
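For instance, creating a namespace called team-a (a hypothetical name) would result in roughly this ConfigMap being applied alongside it:
apiVersion: v1
kind: ConfigMap
metadata:
  name: zk-kafka-address
  namespace: team-a   # captured from variables.nsName
data:
  KAFKA_ADDRESS: 192.168.10.13:9092,192.168.10.14:9092,192.168.10.15:9092
  ZK_ADDRESS: 192.168.10.10:2181,192.168.10.11:2181,192.168.10.12:2181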
This GeneratingPolicy triggers on ConfigMap deletion and creates a Secret in the same namespace.
If you want to clone a resource, this is a simple example: it uses resource.Get (from the Kyverno CEL library) to fetch the original Secret, and generator.Apply to create it in the target namespace, making the cloning process easy and clean. Note the brackets around variables.source in the generate expression below: generator.Apply takes a list of resources, matching the list literal used in the previous example.
In this case, it clones a Secret named clone-generate-on-trigger-deletion from the default namespace into the namespace where the deletion occurred.
Learn more about GeneratingPolicy here:
Docs coming soon
apiVersion: policies.kyverno.io/v1alpha1
kind: GeneratingPolicy
metadata:
  name: generate-networkpolicy
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["DELETE"]
      resources: ["configmaps"]
  variables:
  - name: nsName
    expression: "namespaceObject.metadata.name"
  - name: source
    expression: resource.Get("v1", "secrets", "default", "clone-generate-on-trigger-deletion")
  generate:
  - expression: generator.Apply(variables.nsName, [variables.source])
DeletingPolicy
This DeletingPolicy runs every minute (*/1 * * * *) and targets all Pods.
It checks whether all containers in the Pod use images built for the amd64 architecture, using image.GetMetadata (from the Kyverno CEL library). If the condition is met, the Pod is deleted.
This is useful for enforcing cleanup or lifecycle rules based on image metadata: for example, removing Pods using specific architectures or outdated images on a schedule.
Learn more about DeletingPolicy here:
Docs coming soon
apiVersion: policies.kyverno.io/v1alpha1
kind: DeletingPolicy
metadata:
  name: image-date-delete
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
  schedule: "*/1 * * * *"
  conditions:
  - name: arch-check
    expression: >
      object.spec.containers.all(c, image.GetMetadata(c.image).config.architecture == "amd64")
ImageValidatingPolicy
This policy is a supply chain security savior: it helps enforce image verification using tools like Notary, Cosign, attestations, SBOMs, and more.
The example below demonstrates how to verify that container images are signed with Cosign using a public key. It specifically matches images from docker.io/kyverno/kyverno* and denies the creation or update of any Pod if signature verification fails.
This helps ensure that only trusted, signed images are allowed into your cluster.
Learn more about ImageValidatingPolicy here:
https://kyverno.io/docs/policy-types/image-validating-policy/
apiVersion: policies.kyverno.io/v1alpha1
kind: ImageValidatingPolicy
metadata:
  name: verify-image-ivpol
spec:
  webhookConfiguration:
    timeoutSeconds: 15
  evaluation:
    background:
      enabled: false
  validationActions: [Deny]
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE", "UPDATE"]
      resources: ["pods"]
  matchImageReferences:
  - glob: "docker.io/kyverno/kyverno*"
  attestors:
  - name: cosign
    cosign:
      key:
        data: |
          -----BEGIN PUBLIC KEY-----
          MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE6QsNef3SKYhJVYSVj+ZfbPwJd0pv
          DLYNHXITZkhIzfE+apcxDjCCkDPcJ3A3zvhPATYOIsCxYPch7Q2JdJLsDQ==
          -----END PUBLIC KEY-----
  validations:
  - expression: >-
      images.containers.map(image, verifyImageSignatures(image, [attestors.cosign])).all(e, e > 0)
    message: >-
      Image signature verification failed.
PolicyException with CEL Expressions
This is useful when you want to exclude certain resources or namespaces from policy enforcement.
With the latest changes, PolicyException now supports CEL expressions, making it more powerful and flexible.
The example below shows how to skip policy enforcement for any resource in the test-ns namespace:
apiVersion: policies.kyverno.io/v1alpha1
kind: PolicyException
metadata:
  name: pod-security-exception
spec:
  matchConditions:
  - name: "check-namespace"
    expression: "object.metadata.namespace == 'test-ns'"
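Since every matchCondition must evaluate to true for the exception to apply, you can stack expressions to narrow the scope. As a hypothetical variant (a sketch, assuming standard admission CEL variables), this spec would only exempt Pods in test-ns:
spec:
  matchConditions:
  - name: "check-namespace"
    expression: "object.metadata.namespace == 'test-ns'"
  - name: "pods-only"
    expression: "object.kind == 'Pod'"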
Learn more about PolicyExceptions with CEL:
https://main.kyverno.io/docs/exceptions/#policyexceptions-with-cel-expressions
Use this to fine-tune exceptions without disabling policies cluster-wide.
Coming Soon
There's also a namespaced version of these policies coming soon. This allows teams to safely test and experiment without affecting the entire cluster. It helps prevent mismanagement and misconfiguration by scoping policies to specific namespaces, making it easier to delegate control while maintaining stability.