Kubernetes Learning Week Series 19


Implementing Dedicated Node Group Isolation with Kyverno on Kubernetes
What's Kyverno?
Kyverno is a Kubernetes-native policy engine designed to simplify and secure configuration governance by mutating, validating, and generating resources through declarative policies written in YAML.
In this blog, we'll explore how to implement dedicated node group isolation in Kubernetes using Kyverno, starting with basic usage for namespace-level isolation, and moving into advanced scenarios involving multiple node groups in the same namespace.
Basic Usage: Namespace-Level Node Group Isolation
This use case is about ensuring that all pods in a namespace (e.g., pre-prod) are scheduled only on a dedicated node group that shares a common taint and label.
Prerequisites
- Kyverno installed via Helm:
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm install kyverno kyverno/kyverno --namespace kyverno --create-namespace
- Create a dedicated node group with the following taint and label (see the sketch below for applying them manually):
Taint: dedicated-asg-4-ns=pre-prod:NoSchedule
Label: dedicated-asg-4-ns=pre-prod
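On EKS or another managed platform, the taint and label would normally be defined in the node group configuration itself. If you are experimenting on an existing cluster, the equivalent manual setup looks like the sketch below; the node name is a placeholder:
# Taint the node so only pods with a matching toleration can schedule onto it
kubectl taint nodes <node-name> dedicated-asg-4-ns=pre-prod:NoSchedule
# Label the node so the injected node affinity can select it
kubectl label nodes <node-name> dedicated-asg-4-ns=pre-prod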
- Create the namespace:
kubectl create namespace pre-prod
Kyverno Policy (Namespace-Level)
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-preprod-dedicated-nodes
spec:
  background: false
  validationFailureAction: Audit
  rules:
    - name: inject-preprod-affinity-toleration
      match:
        any:
          - resources:
              kinds: [Pod]
              namespaces: [pre-prod]
      mutate:
        patchStrategicMerge:
          spec:
            tolerations:
              - key: dedicated-asg-4-ns
                operator: Equal
                value: pre-prod
                effect: NoSchedule
            affinity:
              nodeAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  nodeSelectorTerms:
                    - matchExpressions:
                        - key: dedicated-asg-4-ns
                          operator: In
                          values:
                            - pre-prod
Result
All pods in pre-prod will be auto-mutated with the required toleration and node affinity, ensuring strict placement on the dedicated node group. Before you apply the policy, however, you need to take care of the operations below first, otherwise you will run into errors.
Extend Kyverno Background Controller Permissions
By default, Kyverno's background controller cannot mutate existing Pod resources.
Add a Custom ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kyverno:background-controller-pods
  labels:
    rbac.kyverno.io/aggregate-to-background-controller: "true"
rules:
  - apiGroups: [""]
    resources:
      - pods
    verbs:
      - get
      - list
      - watch
      - patch
      - update
This role will automatically aggregate into the Kyverno background controller via label selectors.
Verify the Final ClusterRole
kubectl get clusterrole kyverno:background-controller -oyaml
Ensure that the rules: section includes permissions for:
- apiGroups:
    - ""
  resources:
    - pods
  verbs:
    - get
    - list
    - watch
    - patch
    - update
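With the policy applied and the RBAC extension in place, a quick way to confirm the mutation works is to create a throwaway pod in pre-prod and inspect its live spec. This is a minimal sketch; the pod name and image are arbitrary:
# Create a test pod in the pre-prod namespace
kubectl run mutation-test --image=nginx -n pre-prod --restart=Never
# The injected toleration and node affinity should appear in the live spec
kubectl get pod mutation-test -n pre-prod -o yaml | grep -A8 "tolerations:"
kubectl get pod mutation-test -n pre-prod -o yaml | grep -A10 "nodeAffinity:"
# Clean up
kubectl delete pod mutation-test -n pre-prod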
Advanced Usage: Mixed Node Groups in the Same Namespace
For more flexibility, you may want some workloads in a namespace (e.g., pre-prod) to run on different node groups (e.g., m7a for CPU-intensive pods and r7a for memory-heavy pods).
Strategy
- Apply the namespace-wide policy but exclude specific pods or deployments via labels (see the Deployment label sketch after this list).
- Create custom policies for selected deployments (e.g., fp-app) to route them to special node groups.
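For the label-based exclusion to work, the excluded Deployments must carry the matching label on their pod template. A minimal sketch for fp-app, assuming the app.kubernetes.io/name label that the namespace-wide policy below matches on (the image is a placeholder):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fp-app
  namespace: pre-prod
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: fp-app
  template:
    metadata:
      labels:
        # This label is what the exclude selector in the namespace-wide policy matches
        app.kubernetes.io/name: fp-app
    spec:
      containers:
        - name: fp-app
          image: <fp-app-image>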
Example Setup
fp-app should run on a node group with:
Taints: dedicated-node-type=mtype:NoSchedule, dedicated-asg-4-ns=pre-prod:NoSchedule
Labels: dedicated-node-type=mtype, dedicated-asg-4-ns=pre-prod
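As before, on a managed platform these taints and labels would normally be defined in the node group configuration; for a manual test the equivalent kubectl commands look like this (node name is a placeholder):
# Apply both taints to the mtype node group's nodes
kubectl taint nodes <node-name> dedicated-node-type=mtype:NoSchedule
kubectl taint nodes <node-name> dedicated-asg-4-ns=pre-prod:NoSchedule
# Apply both labels in one command
kubectl label nodes <node-name> dedicated-node-type=mtype dedicated-asg-4-ns=pre-prod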
Namespace-Wide Policy with Exclusions
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-preprod-dedicated-nodes
spec:
  background: false
  validationFailureAction: Audit
  rules:
    - name: inject-preprod-affinity-toleration
      match:
        any:
          - resources:
              kinds: [Pod]
              namespaces: [pre-prod]
      exclude:
        any:
          - resources:
              kinds: [Pod]
              namespaces: [pre-prod]
              selector:
                matchExpressions:
                  - key: app.kubernetes.io/name
                    operator: In
                    values:
                      - fp-app
                      - fp-async
                      - fp-cron
                      - dcluster
                      - sdg-apiserver
      mutate:
        patchStrategicMerge:
          spec:
            tolerations:
              - key: dedicated-asg-4-ns
                operator: Equal
                value: pre-prod
                effect: NoSchedule
            affinity:
              nodeAffinity:
                requiredDuringSchedulingIgnoredDuringExecution:
                  nodeSelectorTerms:
                    - matchExpressions:
                        - key: dedicated-asg-4-ns
                          operator: In
                          values:
                            - pre-prod
Specific Policy for mType Apps
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: enforce-preprod-toleration-only-fp-app
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: inject-toleration-for-fp-app
      match:
        any:
          - resources:
              kinds: [Deployment]
              namespaces: [pre-prod]
              names: [fp-app]
      mutate:
        patchesJson6902: |-
          - op: add
            path: /spec/template/spec/tolerations/-
            value:
              key: dedicated-asg-4-ns
              operator: Equal
              value: pre-prod
              effect: NoSchedule
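Note that the JSON patch path /spec/template/spec/tolerations/- appends to an existing array, so the fp-app Deployment is expected to already declare a tolerations list, along with its own node selection for the mtype group. A hedged sketch of what its pod template might carry, assuming these fields are set directly in the Deployment rather than by another policy:
# Fragment of the fp-app Deployment (assumption: node selection for the
# mtype group is declared in the Deployment itself)
spec:
  template:
    spec:
      nodeSelector:
        dedicated-node-type: mtype
        dedicated-asg-4-ns: pre-prod
      tolerations:
        # Existing toleration for the mtype taint; the Kyverno policy appends
        # the dedicated-asg-4-ns=pre-prod toleration to this list
        - key: dedicated-node-type
          operator: Equal
          value: mtype
          effect: NoSchedule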
Result
General pods in pre-prod follow the namespace-wide policy and land on the regular r7a node group.
fp-app and the other selected workloads are steered to the mtype-based node group by the custom policy.
Validation Tips
- Use kubectl get pod -n pre-prod -owide to verify node placement.
- Inspect the final pod spec with: kubectl get pod <pod-name> -n pre-prod -o yaml
- Ensure your nodes have both taints correctly set via: kubectl describe node <node-name>
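For a quicker overview, the following standard kubectl output options print just the scheduling-relevant fields; the pod name is a placeholder:
# Show each pod with the node it landed on
kubectl get pods -n pre-prod -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName
# Dump only the tolerations of a single pod
kubectl get pod <pod-name> -n pre-prod -o jsonpath='{.spec.tolerations}'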
Conclusion
With Kyverno, Kubernetes administrators can declaratively enforce intelligent node group scheduling at both the namespace and workload levels. This layered approach ensures:
- Clean separation of workloads by hardware profile
- Granular control over resource scheduling
- Easy governance with no need for custom webhooks or admission controllers
For production setups on EKS, this is a powerful pattern to align infrastructure and workload isolation.