Kubernetes Runtime Enforcement with KubeArmor


In the last post, we rolled up our sleeves and built a minimal AppArmor profile from scratch — one script, one path, one rule at a time. We also applied it inside Kubernetes to block a pod from writing to /tmp. It worked well — but it’s not exactly scalable. Writing policies in this fashion is like slow-smoking one perfect brisket (which I am already not that great at) — impressive, but not how you feed a stadium. KubeArmor gives you the same fine-grained control without the intricate manual work of crafting AppArmor profiles by hand.
In this post, we’ll try out KubeArmor and see how it makes AppArmor practical in Kubernetes. A big takeaway: doing this by hand, the way we did in Part 1, is a bit silly.
What Is KubeArmor?
KubeArmor is an open source runtime security enforcement system that hooks directly into Linux Security Modules like AppArmor and SELinux. It tries to bring LSM-style enforcement into the fast-paced world of Kubernetes. It’s the assembly line of runtime protection.
Originally developed by AccuKnox, KubeArmor is now a CNCF Sandbox project with an active open source community on GitHub. KubeArmor alleviates the pain of managing LSMs in cloud-native environments:
- No more manually writing and loading LSM profiles
- No more belabored pod security context or node-by-node patching
- And full runtime visibility — with logs you can actually use
KubeArmor runs as a DaemonSet to handle LSM interactions, with a controller/operator model for managing policies, and (optionally) a relay for centralizing logs and outputs. Policies are written in a Kubernetes-native CRD format — you define what’s allowed or denied, and KubeArmor enforces it across your cluster.
You can easily deploy it using the official Helm chart, and it’s designed to be plug-and-play for most modern Kubernetes setups.
Installing KubeArmor (The Full Stack)
We’re using the official Helm chart and deploying into a clean namespace. This will install the KubeArmor Operator, which manages the rest of the system via a custom resource. If you’ve previously installed this, you might need to clean up some CRD-related leftovers.
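If you haven’t added the KubeArmor chart repository yet, do that first. The repo URL below is the one published in KubeArmor’s docs; verify it against the project README if Helm can’t resolve the chart:
helm repo add kubearmor https://kubearmor.github.io/charts
helm repo update kubearmor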
helm upgrade --install kubearmor-operator kubearmor/kubearmor-operator \
--namespace kubearmor --create-namespace
Once that’s installed, apply the default config to actually trigger the rollout:
kubectl apply -f https://raw.githubusercontent.com/kubearmor/KubeArmor/main/pkg/KubeArmorOperator/config/samples/sample-config.yml
That custom resource (a KubeArmorConfig) will prompt the operator to install the DaemonSet, controller, and other components — all using the operator pattern.
What Just Got Installed?
You can see everything installed with:
kubectl get all -n kubearmor
Expected Output:
NAME                                            READY   STATUS    RESTARTS   AGE
pod/kubearmor-apparmor-containerd-98c2c-8lf56   1/1     Running   0          23h
pod/kubearmor-apparmor-containerd-98c2c-hv6xp   1/1     Running   0          23h
pod/kubearmor-controller-75d6976554-ddqph       1/1     Running   0          23h
pod/kubearmor-operator-74c5c559bd-rhqjs         1/1     Running   0          25h
pod/kubearmor-relay-5c4f88f874-prvvh            1/1     Running   0          23h

NAME                                            TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)     AGE
service/kubearmor                               ClusterIP   10.103.102.251   <none>        32767/TCP   23h
service/kubearmor-controller-webhook-service    ClusterIP   10.108.199.213   <none>        443/TCP     23h

NAME                                                  DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                                                                                                                                                                                 AGE
daemonset.apps/kubearmor-apparmor-containerd-98c2c    2         2         2       2            2           kubearmor.io/btf=yes,kubearmor.io/enforcer=apparmor,kubearmor.io/runtime=containerd,kubearmor.io/seccomp=yes,kubearmor.io/socket=run_containerd_containerd.sock,kubernetes.io/os=linux   23h

NAME                                   READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/kubearmor-controller   1/1     1            1           23h
deployment.apps/kubearmor-operator     1/1     1            1           25h
deployment.apps/kubearmor-relay        1/1     1            1           23h

NAME                                              DESIRED   CURRENT   READY   AGE
replicaset.apps/kubearmor-controller-754ff668bc   0         0         0       23h
replicaset.apps/kubearmor-controller-75d6976554   1         1         1       23h
replicaset.apps/kubearmor-controller-985f9d76d    0         0         0       23h
replicaset.apps/kubearmor-operator-74c5c559bd     1         1         1       25h
replicaset.apps/kubearmor-relay-5c4f88f874        1         1         1       23h
KubeArmor spins up a few different pieces:
- kubearmor DaemonSet: Hooks into the LSM (AppArmor or SELinux) on each node and enforces policies.
- kubearmor-controller: Manages Kubernetes-native policies and attaches enforcement to pods.
- kubearmor-operator: Watches your KubeArmorConfig and deploys the right resources (like the controller and DaemonSet).
- kubearmor-relay (optional): Centralizes log output via a raw socket stream you can connect to.
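Before that DaemonSet can schedule anywhere, the operator detects each node’s enforcer, runtime, and related capabilities and labels the node accordingly (those are the same kubearmor.io labels you can see in the DaemonSet’s node selector above). A quick way to eyeball what was detected on your nodes:
kubectl get nodes --show-labels | tr ',' '\n' | grep kubearmor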
Additional Components at a Glance
A quick note on some of the other KubeArmor components (which you likely won’t need to interact with):
Services
There are a couple services deployed to support relay and controller communication:
- kubearmor: Exposes the relay server (port 32767)
- kubearmor-controller-webhook-service: Handles policy admission if enabled
Roles and ClusterRoles
KubeArmor installs several roles and bindings following typical Kubernetes RBAC practices.
You can view them with:
kubectl get roles,clusterroles -n kubearmor | grep kubearmor
CRDs
KubeArmor uses its own Custom Resource Definitions (CRDs) to express policies and configurations.
View them with:
kubectl get crds | grep kubearmor
You'll see:
kubearmorpolicies.security.kubearmor.com
kubearmorhostpolicies.security.kubearmor.com
kubearmorclusterpolicies.security.kubearmor.com
These CRDs form the core of how you define what should be allowed, denied, or audited inside the cluster. In practice, it’s the kubearmorpolicies resource — or ksp for short — that you’ll generally work with.
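Since these are ordinary CRDs, you can explore them with standard tooling. For example, list policies across all namespaces, or pull the policy schema straight from the cluster (this assumes the CRD publishes a structural OpenAPI schema, which is what lets kubectl explain work):
kubectl get ksp -A
kubectl explain kubearmorpolicy.spec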
Tuning KubeArmor
You can tweak some of the defaults by editing the KubeArmorConfig object:
kubectl edit kubearmorconfig kubearmorconfig-default -n kubearmor
For example:
- Set defaultFilePosture: block to change file access violations from audit-only to block. Note this gets overridden by the action in the KubeArmor policy.
- Enable readable logs for local testing with enableStdOutLogs: true and enableStdOutAlerts: true.
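For reference, here’s a minimal sketch of how those settings sit inside the KubeArmorConfig spec. The field names mirror the sample config applied earlier, but treat the exact shape as an assumption and verify it against the CRD in your cluster:
apiVersion: operator.kubearmor.com/v1
kind: KubeArmorConfig
metadata:
  name: kubearmorconfig-default
  namespace: kubearmor
spec:
  defaultFilePosture: block   # audit or block; per-policy actions still win
  enableStdOutLogs: true
  enableStdOutAlerts: true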
Quick Tip
If you’re running the kubearmor-relay, you can test streaming logs with port forwarding and nc:
kubectl port-forward svc/kubearmor -n kubearmor 8080:32767
nc localhost 8080
Just note: it’s not an HTTP API — don’t expect pretty JSON here. Trigger a policy violation and you'll see raw logs in real time. I didn't find a need to use it.
Inspecting KubeArmor-Generated AppArmor Profiles
Once KubeArmor is running and you've applied at least one policy, you’ll start seeing AppArmor profiles show up under /etc/apparmor.d/. Note that KubeArmor does not confine existing pods by default — only pods created after it’s running get profiles.
Each profile is auto-generated and tied to a specific pod and container, using a naming convention like:
kubearmor-<namespace>-<pod>-<container>
Viewing Profiles
To list the loaded AppArmor profiles:
sudo aa-status
Example output:
53 profiles are in enforce mode.
/usr/bin/man
kubearmor-default-flask-app-flask
kubearmor-kube-system-coredns-coredns
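Those same profiles also live on disk, so listing the directory on a node is a quick way to see just the KubeArmor-managed ones:
ls /etc/apparmor.d/ | grep '^kubearmor-'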
To view the contents of a specific profile, read it like any other AppArmor profile:
sudo cat /etc/apparmor.d/kubearmor-default-flask-app-flask
These profiles reflect the rules you've written in your KubeArmor policy — file paths, capabilities, and more. They’re no different from the AppArmor profiles we created by hand previously, but clearly this is much easier.
To test this out, we’ll create a policy similar to what we did in Part 1.
A Practical KubeArmor Example
We’ll create a policy that blocks any container with the label app: tmp-writer from writing to the /tmp directory.
Step 1: Create a KubeArmor Policy
Using KubeArmor, let’s apply the same idea as Part 1 — blocking a pod from writing to /tmp.
Policy:
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-write-tmp
  namespace: default
spec:
  tags: ["demo", "tmp-write"] # totally optional
  selector:
    matchLabels:
      app: tmp-writer
  file:
    matchDirectories:
      - dir: /tmp/
        recursive: true
        readOnly: true
  action: Block
About This Policy
- Matches the same /tmp write-block behavior we used in the raw AppArmor profile.
- Scoped to any pod in the default namespace with the label app: tmp-writer.
- Simple, readable, and easy to test.
This process demonstrates how KubeArmor moves enforcement out of raw AppArmor files and manual Kubernetes pod specs. We can now use Kubernetes-native workflows to define security behavior once — as code — and let the platform handle the rollout. No more node-by-node updates. No more drift. Just consistent enforcement across the cluster. Nice.
What Got Created?
After applying the policy to block /tmp writes for our tmp-writer pods, you can view the KubeArmorPolicy via:
kubectl get ksp
Expected Output:
NAME              AGE
block-write-tmp   64s
You can subsequently describe it and you'll see that it matches what we just applied:
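kubectl describe ksp block-write-tmp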
What The AppArmor?
After applying the policy to block /tmp writes, nothing will actually happen on the node yet. It is not until we fire up matching pods that we will start seeing these new profiles. This is how we’d grab our profile, but nothing should be returned at this point:
sudo aa-status | grep tmp-writer
Step 2: Create a Pod spec
Create a pod spec under tmp-writer.yaml. We’ve stripped down the pod spec — no securityContext required. One less thing to worry about.
apiVersion: v1
kind: Pod
metadata:
  name: tmp-writer
  labels:
    app: tmp-writer
spec:
  containers:
    - name: tmp-writer
      image: busybox
      command: ["/bin/sh", "-c"]
      args: ["echo 'test' > /tmp/test.txt; sleep 60"]
  restartPolicy: Never
Create our pod as usual.
kubectl apply -f tmp-writer.yaml
What The AppArmor?
After creating the pod we should now see the AppArmor profile. The AppArmor profile’s name and format have no direct connection to the KubeArmor Policy, but the contents of the profile will reflect what is matched on the KubeArmor Policy based on label selectors. The actual AppArmor profile for our pod would follow this convention:
- kubearmor: prefix used for all profiles KubeArmor generates
- default: the namespace your pod is running in
- tmp-writer: the pod name
- tmp-writer (again): the container name inside the pod
Thus to view the profile you'll use the following:
sudo cat /etc/apparmor.d/kubearmor-default-tmp-writer-tmp-writer
Expected Output:
## == Managed by KubeArmor == ##
#include <tunables/global>

## == Dispatcher profile START == ##
profile kubearmor-default-tmp-writer-tmp-writer flags=(attach_disconnected,mediate_deleted) {
  ## == PRE START == ##
  #include <abstractions/base>
  file,
  network,
  capability,
  ## == PRE END == ##

  ## == File/Dir START == ##
  deny /tmp klw,
  ## == File/Dir END == ##

  ## == DISPATCHER START == ##
  ## == DISPATCHER END == ##
}
Now we can see a deny /tmp klw rule (k = lock, l = link, w = write), which blocks writes to /tmp like we expected. This confirms how KubeArmor translates your high-level YAML policy into an actual LSM profile behind the scenes. Even easier than using aa-genprof.
Step 3: Test the Profile
Since we've already launched our pod, we can just check the logs to make sure it worked:
kubectl logs tmp-writer
Expected Output:
/bin/sh: can't create /tmp/test.txt: Permission denied
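You can also poke at it interactively. While the container is still inside its 60-second sleep window, exec in and try another write; it should fail the same way, while reads still succeed (the policy’s readOnly: true only blocks write-style operations):
kubectl exec tmp-writer -- sh -c 'echo again > /tmp/again.txt'
kubectl exec tmp-writer -- ls /tmp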
Presto. We’ve now leveraged KubeArmor with Kubernetes to block certain behaviors before they ever occur. Sound familiar? That’s because we accomplished the same thing we did in Part 1 — only this time, with KubeArmor doing the heavy lifting.
Wrap Up
KubeArmor gives us an elegant way to scale Linux Security Module enforcement inside Kubernetes — no manual profile writing, no node-by-node updates. We define a policy, and KubeArmor handles the rest — generating AppArmor profiles and applying them to the right pods.
This is all about integrating enforcement directly into Kubernetes workflows. Security becomes something we can manage alongside everything else: through CRDs, YAML, and cluster-native controls.
In Part 3, we’ll take a broader look at KubeArmor considerations. We'll cover whitelisting, tuning, policy organization, and more! Then we'll conclude with some thoughts on the pros and cons of KubeArmor.