Kubernetes Runtime Enforcement with KubeArmor

Table of contents
- 1. KubeArmor in the Real World: What Actually Gets Enforced
- 2. When Policy Scope Breaks Your Pods (and What to Do About It)
- 3. Profiling What Actually Happens (Before You Block It)
- 4. Scaling Policy Management: Out of the Lab
- 5. Odds and Ends: Host Policies, Cluster-Wide Scopes, and Suggested Templates
- Wrap-Up: We Made It!

In this post, we’re finally wrapping up our story on KubeArmor — and Linux Security Modules (LSMs) more broadly.
In Part 1, we slogged through the brutal process of working with raw AppArmor: manual profiles, confusing syntax, and a general sense of despair.
In Part 2, we met KubeArmor — a proper runtime policy engine built for Kubernetes that actually makes LSMs usable. Suddenly, we could apply AppArmor and SELinux-style enforcement in Kubernetes using label selectors and YAML. That’s progress.
This final post is about what comes next: how you actually use KubeArmor.
What can you actually prevent?
Are there any gotchas to watch out for?
What are some ways you can actually create relevant policies?
How can you break out of the lab?
How do you monitor for drift or regressions — or recover when enforcement breaks something and all signs point to an AppArmor profile you wrote last week?
1. KubeArmor in the Real World: What Actually Gets Enforced
Before we get into all the cool stuff, it’s worth asking: what does KubeArmor actually enforce?
To recap: KubeArmor lets you define policies that block or audit runtime behavior at the node level — using LSMs like AppArmor or SELinux under the hood. These policies are enforced in the kernel, scoped to containers using Kubernetes metadata (labels, namespaces, selectors).
So what kinds of things can you actually control?
- File Access: Prevent reads, writes, or executions on sensitive paths like /etc/shadow, /root, or mounted secrets and config directories.
- Process Execution: Block execution of tools like sh, nc, or nmap within specific containers — especially useful for reverse shell prevention.
- Network Activity: Deny outbound connections to specific domains, IPs, or ports. Great for catching unexpected egress, like a container calling home.
For example, here’s a policy that blocks all outbound TCP traffic on port 1337 for "demo-app" pods in the default namespace:
apiVersion: security.kubearmor.com/v1
kind: KubeArmorNetworkPolicy
metadata:
  name: block-port-1337
  namespace: default
spec:
  selector:
    matchLabels:
      app: demo-app
  network:
    matchProtocols:
      - protocol: TCP
        port: 1337
  action: Block
- Capabilities: Strip dangerous Linux capabilities (e.g., CAP_SYS_ADMIN, CAP_NET_RAW) even if they weren’t dropped at container launch.
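To make that concrete, here’s a minimal sketch of a capabilities rule using KubeArmor’s matchCapabilities field (the demo-app label is just an illustrative selector, not from the demo above):
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-net-raw
  namespace: default
spec:
  selector:
    matchLabels:
      app: demo-app   # illustrative label; match your own workload
  capabilities:
    matchCapabilities:
      - capability: net_raw   # deny CAP_NET_RAW (raw sockets, packet crafting)
  action: Block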
It’s important to understand that KubeArmor doesn’t replace eBPF-based detection tools like Falco. It doesn’t inspect syscalls for anomalies — it enforces explicit rules you define.
Enforcement only kicks in if:
- The pod matches the selector in the policy,
- The behavior matches a defined file, process, network, or capabilities rule,
- And the action is set to Block.
If all those conditions are met, the offending syscall gets denied before it happens — and an event is emitted by the kubearmor-relay service.
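If you just want to see those events without wiring up a gRPC consumer, the karmor CLI (which we’ll cover properly later) can stream them for you; a quick sketch:
# Stream policy events (Block/Audit matches) from kubearmor-relay;
# karmor sets up the port-forward to the relay service for you.
karmor logs

# JSON output is handier if you're piping into jq or a log shipper.
karmor logs --json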
2. When Policy Scope Breaks Your Pods (and What to Do About It)
I installed KubeArmor. Everything seemed fine — until one of my privileged pods stopped behaving normally. Suddenly, it couldn’t kill processes on the host. It could see them — ps worked — but signals like kill -9 silently failed. There were no Block rules, no visible errors. Just... nothing.
This is where you need to learn a new AppArmor concept on the fly.
Gotcha: Why Can’t My Privileged Pod kill Anything Anymore?
This one took longer to track down than it should’ve — so let’s go deep.
After enabling KubeArmor, one of our privileged pods running with hostPID: true suddenly lost the ability to send signals (like kill -9) to processes outside the container. It could still see them — but trying to interact with anything on the host (or even other pods) just failed.
There were no Block rules in place. So I took a step back and looked at what KubeArmor was doing without me explicitly adding a policy. It had created a managed AppArmor profile for my pod, one that didn't actually block any capabilities, since it recognized the pod as privileged.
## == Managed by KubeArmor == ##
#include <tunables/global>
## == Dispatcher profile START == ##
profile kubearmor-... flags=(attach_disconnected,mediate_deleted) {
  ## == PRE START == ##
  #include <abstractions/base>
  ## == For privileged workloads == ##
  umount,
  mount,
  signal,
  unix,
  ptrace,
  dbus,
  file,
  network,
  capability,
  ## == PRE END == ##
  ## == File/Dir START == ##
  ## == File/Dir END == ##
  ## == DISPATCHER START == ##
  ## == DISPATCHER END == ##
  ## == Network START == ##
  ## == Network END == ##
  ## == Capabilities START == ##
  ## == Capabilities END == ##
  ## == Native Policy START == ##
  ## == Native Policy END == ##
  ## == POST START == ##
  /lib/x86_64-linux-gnu/{*,**} rm,
  ## == POST END == ##
}
## == Dispatcher profile END == ##
## == FromSource per binary profiles START == ##
## == FromSource per binary profiles END == ##
## == Templates section START == ##
What Was Actually Happening
Even with a permissive profile, the container was running inside an AppArmor domain. And by default, AppArmor does not allow signaling across domains — even if the profile says signal, and you’re running as root with full privileges.
In AppArmor land, "no denies" still doesn't mean "no boundaries."
Your container can signal its own processes — but not anything assigned to a different profile (including unconfined, other pods, or host-level daemons).
Confirming the Problem
From inside the container:
cat /proc/self/attr/current
Returns something like:
kubearmor-... (enforced)
Then:
cat /proc/<target-pid>/attr/current
Either fails or shows a different profile. If they don’t match, you’re sandboxed.
This Only Happens Because KubeArmor Touched the Pod
It’s worth calling out: this behavior wouldn’t happen if the pod had never been selected by KubeArmor. In many clusters, pods run with no AppArmor profile at all — which means they default to unconfined, and everything Just Works™.
But the moment KubeArmor selects a pod (via policy or auto-discovery), it assigns a profile — even if that profile is essentially unconfined. That puts the container into an AppArmor domain. And from there, the kernel enforces isolation between that pod and any other domain, including the host. In effect, simply confining the pod was enough to block cross-domain signals.
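One quick way to confirm KubeArmor has confined a pod is to look for the AppArmor annotation it manages; a rough check, with placeholder pod and namespace names:
# Expect something like:
#   container.apparmor.security.beta.kubernetes.io/<container>: localhost/kubearmor-...
kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.metadata.annotations}' | tr ',' '\n' | grep -i apparmor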
The Fix: Unconfined
As of Kubernetes 1.31, you can explicitly opt out of AppArmor using:
securityContext:
  appArmorProfile:
    type: Unconfined
This removes all domain boundaries and restores full host interaction.
Here’s what I used to patch the relevant DaemonSet:
kubectl patch daemonset my-agent -n my-namespace --type='merge' -p='{
  "spec": {
    "template": {
      "spec": {
        "containers": [
          {
            "name": "agent-container",
            "image": "your/image:tag",
            "securityContext": {
              "appArmorProfile": {
                "type": "Unconfined"
              }
            }
          }
        ]
      }
    }
  }
}'
Then restart the pod, and you're back in business.
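Once the pod has restarted, you can verify from inside it that the profile is gone (names here are the hypothetical ones from the patch above):
# Should now print "unconfined" instead of a kubearmor-... profile
kubectl exec -n my-namespace <new-agent-pod> -- cat /proc/self/attr/current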
Tip
If you’re running privileged agents, tracing tools, or any container expected to interact with the host or other workloads at a low level — don’t rely on AppArmor’s default behavior, and don’t assume a “blank” profile means full access.
Explicitly unconfine it. It’s my gift to anyone in the same situation.
3. Profiling What Actually Happens (Before You Block It)
Before you consider blocking in the real world, you need to understand what’s going on. KubeArmor supports audit mode out of the box, much like the audit or dry-run modes you’d use with admission controllers and other blocking-style tools.
Let’s say you want to control Python execution inside a pod. Here’s what an audit-only policy might look like:
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: audit-python
  namespace: default
spec:
  selector:
    matchLabels:
      app: flask-app
  process:
    matchPaths:
      - path: /usr/bin/python3
  action: Audit
Audit events are emitted to the kubearmor-relay service and can be collected via gRPC or forwarded to a log aggregator. But if you want something interactive and nicely laid out, there’s karmor.
Enter karmor profile
karmor is a handy CLI tool that gives you several capabilities for working with KubeArmor — from viewing logs to profiling workloads and managing policies. We’ll explore more of it later, but for now, let’s look at how to use it for profiling.
First, installation is easy:
curl -sfL http://get.kubearmor.io/ | sudo sh -s -- -b /usr/local/bin
Once installed, karmor profile lets you inspect runtime events at the cluster, namespace, pod, or container level. It shows four types of events:
- Process
- File
- Network
- Syscall
Each event is labeled with a result like Passed or Permission Denied, depending on how KubeArmor handles it. The interface is straightforward and fairly intuitive to navigate.
Example: Profiling a Pod
karmor profile -n default --pod flask-app-<id>
This command will:
- Watch audit events for the flask-app pod in the default namespace.
- Display file, process, network, and syscall activity in real time.
- Highlight which operations are being denied or logged by KubeArmor.
In practice, I’ve found this most useful for debugging enforcement — confirming whether something like a Python binary or shell command is being blocked. That said, there are a couple of limitations:
- The Result field often truncates the reason (e.g., just “Permission Denied”), so you may need to guess or check logs for more context.
- It doesn’t recommend fixes or suggest rules — it’s purely observational.
Still, it’s not a bad tool for profiling and validating behavior, especially when you're working in audit mode or just trying to understand what’s happening under the hood.
4. Scaling Policy Management: Out of the Lab
Once you're past the tinkering phase, KubeArmor policy management gets trickier. Writing a few handcrafted rules for a demo app is one thing — operationalizing them is another.
Here are a few things to keep in mind when taking KubeArmor to production. Starting with the obvious: KubeArmor doesn’t manage your policies for you.
Selectors Are Everything
KubeArmor policies match pods using Kubernetes labels and selectors — as expected. I'm not here to judge your labeling strategy, but if your selectors are inconsistent or ad hoc, you’ll either over-apply a policy or miss workloads entirely.
- Use clear, consistent labels like app, env, tier, or component.
- Avoid catch-all selectors like matchLabels: {} unless you’re going full chaos mode.
- Always test selector scoping in a low-risk namespace.
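As a rough illustration, a well-scoped selector pins the policy to a specific workload and environment (the labels below are hypothetical):
spec:
  selector:
    matchLabels:
      app: payments-api   # which workload
      env: prod           # which environment
      tier: backend       # which layer of the stack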
Use Audit as a Step, Not a Destination
Audit mode is great — but it’s not a detection system, and it’s definitely not a substitute for something like Falco. Think of it as a policy staging workflow:
- Start with action: Audit
- Monitor hits with karmor or your log pipeline
- Trim or fine-tune as needed
- Flip to action: Block once confident
This gives you a safer path from observability to enforcement.
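Once the audit data looks clean, flipping the action can be as simple as patching the policy in place (reusing the audit-python example; ksp is the short name for the KubeArmorPolicy CRD). In a GitOps setup you’d make the same change in Git instead and let your controller roll it out:
# Flip the earlier audit-python policy from Audit to Block
kubectl patch ksp audit-python -n default --type merge -p '{"spec":{"action":"Block"}}'

# Confirm the change took effect
kubectl get ksp audit-python -n default -o jsonpath='{.spec.action}'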
GitOps FTW
Store your KubeArmor policies in Git like any other infrastructure config. Enough said.
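If you want a concrete starting point, even a plain kustomization over a policies directory works; the layout below is just one hypothetical arrangement to sync with Argo CD or Flux:
# policies/kustomization.yaml (hypothetical repo layout, one file per policy)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - audit-python.yaml
  - block-curl.yaml
  - block-port-1337.yaml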
Managing Policy Drift
What happens if someone edits a policy manually in the cluster?
- GitOps helps enforce the desired state
- Some folks pair KubeArmor with Gatekeeper or Kyverno to enforce structure or constraints
- There’s no native drift detection — it’s on you to watch for it
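One lightweight way to watch for it is to diff the live objects against what’s in Git, for example in a scheduled CI job (the policies/ path is hypothetical):
# kubectl diff exits 1 when the live policies no longer match the repo
kubectl diff -f policies/ && echo "No drift detected"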
5. Odds and Ends: Host Policies, Cluster-Wide Scopes, and Suggested Templates
Before we wrap, let’s hit a few practical details and quirks worth knowing as you move beyond the basics of KubeArmor.
Host Policies: Yes, You Can Enforce on the Node
KubeArmor doesn’t just scope to pods — it also supports host-level enforcement via kind: KubeArmorHostPolicy. These policies target individual nodes using a nodeSelector, like in this example for the controlplane node:
apiVersion: security.kubearmor.com/v1
kind: KubeArmorHostPolicy
metadata:
  name: block-tmp
spec:
  nodeSelector:
    matchLabels:
      kubernetes.io/hostname: controlplane
  file:
    matchDirectories:
      - dir: /tmp/
        recursive: true
  action: Block
  severity: 5
Note that you will have to patch your KubeArmor DaemonSet to enable this capability.
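From what I can tell, that means passing the host-policy flag to the KubeArmor agent. A sketch of the patch, assuming the default kubearmor namespace and DaemonSet name and a non-operator install where the agent args are directly editable:
# Append the host-policy flag to the KubeArmor container's args (flag name per KubeArmor docs; verify for your version)
kubectl patch daemonset kubearmor -n kubearmor --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "-enableKubeArmorHostPolicy"}]'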
This behaves just like our earlier container example — blocking access to /tmp — but at the host level. That said, host policies feel more like a bonus feature than a core use case. So we’ll leave it there.
Cluster Policies — And a Gotcha About Scope
When you start scaling policy enforcement across environments, you’ll quickly bump into Kubernetes scoping rules. So let’s clear this up:
Namespace Policies (KubeArmorPolicy)
These policies — the kind we worked with in Part 2 — are namespace-scoped. That means they only apply to pods in the namespace where the policy is created, even if the label selectors could technically match pods in other namespaces.
If you don’t explicitly specify a namespace when applying the policy (e.g., with -n <namespace>), it’ll default to whatever your current context is set to.
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: block-curl
spec:
  selector:
    matchLabels:
      app: demo
  process:
    matchPaths:
      - path: /usr/bin/curl
  action: Block
You could have a dozen pods across namespaces all running curl, but this will only affect the ones in default if we created the policy via kubectl apply -f block-curl. If you want the same behavior in dev, prod, etc.? Copy/paste time — unless…
Cluster-Wide Policies (KubeArmorClusterPolicy)
This CRD works similarly, but it’s cluster-scoped — so it’s not tied to any single namespace. Instead of matchLabels, it uses matchExpressions, which can match pods based on both labels and namespaces.
apiVersion: security.kubearmor.com/v1
kind: KubeArmorClusterPolicy
metadata:
  name: block-curl-everywhere
spec:
  selector:
    matchExpressions:
      - key: label
        operator: In
        values:
          - app=demo
  process:
    matchPaths:
      - path: /usr/bin/curl
  action: Block
This matches any pod in the cluster with app: demo, no matter where it lives.
Suggested and Standard Policies
Not all policies need to be built from scratch. KubeArmor’s community maintains examples and a set of suggested policy templates, which can be a good starting point.
- GitHub repo – examples include sample deployments with matching policies
- KubeArmor policy templates that cover MITRE and other frameworks.
I’ll note that these are mostly quite dated, but still useful. So you could consider adapting these templates to your own workloads and applying them gradually.
What About karmor recommend?
If you’re hoping for a quick way to go from audit logs to a working policy, you might try:
karmor recommend -n default --pod flask-app-xyz
(A word of caution: initially, I didn’t limit it to a pod and it pulled everything from Docker, so watch your disk usage.)
It does output a bunch of policies — but the process is a total black box. I couldn’t find any solid documentation explaining how it actually works under the hood. And frankly, I wasn’t itching to go spelunking through the codebase.
When I ran it against my Flask app, it generated 19 separate policies:
matt@controlplane:~/kubearmor_policies/out/default-flask-app$ ls -l
total 76
-rw-rw-r-- 1 matt matt 536 Jun 20 17:57 sfmatt-flask-vuln-demo-nonroot-latest-access-ctrl-permission-mod.yaml
-rw-rw-r-- 1 matt matt 612 Jun 20 17:57 sfmatt-flask-vuln-demo-nonroot-latest-cis-commandline-warning-banner.yaml
-rw-rw-r-- 1 matt matt 821 Jun 20 17:57 sfmatt-flask-vuln-demo-nonroot-latest-cronjob-cfg.yaml
-rw-rw-r-- 1 matt matt 1157 Jun 20 17:57 sfmatt-flask-vuln-demo-nonroot-latest-crypto-miners.yaml
-rw-rw-r-- 1 matt matt 859 Jun 20 17:57 sfmatt-flask-vuln-demo-nonroot-latest-file-integrity-monitoring.yaml
-rw-rw-r-- 1 matt matt 493 Jun 20 17:57 sfmatt-flask-vuln-demo-nonroot-latest-file-system-mounts.yaml
-rw-rw-r-- 1 matt matt 573 Jun 20 17:57 sfmatt-flask-vuln-demo-nonroot-latest-impair-defense.yaml
-rw-rw-r-- 1 matt matt 610 Jun 20 17:57 sfmatt-flask-vuln-demo-nonroot-latest-k8s-client-tool-exec.yaml
-rw-rw-r-- 1 matt matt 457 Jun 20 17:57 sfmatt-flask-vuln-demo-nonroot-latest-maint-tools-access.yaml
-rw-rw-r-- 1 matt matt 575 Jun 20 17:57 sfmatt-flask-vuln-demo-nonroot-latest-network-service-scanning.yaml
-rw-rw-r-- 1 matt matt 723 Jun 20 17:57 sfmatt-flask-vuln-demo-nonroot-latest-pkg-mngr-exec.yaml
-rw-rw-r-- 1 matt matt 607 Jun 20 17:57 sfmatt-flask-vuln-demo-nonroot-latest-remote-file-copy.yaml
-rw-rw-r-- 1 matt matt 577 Jun 20 17:57 sfmatt-flask-vuln-demo-nonroot-latest-remote-services.yaml
-rw-rw-r-- 1 matt matt 668 Jun 20 17:57 sfmatt-flask-vuln-demo-nonroot-latest-system-network-env-mod.yaml
-rw-rw-r-- 1 matt matt 485 Jun 20 17:57 sfmatt-flask-vuln-demo-nonroot-latest-system-owner-discovery.yaml
-rw-rw-r-- 1 matt matt 617 Jun 20 17:57 sfmatt-flask-vuln-demo-nonroot-latest-trusted-cert-mod.yaml
-rw-rw-r-- 1 matt matt 675 Jun 20 17:57 sfmatt-flask-vuln-demo-nonroot-latest-write-etc-dir.yaml
-rw-rw-r-- 1 matt matt 443 Jun 20 17:57 sfmatt-flask-vuln-demo-nonroot-latest-write-in-shm-dir.yaml
-rw-rw-r-- 1 matt matt 505 Jun 20 17:57 sfmatt-flask-vuln-demo-nonroot-latest-write-under-dev-dir.yaml
I checked out the file integrity monitoring (FIM) one. It focused on directories like /usr/bin/ and /sbin/ — mostly binary paths. That choice sort of makes sense as a baseline, but it’s not clear why these paths in particular, or how the tool derived this behavior.
Here’s an excerpt:
apiVersion: security.kubearmor.com/v1
kind: KubeArmorPolicy
metadata:
  name: flask-app-sfmatt-flask-vuln-demo-nonroot-latest-file-integrity-monitoring
  namespace: default
spec:
  action: Block
  file:
    matchDirectories:
      - dir: /sbin/
        readOnly: true
        recursive: true
      - dir: /usr/bin/
        readOnly: true
        recursive: true
      - dir: /usr/lib/
        readOnly: true
        recursive: true
      - dir: /usr/sbin/
        readOnly: true
        recursive: true
      - dir: /bin/
        readOnly: true
        recursive: true
      - dir: /boot/
        readOnly: true
        recursive: true
  message: Detected and prevented compromise to File integrity
  selector:
    matchLabels:
      app: flask-app
  severity: 1
  tags:
    - NIST
    - NIST_800-53_AU-2
    - NIST_800-53_SI-4
    - MITRE
    - MITRE_T1036_masquerading
    - MITRE_T1565_data_manipulation
Not bad, but without transparency, it's hard to rationalize using so many out-of-the-box suggestions.
Wrap-Up: We Made It!
If you've read this whole series and gone through the code snippets — congrats. If you skimmed, I'm still glad you made it this far.
At this point, you’ve probably realized KubeArmor isn’t a tool you just toss into a cluster and forget. It’s a full-blown runtime enforcement engine — one that absolutely enforces, even if you didn’t mean it to (looking at you, default profiles).
I first heard about KubeArmor at KubeCon London. It was pitched as a Kubernetes security tool in the same conversation as Falco and Kubescape (I'll look at that mess one day). After spending the last month and a half with it, I’ve learned more than I expected — and hopefully helped you learn a bit too.
Here’s some of what stuck with me:
- LSMs rule. But implementing them in Kubernetes is a nightmare. That’s where KubeArmor comes in.
- Defaults bite. Just installing KubeArmor can apply AppArmor profiles automatically — even if you didn’t define a single policy.
- Git is required. Seriously. If your policies aren’t in version control, you’re not doing this right.
- Namespace scoping matters. Regular KubeArmorPolicy is namespace-scoped. Forget that, and your “working” policies won’t do a thing.
- Host policies exist… but whether they’re useful depends entirely on your host risk model.
- Profiling tools help. karmor profile gives real-time visibility into what’s being blocked or allowed.
- Recommendations are meh. The suggested policies are dated, and karmor recommend often feels like a hallucinating TARS spitting YAML into the void.
Of course, there’s more to explore. KubeArmor can be used more aggressively as a least-permissive access / zero-trust engine, especially if you're building hardened workloads from the start.
If there’s a theme here, it’s this:
KubeArmor doesn’t hold your hand.
It’s powerful — but takes real effort. Treat it like a security tool, not just another observability widget.
One final thought. It is a tool that has some serious potential, but it sometimes feels like an all in. That's fine at the $1-3 table at Aria, but be careful when you step up to $2-5.
Written by
Matt Brown
Working as a solutions architect while going deep on Kubernetes security — prevention-first thinking, open source tooling, and a daily rabbit hole of hands-on learning. I make the mistakes, then figure out how to fix them (eventually).