Attackers Love Your YAML: Static Kubernetes Security Analysis for DevSecOps

Rushikesh Patil
15 min read

Kubernetes has become the de facto container orchestration platform, powering critical cloud workloads across industries. However, its flexibility comes with risk: if Kubernetes manifests (YAML files defining Pods, Deployments, Services, etc.) are written insecurely, entire clusters can be compromised. In fact, a survey by Red Hat found that 53% of Kubernetes incidents stemmed from misconfigurations, and a recent study identified 1,051 security misconfigurations in just 2,039 open-source manifests. To mitigate these dangers, static analysis of manifests (scanning YAML files before deployment) is increasingly important. By catching problems early, in code review or CI, teams can “shift left” on security and avoid costly breaches. This article offers a deep dive into common dangerous misconfigurations in Kubernetes manifests, how they have been exploited in real-world incidents, and how static analysis tools and best practices can detect and fix them.

Why Manifest Misconfigurations Matter

Kubernetes clusters often span multiple teams and environments, making configuration drift and errors almost inevitable. Unlike code bugs, misconfigurations frequently bypass compile-time checks and surface only at runtime, if policies catch them at all. A recent empirical study confirmed that Kubernetes manifests routinely contain security flaws. For example, missing resource limits or permissive container settings can turn a benign workload into a cluster compromise risk. In one notorious case, attackers scanned the internet for unprotected clusters, found hundreds with anonymous access enabled or insecure API endpoints, and deployed cryptominers and backdoors on them. Another case (“IngressNightmare”, CVE-2025-1974) exploited the default, unauthenticated admission webhook of the NGINX Ingress Controller to achieve complete cluster takeover. These incidents show that misconfigurations in manifests or cluster settings invite adversaries.

Static analysis helps detect these problems automatically before deployment. A comprehensive study even created a tool called SLI-KUBE (Security Linter for Kubernetes) to scan manifests, finding 11 distinct misconfiguration categories. The authors conclude that “Kubernetes manifests [often] include security misconfigurations, which necessitates security-focused code reviews and application of static analysis”. In short, automated scanning of manifest YAML is essential to enforce security best practices early in the CI/CD pipeline.

Common Dangerous Misconfigurations:

Based on research and field experience, several misconfiguration patterns recur in manifests. Below we outline the major categories, the risks they entail, and examples from studies or incidents:

  • Missing Resource Limits (CPU/Memory):

    Not specifying resources.requests or resources.limits in Pod specs can allow a container to exhaust node resources, causing Denial-of-Service (DoS). Without limits, a runaway pod (or a malicious one) can consume all CPU/memory on a node, starving other services. CIS Kubernetes benchmarks recommend always setting resource limits.

      # Missing Resource Limits (CPU/Memory)
      apiVersion: v1
      kind: Pod
      metadata:
        name: no-resource-limits
      spec:
        containers:
          - name: unbounded-container
            image: nginx
            # No resources defined
    
      # Fixed: Resource Limits Set (CPU/Memory)
      apiVersion: v1
      kind: Pod
      metadata:
        name: with-resource-limits
      spec:
        containers:
          - name: bounded-container
            image: nginx
            resources:
              requests:
                memory: "128Mi"
                cpu: "250m"
              limits:
                memory: "256Mi"
                cpu: "500m"
    
  • No SecurityContext (RunAsUser/Non-Root):

    If a container's securityContext is omitted or not hardened, it defaults to running as root with all privileges. The studied manifests often lacked any securityContext settings. With no user isolation, any container vulnerability gives the attacker root inside the container (and possibly on the host). A hardened manifest should set runAsNonRoot: true, readOnlyRootFilesystem: true, and other restrictions. The absence of a securityContext gives malicious users an opportunity to gain a foothold in the Kubernetes cluster.

      # No SecurityContext (RunAsUser/Non-Root)
      apiVersion: v1
      kind: Pod
      metadata:
        name: no-security-context
      spec:
        containers:
          - name: root-container
            image: nginx
    
      # Fixed: SecurityContext (RunAsUser/Non-Root)
      apiVersion: v1
      kind: Pod
      metadata:
        name: secure-context
      spec:
        containers:
          - name: non-root-container
            image: nginx
            securityContext:
              runAsNonRoot: true
              runAsUser: 1000
              readOnlyRootFilesystem: true
    
  • Host Namespace Sharing (hostNetwork, hostPID, hostIPC):

    Enabling hostNetwork: true attaches the Pod directly to the host's network namespace. The cluster study explains that a pod on the host network can see all host network interfaces and traffic, allowing attackers to sniff or intercept it. Similarly, hostPID: true lets the pod share the host PID namespace. Attackers can then list and target host processes, potentially using tools like nsenter to jump to the host's PID 1 and spawn a root shell. Likewise, hostIPC: true gives the pod access to the host's inter-process communication (shared memory, message queues, etc.), breaking isolation. All of these flags drastically weaken container boundaries, and research finds that enabling any of them puts workloads at grave risk. In practice, automated scanning should flag any pod using hostNetwork, hostPID, or hostIPC as highly suspicious.

      # Host Namespace Sharing (hostNetwork, hostPID, hostIPC)
      apiVersion: v1
      kind: Pod
      metadata:
        name: host-namespace
      spec:
        hostNetwork: true
        hostPID: true
        hostIPC: true
        containers:
          - name: insecure-container
            image: nginx
    
      ---
    
      # Fixed: No Host Namespace Sharing
      apiVersion: v1
      kind: Pod
      metadata:
        name: isolated-namespaces
      spec:
        hostNetwork: false
        hostPID: false
        hostIPC: false
        containers:
          - name: safe-container
            image: nginx
    

  • Privileged Containers and Excessive Capabilities:

    Setting securityContext.privileged: true effectively disables most container restrictions, giving the container near-root access to the host. A container with privileged: true can mount host devices (e.g. /dev), load kernel modules, or use cgroup exploits to escape. As one expert put it, privileged: true is “the most dangerous flag in the history of computing”. Even without full privileged mode, adding powerful Linux capabilities (like CAP_SYS_ADMIN or CAP_SYS_MODULE) to a container lets it perform host-level admin actions. For example, CAP_SYS_ADMIN is so powerful it can facilitate container breakouts. Static checks must warn on any privileged securityContext or on capabilities.add entries such as SYS_ADMIN.

      # Privileged Containers and Excessive Capabilities
      apiVersion: v1
      kind: Pod
      metadata:
        name: privileged-pod
      spec:
        containers:
          - name: dangerous
            image: nginx
            securityContext:
              privileged: true
              capabilities:
                add:
                  - SYS_ADMIN
    
      ---
    
      # Fixed: No Privileged Mode or Excessive Capabilities
      apiVersion: v1
      kind: Pod
      metadata:
        name: restricted-container
      spec:
        containers:
          - name: least-privilege
            image: nginx
            securityContext:
              privileged: false
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                  - ALL
    

  • allowPrivilegeEscalation Enabled:

    By default, Linux may allow a process to gain extra privileges (e.g. via SUID binaries). Kubernetes' allowPrivilegeEscalation flag should be set to false to block this. If left true, a root process inside a container could spawn a child that escapes or gains host rights. The study explicitly advises always setting allowPrivilegeEscalation: false for pods. This is low-hanging fruit: a simple static rule can enforce it.

      # allowPrivilegeEscalation Enabled
      apiVersion: v1
      kind: Pod
      metadata:
        name: allow-priv-esc
      spec:
        containers:
          - name: escalatable
            image: nginx
            securityContext:
              allowPrivilegeEscalation: true
    
      ---
    
      # Fixed: allowPrivilegeEscalation Disabled
      apiVersion: v1
      kind: Pod
      metadata:
        name: no-privilege-escalation
      spec:
        containers:
          - name: secure-process
            image: nginx
            securityContext:
              allowPrivilegeEscalation: false
    
  • HostPath/Docker Socket Mounts:

    Mounting host paths (especially /, /etc, or /var/run/docker.sock) into a container breaches host integrity. For instance, including - mountPath: /host with hostPath: { path: / } in a Pod gives the pod read/write access to the entire host filesystem. The Bishop Fox “Bad Pod” example shows how a pod that mounts the host (with privileged and other flags) can chroot into the host and become root. Similarly, exposing Docker's daemon socket (/var/run/docker.sock) inside a container is critically dangerous: any process can then talk to the Docker daemon and spin up new privileged containers or steal other containers' data. The research paper notes that mounting /var/run/docker.sock allows an attacker to create containers or images at will. Static analysis must flag any use of hostPath (especially to /) or any volumeMount of docker.sock.

      # HostPath/Docker Socket Mounts
      apiVersion: v1
      kind: Pod
      metadata:
        name: hostpath-docker-sock
      spec:
        containers:
          - name: host-mounted
            image: nginx
            volumeMounts:
              - mountPath: /host
                name: host-root
              - mountPath: /var/run/docker.sock
                name: docker-sock
        volumes:
          - name: host-root
            hostPath:
              path: /
          - name: docker-sock
            hostPath:
              path: /var/run/docker.sock
    
      # Fixed: No HostPath or Docker Socket Mounts
      apiVersion: v1
      kind: Pod
      metadata:
        name: no-hostpath
      spec:
        containers:
          - name: safe-volumes
            image: nginx
            volumeMounts: []
        volumes: []
    
  • Hard-coded Secrets:

    Embedding plain-text passwords, API keys, or certificates directly in manifests (e.g. as env: { name: DB_PASSWORD, value: "mypassword" }) is a grave security flaw. The study found numerous hard-coded secrets and notes this is a top-25 CWE weakness. Real breaches have occurred from this: e.g. Uber’s 2019 breach and other high-profile leaks were traced to secrets in code. Static checks should scan YAML for fields like password, token, auth, secret, etc. and treat them suspiciously. Use of Kubernetes Secrets or external vaults is strongly recommended instead.

      # Hard-coded Secrets
      apiVersion: v1
      kind: Pod
      metadata:
        name: hardcoded-secrets
      spec:
        containers:
          - name: insecure
            image: nginx
            env:
              - name: DB_PASSWORD
                value: "SuperSecret123"
    
      # Fixed: No Hard-coded Secrets (Using Kubernetes Secrets)
      apiVersion: v1
      kind: Pod
      metadata:
        name: secret-injection
      spec:
        containers:
          - name: secure-app
            image: nginx
            env:
              - name: DB_PASSWORD
                valueFrom:
                  secretKeyRef:
                    name: db-secrets
                    key: password
    
  • Insecure HTTP (No TLS):

    Any manifest that communicates with an external service over plain HTTP (e.g. url: "http://elasticsearch:9200") risks interception. The study highlights that using http:// URLs or disabling HTTPS in Services introduces man-in-the-middle vulnerabilities. All cluster traffic (especially between services, or to the API) should use SSL/TLS. Static analysis can look for http:// references in container configs and alert on them.

      # Insecure HTTP (No TLS)
      apiVersion: v1
      kind: Pod
      metadata:
        name: http-only-connection
      spec:
        containers:
          - name: api-caller
            image: curlimages/curl
            command: ["curl", "http://internal-api:8080"]
    
      ---
    
      # Fixed: TLS/HTTPS Only Communications
      apiVersion: v1
      kind: Pod
      metadata:
        name: secure-https-client
      spec:
        containers:
          - name: api-client
            image: curlimages/curl
            command: ["curl", "https://internal-api.company.local"]
    
  • Cluster Component Misconfigurations (Contextual):

    While not manifest-specific, some Kubernetes security issues arise from core component settings (API server, etcd, kubelet, etc.). For example, leaving the API server’s --anonymous-auth flag true (its default in older versions) can allow anyone to query cluster state. Likewise, exposing etcd without auth, or running outdated Kubernetes with known CVEs, is dangerous. Although static analysis tools focus on YAML manifests, developers should also audit API server flags and cluster API access policies. A full security review goes beyond manifest scanning to include these checks (e.g. ensure --anonymous-auth=false, lock down kubelets, etc.).
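As an illustration, anonymous authentication can be disabled on the API server, which on kubeadm-style clusters runs as a static Pod defined under /etc/kubernetes/manifests/ on control-plane nodes. The fragment below is a trimmed, illustrative sketch (the image tag and flag set are examples, not a complete configuration):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml -- illustrative fragment only
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
    - name: kube-apiserver
      image: registry.k8s.io/kube-apiserver:v1.30.0  # example tag
      command:
        - kube-apiserver
        - --anonymous-auth=false         # reject unauthenticated requests
        - --authorization-mode=Node,RBAC # enforce RBAC for all clients
        # ...remaining flags omitted for brevity
```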

Note - The YAML examples provided here are simplified for educational purposes. In real-world production environments, configurations may vary significantly based on workloads, cluster setup, compliance needs, and organizational policies.
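Many of the checks above reduce to a few dictionary lookups once the YAML is parsed. As a minimal sketch (not a substitute for dedicated scanners; the function and rule wording are illustrative), here is how such a linter might flag a parsed Pod manifest, e.g. the output of yaml.safe_load:

```python
SECRET_HINTS = ("password", "token", "secret", "auth")

def lint_pod(manifest: dict) -> list:
    """Flag common dangerous settings in a parsed Pod manifest (illustrative rules)."""
    findings = []
    spec = manifest.get("spec", {})

    # Host namespace sharing breaks container isolation.
    for flag in ("hostNetwork", "hostPID", "hostIPC"):
        if spec.get(flag):
            findings.append(f"{flag} is enabled")

    for c in spec.get("containers", []):
        name = c.get("name", "<unnamed>")
        sc = c.get("securityContext") or {}
        if sc.get("privileged"):
            findings.append(f"{name}: privileged container")
        # Treat a missing flag as a finding too, since the default permits escalation.
        if sc.get("allowPrivilegeEscalation", True):
            findings.append(f"{name}: allowPrivilegeEscalation not disabled")
        if not c.get("resources", {}).get("limits"):
            findings.append(f"{name}: no resource limits")
        # Heuristic for hard-coded secrets in env vars.
        for env in c.get("env", []):
            if "value" in env and any(h in env.get("name", "").lower() for h in SECRET_HINTS):
                findings.append(f"{name}: possible hard-coded secret in {env['name']}")
    return findings

# Example: the insecure patterns from this article, as a parsed manifest.
bad_pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "bad"},
    "spec": {
        "hostPID": True,
        "containers": [{
            "name": "app",
            "image": "nginx",
            "securityContext": {"privileged": True},
            "env": [{"name": "DB_PASSWORD", "value": "SuperSecret123"}],
        }],
    },
}
print(lint_pod(bad_pod))
```

Real tools implement far more rules (and parse multi-document YAML), but the principle is the same: deterministic checks over the manifest structure, run before anything reaches the cluster.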

Case Study: Consider this insecure Pod spec from a recent blog:

apiVersion: v1
kind: Pod
spec:
  hostPID: true
  hostIPC: true
  hostUsers: true
  hostNetwork: true
  containers:
  - name: nginx
    image: nginx:1.14.2
    securityContext:
      privileged: true
    volumeMounts:
      - mountPath: /host
        name: noderoot
  volumes:
  - name: noderoot
    hostPath:
      path: /

This pod shares all host namespaces (hostPID, hostIPC, hostNetwork, etc.), runs privileged, and mounts the host’s root filesystem. As Picus Security explains, it “grants the container elevated privileges, essentially giving it root access to the host,” and the hostPath mount “allows the pod to access and modify any file on the host”. In practice, an attacker with this pod could easily spawn a host shell and control the entire cluster (as the Bishop Fox “Bad Pod” exploit demonstrates).

Real-World Incidents:

Several recent incidents underscore how manifest misconfigurations are exploited -

  • Anonymous Admin Access (Aqua Nautilus, 2023):

    Aqua Security researchers found 350+ open Kubernetes API servers, often left exposed by misconfiguration. In hundreds of cases, organizations had accidentally granted the system:anonymous user arbitrary privileges. By default the “anonymous user has no permissions,” but the researchers observed clusters in the wild binding the anonymous user to admin roles. This allowed unauthenticated attackers to issue destructive API calls. The result was widespread compromise, with cryptomining and backdoors deployed on 60% of breached clusters. Mitigation: never give anonymous or system accounts extra privileges, and set --anonymous-auth=false.

  • IngressNightmare (CVE-2025-1974):

    A critical vulnerability in the NGINX Ingress Controller allowed any pod in the cluster to call its admission webhook without authentication, gaining full cluster control. This was not a manifest bug per se, but it highlights how an insecure setup (default Ingress config) can be deadly. Rapid patching and disabling unauthenticated webhooks was urgent advice.

  • Publicly Exposed Clusters (Aqua Nautilus, 2023):

    In the Aqua study, many clusters had the kubectl proxy or unsecured ports (8001, 8080) exposed. Attackers abused these to reach the API. Again, this stems from misconfigured cluster-internal services.

  • Credential Leaks:

    Attackers frequently find Kubernetes API tokens or kubeconfigs accidentally committed or exposed. For example, if a manifest or ConfigMap contains a plaintext password, an automated spider might grab it and use it to access the cluster. Static analysis can’t catch all repository leaks, but it can prevent obvious cases (e.g. plain env.value fields).

  • Privilege Escalation by Design:

    Bishop Fox's research demonstrates how everyday misconfigurations (privileged containers, host namespaces, hostPath) can be chained into full root on the node. Their “Bad Pods” walkthrough shows that even if only one of several flags is mis-set, a container can often still break out or escalate privileges (e.g. privileged: true + hostPID: true). The takeaway is that any overly-permissive PodSpec is a ticking time bomb.

These cases illustrate that in practice, attackers exploit exactly the misconfigurations that static analysis aims to prevent. By learning from such incidents, we can shape our checks and policies.

Static Analysis and Tools:

Thankfully, many tools can catch misconfigurations in manifest YAML before deployment. Static analysis (linting) of Kubernetes resources is now a standard part of DevSecOps:

  • KubeLinter (StackRox / Red Hat):

    An open-source CLI tool that scans YAML/Helm for misconfigs. It includes checks for security and production readiness, such as ensuring resource limits, checking for privileged: true, detecting hostNetwork usage, etc. According to its docs, KubeLinter has 19 built-in checks and allows custom rules.

  • Checkov (Bridgecrew):

    Originally for Terraform, Checkov also supports Kubernetes. It uses graph-based analysis to find misconfigs. It can detect high-risk settings (privileged, allowPrivilegeEscalation, hostPath mounts, etc.) and suggests fixes.

  • Kubescape (ARMO):

    (Mentioned in ARMO blog) Kubescape is a CNAPP tool that can check manifests against the NSA/CISA and MITRE ATT&CK Kubernetes guidelines. It flags common misconfigs and can run in CI/CD pipelines.

  • Kubesec:

    A simple scanner that gives a security score for Kubernetes resources. It catches things like running as root or enabling host networking.

  • Polaris (Fairwinds):

    Primarily for readiness, but also scans for some security issues (missing limits, host namespaces, podSecurityContext, etc.).

  • Datree, Snyk, etc.:

    Many IaC scanners (Datree, Snyk, KICS, etc.) have Kubernetes support and can enforce policies like “no privileged containers” or “only allow-approved registries.”

  • Custom Linters/Policies:

    Some teams use policy-as-code (e.g. OPA/Gatekeeper or Kyverno) to enforce guardrails. For example, a Gatekeeper policy could forbid any new Deployment from having hostNetwork: true or privileged: true. This goes beyond one-time scanning, by enforcing rules at admission time.
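As a sketch of what such a guardrail can look like, here is a Kyverno-style policy modeled on Kyverno's published sample policies (field values and anchors may need adjusting for your Kyverno version):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-host-namespaces-and-privileged
spec:
  validationFailureAction: Enforce   # reject violating resources at admission time
  rules:
    - name: block-host-namespaces
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Host namespaces and privileged mode are not allowed."
        pattern:
          spec:
            =(hostNetwork): "false"  # =() anchor: if the field is present, it must match
            =(hostPID): "false"
            =(hostIPC): "false"
            containers:
              - =(securityContext):
                  =(privileged): "false"
```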

A best practice is to integrate these tools into CI pipelines or GitOps workflows, so that each pull request with Kubernetes YAML is automatically linted. Even GitHub Actions exist for KubeLinter and Checkov. Many of these tools incorporate the CIS Kubernetes Benchmark and OWASP guidelines.
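A sketch of such a pipeline step, as a GitHub Actions workflow using the KubeLinter action (the action's exact inputs may differ; consult its README):

```yaml
# .github/workflows/kube-lint.yml -- illustrative sketch
name: Lint Kubernetes manifests
on: [pull_request]
jobs:
  kube-linter:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Scan manifests with KubeLinter
        uses: stackrox/kube-linter-action@v1
        with:
          directory: manifests/   # hypothetical path to your YAML
```

With this in place, every pull request that touches the manifests directory fails CI if a lint check fires, forcing the fix before merge.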

For example, the academic study compared static analysis tools and found that only a specialized tool (SLI-KUBE) detects all 11 categories, though popular tools catch most of them. Incorporating multiple scanners can help: e.g. KubeLinter for host flags, Checkov for secrets, Polaris for limits, etc.

In summary, use automated scanners as your first defense. Don’t rely on manual reviews alone. As one researcher concludes, Kubernetes manifest scanning “necessitates security-focused code reviews and static analysis”.

Mitigation and Best Practices:

Beyond tooling, adhere to the principle of least privilege in every manifest. The following guidelines, compiled from research findings and best practices, help close the gaps:

  • Host namespaces enabled (hostNetwork, hostPID, hostIPC)
    Impact: Breaks container isolation; allows sniffing traffic or inspecting/terminating host processes. Attackers can then jump to the host.
    Fix: Disable these flags. Forbid hostNetwork: true, hostPID: true, and hostIPC: true in Pod specs. Use Pod Security Policies or Pod Security Admission to enforce them as false.

  • Privileged containers or CAP_SYS_ADMIN
    Impact: Grants the container near-root powers on the host and enables kernel/module-level escapes.
    Fix: Avoid securityContext.privileged: true. Drop dangerous capabilities (e.g. securityContext.capabilities.drop: ["ALL"]) and never add CAP_SYS_ADMIN or CAP_SYS_MODULE. Use least privilege.

  • allowPrivilegeEscalation: true
    Impact: Permits child processes to escalate privileges, bypassing securityContext restrictions.
    Fix: Set allowPrivilegeEscalation: false on all containers.

  • Mounting the Docker socket (/var/run/docker.sock)
    Impact: Exposes the Docker daemon; an attacker can spin up new containers or access host files.
    Fix: Avoid mounting the Docker socket in Pods. If absolutely needed (e.g. for certain CI tasks), mount it read-only and front it with an authenticated, TLS-encrypted proxy.

  • Mounting hostPath (/)
    Impact: The pod can read/write the entire host filesystem. As shown by Bishop Fox, an attacker can chroot into the host and gain root.
    Fix: Disallow broad hostPath mounts. Only allow specific, safe paths. Apply PSP/Gatekeeper policies to block mounts of path: "/" or critical directories.

  • Hard-coded secrets (passwords, tokens)
    Impact: Exposes credentials; attackers (or Git-scanning bots) easily extract them and access the cluster.
    Fix: Use Kubernetes Secrets or an external vault (e.g. HashiCorp Vault, Bitnami Sealed Secrets) instead of plain fields. Do not commit real credentials in YAML.

  • Missing resource limits
    Impact: A pod can consume unlimited CPU/memory and trigger node OOM or CPU hogging (DoS).
    Fix: Always specify resources.requests and resources.limits for CPU and memory on each container. Enforce via admission control or CI checks.

  • No Pod securityContext
    Impact: Omits default security hardening (e.g. runAsNonRoot, fsGroup, seccomp), making pods run as root by default.
    Fix: Provide a sensible securityContext (e.g. runAsNonRoot: true, runAsUser: 1000, readOnlyRootFilesystem: true). Use Pod Security Standards (e.g. the restricted profile) or policies to require a baseline context.

  • Insecure HTTP URLs
    Impact: Services communicating over plaintext HTTP can be eavesdropped on or MITM'd.
    Fix: Require HTTPS/TLS for all in-cluster and external connections (use https:// URIs). Enable TLS in all Ingress, API, and service configs. Scan for http:// in YAML.

  • Anonymous access / kubeconfig leaks
    Impact: Committing kubeconfigs or allowing anonymous access can give attackers direct API/admin access (as seen in the Aqua reports).
    Fix: Never bind system:anonymous to any privileged role. Set --anonymous-auth=false on the API server. Keep kubeconfigs and tokens out of source control (use .gitignore, short-lived tokens).

By treating the summary above as a checklist during code reviews or in gatekeeper policies, teams can cover the most critical bases. For example, the Kubernetes CIS Benchmark itself codifies many of these rules (e.g. disallowing privileged containers, enforcing resource quotas). Regular audits using security scanners (e.g. Kubescape, kube-bench) can verify compliance.

Finally, remember that policies should be enforced continuously. Static analysis is great for catching new or changed manifests, but clusters evolve. Enable runtime controls (NetworkPolicies, audit logs, identity management) and keep Kubernetes itself up-to-date to mitigate vulnerabilities. The Picus report even recommends auditing logs and disabling anonymous access as part of a holistic defense.
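For instance, a default-deny NetworkPolicy is a common runtime baseline; the namespace below is a hypothetical example:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: production   # hypothetical namespace
spec:
  podSelector: {}          # selects every pod in the namespace
  policyTypes:
    - Ingress
    - Egress               # deny all traffic unless another policy allows it
```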

Conclusion:

Misconfigured Kubernetes manifests are a pervasive and insidious threat. As container orchestration grows more widespread, so do opportunities for attackers to exploit simple YAML mistakes. The good news is that these vulnerabilities are preventable. By adopting a “manifest static analysis” mindset – using automated scanners, following security best practices, and keeping up with threat intelligence – teams can dramatically reduce risk.

From a procedural standpoint, integrate tools like KubeLinter, Checkov, Kubescape, kube-score, or policy engines (Gatekeeper, Kyverno) early in your CI/CD pipeline. Keep a sharp eye on the categories above, and use the summary checklist to validate any Kubernetes resource before it hits production. In doing so, you ensure your cluster is not the low-hanging fruit for the next attacker, but a well-fortified foundation for your workloads.

References:

The recommendations and examples here are drawn from recent academic studies and security blogs:

vtechworks.lib.vt.edu

picussecurity.com

csoonline.com

bishopfox.com

medium.com

https://akondrahman.github.io/files/papers/tosem-k8s.pdf
