Why Is My Kubernetes Pod Stuck in Pending? A Troubleshooter’s Guide


You deploy a flawless application onto your Kubernetes cluster, refresh your dashboard, and expect to see pods springing to life. Instead, the pod status taunts you: Pending. If you’ve ever found yourself in the same boat, wondering why your pod refuses to run, this article is for you.
The many roads to Pending and their error messages
Let’s travel through the most common reasons why a Kubernetes pod might remain in the Pending state. For each pitfall, I’ll share typical error messages and actionable steps to resolve them.
1. Insufficient Resources on Cluster Nodes
Error Message:
```
FailedScheduling: 0/3 nodes are available: insufficient memory
```
Reason:
Your pod asked for more CPU or memory than any node could offer, so Kubernetes left it waiting.
Resolution:
Check resource requests and limits in your pod spec. Lower those values if feasible.
Scale your cluster by adding more nodes or upgrading existing ones.
Use `kubectl describe pod <pod-name>` and examine the Events section for details.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: app-container
    image: example/image
    resources:
      requests:
        memory: "2Gi"
        cpu: "1"
      limits:
        memory: "4Gi"
        cpu: "2"
```
Reducing the `requests` or `limits` when nodes cannot satisfy them can help the pod schedule.
2. Node Not Present or Not Ready
Error Message:
```
FailedScheduling: 0/3 nodes are available: node(s) not ready
```
Reason:
Sometimes your cluster nodes are not in a healthy or ready state, causing Kubernetes to withhold scheduling pods on them.
Resolution:
Check node status with `kubectl get nodes`; nodes not in `Ready` state need investigation.
View detailed node info with `kubectl describe node <node-name>`.
Investigate node issues such as network failure, kubelet errors, or resource exhaustion.
Restart the node or fix the underlying infrastructure issues.
Ensure your cluster autoscaler is working if you are using a cloud provider.
Nodes themselves are objects you can check via `kubectl get nodes`. You might see output like:
```
NAME    STATUS     ROLES    AGE   VERSION
node1   NotReady   <none>   10d   v1.32.1
node2   NotReady   <none>   10d   v1.32.1
```
Pods will remain Pending if only NotReady nodes exist to handle them.
3. Node Selectors, Taints, or Affinity Rules
Error Message:
```
FailedScheduling: No nodes match pod affinity/anti-affinity
```
Reason:
You set rules for which nodes can host your pod. If no node fits the bill, Kubernetes can’t schedule your pod.
Resolution:
Review your affinity, anti-affinity, taints, and tolerations. Are they too restrictive?
Relax unnecessary node constraints or update node labels to match selectors.
Use `kubectl describe node <node-name>` for insight into taints and labels.
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-node-selector
spec:
  containers:
  - name: app
    image: example/image
  nodeSelector:
    disktype: ssd
```
If no node has the label `disktype: ssd`, the pod will stay Pending.
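Taints work the other way around: a tainted node repels pods unless the pod declares a matching toleration. Here is a minimal sketch of a pod tolerating a hypothetical `dedicated=gpu:NoSchedule` taint (the key, value, and effect are illustrative; match them to the taints reported by `kubectl describe node`):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-toleration
spec:
  containers:
  - name: app
    image: example/image
  tolerations:
  - key: "dedicated"        # must match the taint key on the node
    operator: "Equal"
    value: "gpu"            # must match the taint value
    effect: "NoSchedule"    # must match the taint effect
```

Note that a toleration only permits scheduling onto the tainted node; it does not force it there. Combine it with a `nodeSelector` or affinity rule if the pod must land on that node.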
4. Missing or Unbound Persistent Volumes
Error Message:
```
FailedScheduling: PersistentVolumeClaim "my-pvc" not found
```
Reason:
Your app requests a PersistentVolume, but Kubernetes cannot find or bind to the required volume.
Resolution:
Verify that the PersistentVolumeClaim exists and is spelled correctly.
Ensure the underlying storage provider is healthy and reachable.
Use `kubectl get pvc` and `kubectl get pv` to inspect storage resources.
YAML Example:
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
If this claim is missing or unbound, pods referencing it will remain Pending.
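With dynamic provisioning, a StorageClass creates the backing volume automatically. Without one, the claim only binds if a PersistentVolume with compatible capacity and access mode already exists. A minimal static PV sketch that would satisfy the claim above (the `hostPath` location is illustrative and only suitable for single-node test clusters):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  capacity:
    storage: 10Gi          # must be >= the claim's request
  accessModes:
  - ReadWriteOnce          # must include the claim's access mode
  hostPath:
    path: /mnt/data        # illustrative; use a real storage backend in production
```

After applying this, `kubectl get pvc` should show the claim move from `Pending` to `Bound`.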
5. Advanced Scheduler Conflicts (Affinities and Priorities)
Error Messages:
FailedScheduling related to affinity or priority class.
Reason:
Complex scheduling rules or updates can create scenarios where the pod cannot find a suitable spot.
Resolution:
Review pod priorities and update the cluster configuration if necessary.
Monitor scheduling events with `kubectl get events`.
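Priority classes are defined cluster-wide and referenced by name from the pod spec. A minimal sketch (the class name and `value` here are illustrative, not cluster defaults):

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority       # illustrative name
value: 1000000              # higher value = scheduled (and preempts) first
globalDefault: false
description: "For critical workloads that may preempt lower-priority pods."
---
apiVersion: v1
kind: Pod
metadata:
  name: important-pod
spec:
  priorityClassName: high-priority   # must match an existing PriorityClass
  containers:
  - name: app
    image: example/image
```

If a pod references a `priorityClassName` that does not exist, it will be rejected or left unscheduled, so verify defined classes with `kubectl get priorityclasses`.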
How to Diagnose a Pending Pod
Sometimes, the mystery deepens. At these moments, you need the right detective tools:
Use `kubectl describe pod <pod-name>` to expose the root cause in the Events section.
Check the status and resource allocations of nodes with `kubectl describe node <node-name>`.
Audit the pod’s YAML for unsatisfiable requests.
Review cluster autoscaling and node health.
Conclusion: Turning Pending into Running
Every Pending pod holds a clue: an error message, an event, a warning somewhere in its output. By methodically inspecting specs, events, and cluster resources, you can crack the case and watch your pods leap into action. Every Pending state is a puzzle, waiting to be solved with the right approach and a little perseverance.
Written by

Muskan Agrawal
Cloud and DevOps professional with a passion for automation, containers, and cloud-native practices, committed to sharing lessons from the trenches while always seeking new challenges. Combining hands-on expertise with an open mind, I write to demystify the complexities of DevOps and grow alongside the tech community.