AWS EKS, Karpenter Real-world Issue #1: 🌐 Understanding Karpenter, PVCs, and EBS Volumes in Amazon EKS


👋 Introduction
When working with Amazon EKS and the Karpenter autoscaler, handling Persistent Volumes (PVs)—especially those backed by EBS volumes—across Availability Zones (AZs) can get tricky. In this article, we’ll explore:
How Karpenter provisions nodes across AZs
What happens when PVCs request EBS volumes
What breaks when your Pods are rescheduled to different AZs
And most importantly—how to solve these issues effectively
Whether you're new to Kubernetes storage or struggling with volume scheduling problems in production, this guide will walk you through real-world scenarios and fixes in simple language.
🧠 1. Background: How PVC and PV Work in Kubernetes
PVC (PersistentVolumeClaim) is a request by a Pod for storage.
When using dynamic provisioning (with AWS EBS), the CSI driver provisions a PV.
The PV is backed by an actual EBS volume, and it is created in the same AZ as the node (if done properly).
Important: EBS volumes are AZ-bound, meaning they can only be mounted by nodes in the same Availability Zone.
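You can see this AZ binding directly on a dynamically provisioned PV: the EBS CSI driver stamps it with a node-affinity rule pinning it to one zone. Here is a sketch of what such a PV typically looks like (the names, volume ID, and zone are illustrative, and the topology key may appear as `topology.kubernetes.io/zone` on newer driver versions):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-1234abcd                       # hypothetical; normally auto-generated
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  csi:
    driver: ebs.csi.aws.com
    volumeHandle: vol-0abc123def456789a    # illustrative EBS volume ID
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: topology.ebs.csi.aws.com/zone
              operator: In
              values:
                - us-east-1a               # the PV can only attach to nodes in this AZ
```

You can inspect a real PV with `kubectl get pv <name> -o yaml` and look at `spec.nodeAffinity`.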
⚙️ 2. Real-World Scenarios
Let’s take two real-life examples.
✅ Scenario 1: All in One AZ (Working Fine)
Karpenter provisions node-A in us-east-1a, and a Pod with a PVC is scheduled onto it.
CSI driver creates an EBS volume in us-east-1a.
Pod runs happily and mounts the volume — ✅ All good.
⚠️ Scenario 2: Node Moves to Another AZ
Original node crashes or is terminated.
Karpenter provisions a new node in us-east-1b.
Pod tries to reschedule.
But... the PVC is already bound to an EBS volume in us-east-1a.
Result: Pod remains Pending with an error like:
```
0/1 nodes are available: 1 node(s) had volume node affinity conflict.
```
🔍 3. Why This Happens
This happens because of the AZ-bound nature of EBS volumes:
Once a PVC is bound to a PV (EBS), it cannot be moved to another AZ.
Depending on version and configuration, Karpenter may provision new capacity without accounting for the volume's AZ (volume topology awareness was only added in later releases).
Kubernetes cannot attach an EBS volume across AZs.
🚫 4. Misconception: Will Kubernetes Create a New PV in AZ2?
❌ No.
Once a PVC is bound to a PV, Kubernetes will not create a new PV in another AZ automatically.
To do that, you'd have to:
Create a new PVC.
Snapshot the original EBS volume.
Manually provision a new PV in AZ2 from that snapshot.
✅ 5. Correct Solution Strategies
5.1 Use volumeBindingMode: WaitForFirstConsumer
This is the most important fix.
It delays the EBS volume creation until the Pod is scheduled.
This ensures the EBS volume is created in the same AZ as the node where the Pod lands.
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
```
5.2 Make Karpenter AZ-aware
Use `requirements` in your Karpenter Provisioner to control AZ selection:
```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: az1-provisioner
spec:
  requirements:
    - key: topology.kubernetes.io/zone
      operator: In
      values: ["us-east-1a"] # Replace with your target AZ
```

(Note: `kind: Provisioner` belongs to the legacy `karpenter.sh/v1alpha5` API; on Karpenter v0.32+ the equivalent object is a `NodePool` under `karpenter.sh/v1beta1`.)
You can also create multiple provisioners, one per AZ.
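Alternatively, a single provisioner can span several AZs and let the scheduler pick the zone, which works well once `WaitForFirstConsumer` is in place. A sketch (shown with the legacy `v1alpha5` Provisioner API; zone values are illustrative):

```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: multi-az
spec:
  requirements:
    - key: topology.kubernetes.io/zone
      operator: In
      values: ["us-east-1a", "us-east-1b", "us-east-1c"]  # every AZ you allow
```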
5.3 Use EFS Instead of EBS for Multi-AZ Access
EFS (Elastic File System) is region-wide, not AZ-bound.
Use `volumeBindingMode: Immediate` with EFS. It is ideal when Pods need to run in any AZ and share the same storage.
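A minimal EFS setup might look like the following sketch. It assumes the AWS EFS CSI driver is installed, and `fs-0123456789abcdef0` is a placeholder for your real file system ID:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
volumeBindingMode: Immediate        # safe for EFS: the file system is reachable from every AZ
parameters:
  provisioningMode: efs-ap          # dynamic provisioning via EFS access points
  fileSystemId: fs-0123456789abcdef0  # placeholder: your EFS file system ID
  directoryPerms: "700"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany                 # EFS supports many writers across AZs
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi                  # EFS does not enforce this size, but the field is required
```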
🔁 6. What if I Already Have a PVC Bound to a PV in Another AZ?
You have two options:
Option 1: Wait for a Node in the Same AZ
Ensure Karpenter can provision nodes in the AZ of the existing volume.
Pod will eventually reschedule.
Option 2: Migrate the Volume
Snapshot the old EBS volume.
Create a new volume in the new AZ.
Manually create a new PV and PVC.
Redeploy your Pod with the new PVC.
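If the CSI snapshot controller and a `VolumeSnapshotClass` are installed in your cluster, the migration steps above can be sketched declaratively rather than by hand-crafting a PV. All names below are illustrative:

```yaml
# 1. Snapshot the existing volume (via its PVC)
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-pvc-snap
spec:
  volumeSnapshotClassName: ebs-csi-snapclass  # assumes an EBS VolumeSnapshotClass exists
  source:
    persistentVolumeClaimName: data-pvc
---
# 2. Restore it into a new PVC. Because EBS snapshots are regional and the
#    StorageClass uses WaitForFirstConsumer, the new EBS volume is created
#    in whichever AZ the rescheduled Pod lands in.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc-restored
spec:
  storageClassName: ebs-sc
  dataSource:
    name: data-pvc-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```

Then point your workload at `data-pvc-restored` and redeploy.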
🧾 7. Full Working Example: EBS + Karpenter + PVC
Step 1: StorageClass
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-sc
provisioner: ebs.csi.aws.com
volumeBindingMode: WaitForFirstConsumer
```
Step 2: Karpenter Provisioner
```yaml
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: az-aware
spec:
  requirements:
    - key: topology.kubernetes.io/zone
      operator: In
      values: ["us-east-1a"]
  ttlSecondsAfterEmpty: 30
  providerRef:
    name: default
```
Step 3: PVC and Pod
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ebs-sc
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-ebs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-ebs
  template:
    metadata:
      labels:
        app: test-ebs
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sleep", "3600"]
          volumeMounts:
            - mountPath: /data
              name: ebs-volume
      volumes:
        - name: ebs-volume
          persistentVolumeClaim:
            claimName: data-pvc
```
✅ 8. Summary Checklist
| Task | Required? | Why? |
| --- | --- | --- |
| Use `WaitForFirstConsumer` | ✅ | Aligns the EBS volume with the Pod's AZ |
| Make Karpenter AZ-aware | ✅ | Prevents mismatched scheduling |
| Use EFS for cross-AZ storage | Optional | Shared volumes across AZs |
| Avoid expecting a PVC to rebind | ✅ | Rebinding across AZs is not supported |
| Handle recovery via snapshots | ✅ | Manual recovery if an AZ fails |
🙋 Final Thoughts
If you're using Karpenter with EKS, especially with EBS-backed PVCs, you must be aware of the zone affinity limitations. Kubernetes and Karpenter can work beautifully together — as long as your configuration is AZ-aware.
Don’t let a simple AZ mismatch stall your workloads.
Use `WaitForFirstConsumer` and AZ-aware provisioning, and your storage strategy will be production-ready.
Written by lokeshmatetidevops1