🚀 Using Amazon EFS as Persistent Storage in Amazon EKS with eksctl and OIDC


In this tutorial, you'll learn how to set up Amazon EFS as persistent storage for your EKS workloads using the EFS CSI driver, eksctl, and IAM OIDC integration. We'll provision everything from scratch, including the IAM role, the EFS file system, and a statically provisioned volume, and mount it in a simple containerized app.
📌 Prerequisites
AWS CLI and eksctl configured
An existing Amazon EKS cluster
IAM permissions to create roles, EFS file systems, and EC2 security groups
kubectl configured with your EKS cluster context
🧠 1. Associate an OIDC Provider with Your EKS Cluster
The Amazon EFS CSI driver uses IAM Roles for Service Accounts (IRSA). To enable this, associate the OIDC provider with your cluster:
export cluster_name=my-cluster
eksctl utils associate-iam-oidc-provider --cluster $cluster_name --approve
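To confirm the association worked, you can check that the cluster's OIDC issuer ID shows up among your account's IAM OIDC providers (an optional sanity check):
oidc_id=$(aws eks describe-cluster --name $cluster_name \
  --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)   # helper variable, name is arbitrary
aws iam list-open-id-connect-providers | grep $oidc_id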
🔐 2. Create IAM Role and Service Account for the EFS CSI Driver
We'll create an IAM role with the AmazonEFSCSIDriverPolicy managed policy attached and scope it to the driver's Kubernetes service account (the --role-only flag creates just the role; the add-on will create the service account itself):
export role_name=AmazonEKS_EFS_CSI_DriverRole
eksctl create iamserviceaccount \
--name efs-csi-controller-sa \
--namespace kube-system \
--cluster $cluster_name \
--role-name $role_name \
--role-only \
--attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEFSCSIDriverPolicy \
--approve
Now patch the trust policy to support wildcard service account naming (EFS add-on compatibility):
TRUST_POLICY=$(aws iam get-role --output json --role-name $role_name --query 'Role.AssumeRolePolicyDocument' | \
sed -e 's/efs-csi-controller-sa/efs-csi-*/' -e 's/StringEquals/StringLike/')
aws iam update-assume-role-policy --role-name $role_name --policy-document "$TRUST_POLICY"
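To double-check the patch, print the updated trust policy and make sure the condition now uses StringLike with the efs-csi-* wildcard:
aws iam get-role --role-name $role_name \
  --query 'Role.AssumeRolePolicyDocument' --output json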
📦 3. Install the Amazon EFS CSI Driver Add-on
From the AWS Console:
Go to EKS → Clusters → Add-ons → Get more add-ons
Search for EFS CSI Driver
Choose the EFS CSI driver and skip creating a new IAM service account, since we already created the role in step 2
Install the driver
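If you prefer to stay in the terminal, the same add-on can also be installed with the CLI; this is a sketch that attaches the role we created as the add-on's service-account role, looking up your account ID via STS:
account_id=$(aws sts get-caller-identity --query Account --output text)   # helper variable, name is arbitrary
aws eks create-addon \
  --cluster-name $cluster_name \
  --addon-name aws-efs-csi-driver \
  --service-account-role-arn arn:aws:iam::${account_id}:role/$role_name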
📂 4. Provision Amazon EFS File System for EKS
Get your VPC ID and CIDR range
vpc_id=$(aws eks describe-cluster \
--name $cluster_name \
--query "cluster.resourcesVpcConfig.vpcId" --output text)
cidr_range=$(aws ec2 describe-vpcs \
--vpc-ids $vpc_id \
--query "Vpcs[].CidrBlock" --output text)
Create security group for EFS
security_group_id=$(aws ec2 create-security-group \
--group-name MyEfsSecurityGroup \
--description "My EFS SG" \
--vpc-id $vpc_id \
--output text)
aws ec2 authorize-security-group-ingress \
--group-id $security_group_id \
--protocol tcp \
--port 2049 \
--cidr $cidr_range
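If you want to confirm the NFS rule was added, you can inspect the group (optional):
aws ec2 describe-security-groups --group-ids $security_group_id \
  --query 'SecurityGroups[0].IpPermissions'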
Create the EFS file system and mount targets. Replace region-code with your AWS Region:
file_system_id=$(aws efs create-file-system \
--region region-code \
--performance-mode generalPurpose \
--query 'FileSystemId' \
--output text)
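The file system takes a moment to become ready; you can poll its lifecycle state and wait for it to report available before creating mount targets:
aws efs describe-file-systems --file-system-id $file_system_id \
  --query 'FileSystems[0].LifeCycleState' --output text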
Find the subnets in your VPC:
aws ec2 describe-subnets \
--filters "Name=vpc-id,Values=$vpc_id" \
--query 'Subnets[*].{SubnetId: SubnetId,AvailabilityZone: AvailabilityZone,CidrBlock: CidrBlock}' \
--output table
Create a mount target in each subnet your nodes run in (repeat the command per subnet, or use the loop sketched below):
aws efs create-mount-target \
--file-system-id $file_system_id \
--subnet-id subnet-XXXXXXX \
--security-groups $security_group_id
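If your nodes span several subnets, a small loop saves some copy-pasting; this sketch simply creates a mount target in every subnet of the VPC, which is a superset of what you strictly need:
for subnet_id in $(aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=$vpc_id" \
  --query 'Subnets[*].SubnetId' --output text); do   # subnet_id is a loop variable, name is arbitrary
  aws efs create-mount-target \
    --file-system-id $file_system_id \
    --subnet-id $subnet_id \
    --security-groups $security_group_id
done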
📁 5. Configure Static Provisioning with Kubernetes Manifests
We'll use the following four manifests:
📦 storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
📦 pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: efs-pv
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  storageClassName: efs-sc
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: efs.csi.aws.com
    volumeHandle: fs-xxxxxxxx # Replace with your EFS FileSystemId
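If $file_system_id is still set in your shell, one way to drop it into the manifest is a quick sed (shown with GNU sed; on macOS the -i flag needs an empty suffix argument, so treat this as a sketch):
sed -i "s/fs-xxxxxxxx/$file_system_id/" pv.yaml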
📦 claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: efs-claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi
📦 pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: efs-app
spec:
  containers:
    - name: app
      image: rockylinux:8
      command: ["/bin/sh"]
      args: ["-c", "while true; do echo $(date -u) >> /data/out.txt; sleep 5; done"]
      volumeMounts:
        - name: persistent-storage
          mountPath: /data
  volumes:
    - name: persistent-storage
      persistentVolumeClaim:
        claimName: efs-claim
🚀 Deploy Everything!
kubectl apply -f storageclass.yaml
kubectl apply -f pv.yaml
kubectl apply -f claim.yaml
kubectl apply -f pod.yaml
Verify that the volume is bound:
kubectl get pv -w
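The claim should report Bound as well:
kubectl get pvc efs-claim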
And check your pod:
kubectl get pods
Inspect the file system:
kubectl exec -ti efs-app -- tail -f /data/out.txt
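Because the reclaim policy is Retain and the data lives on EFS rather than on the node, it survives the pod being replaced. A quick check using the same manifests as above:
kubectl delete pod efs-app
kubectl apply -f pod.yaml
kubectl exec -ti efs-app -- head /data/out.txt   # timestamps written by the old pod are still there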
✅ Conclusion
With just a few commands and manifests, you've connected Amazon EFS to your EKS workloads using the EFS CSI driver. This gives your workloads highly available, durable storage that survives pod restarts and can be shared across pods, making stateful applications more resilient.
If you want to go further, consider:
Dynamic provisioning with EFS access points (see the sketch after this list)
Using Helm charts for EFS driver installation
Monitoring I/O and performance with CloudWatch
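To give a flavour of the first item, dynamic provisioning replaces the hand-written PV with a StorageClass that creates an EFS access point per claim. A minimal sketch, using parameter names from the aws-efs-csi-driver documentation and a placeholder file system ID (the StorageClass name is arbitrary):
cat <<'EOF' | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-dynamic-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap      # one access point per PVC
  fileSystemId: fs-xxxxxxxx     # replace with your EFS FileSystemId
  directoryPerms: "700"         # permissions for each per-claim directory
EOF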