Amazon EFS: Elastic File System


Introduction

Today we will delve into Amazon Elastic File System (EFS), a scalable, fully managed, elastic NFS file system for use with AWS Cloud services and on-premises resources. EFS is designed to provide the throughput, IOPS, and low latency required for a broad range of workloads, from content repositories and development environments to big data and media processing applications.

In this blog we will explore the key features of Amazon EFS, how to set up and manage file systems, and how to integrate EFS with EC2 instances. We'll provide detailed examples to help you get started with Amazon EFS effectively.

Understanding Amazon EFS

Key Features of Amazon EFS

  1. Scalability: EFS automatically scales your file system storage up or down as you add or remove files, providing virtually unlimited storage capacity.

  2. Performance: EFS offers consistently low latencies and high throughput for a wide range of workloads.

  3. Durability and Availability: Data stored in EFS is redundantly stored across multiple Availability Zones (AZs) in an AWS Region.

  4. Integration: EFS integrates seamlessly with other AWS services, including Amazon EC2, AWS Lambda, and AWS Backup.

  5. Security: EFS provides robust security features, including VPC network isolation, AWS IAM for access control, and encryption of data at rest and in transit.

Setting Up Amazon EFS

Step 1: Create an EFS File System

To create an EFS file system, follow these steps:

  1. Open the Amazon EFS console at https://console.aws.amazon.com/efs/.

  2. Click "Create file system".

  3. Choose your VPC and the subnets where you want to create mount targets.

  4. Select the throughput mode and lifecycle management settings.

  5. Review and create the file system.

Example using the AWS CLI:

aws efs create-file-system \
    --creation-token my-efs \
    --performance-mode generalPurpose \
    --throughput-mode bursting
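
Creation takes a few moments. Before adding mount targets, you can confirm the file system has become available; a quick check using the same creation token as above:

aws efs describe-file-systems --creation-token my-efs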

Step 2: Create Mount Targets

Mount targets enable your EC2 instances to access the EFS file system. To create mount targets, follow these steps:

  1. In the Amazon EFS console select the file system you created.

  2. Click "Add mount target".

  3. Select the VPC and subnets.

  4. Choose the security groups.

  5. Click "Create mount targets".

Example using the AWS CLI:

aws efs create-mount-target \
    --file-system-id fs-12345678 \
    --subnet-id subnet-12345678 \
    --security-groups sg-12345678
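
For the mount to succeed, the security group attached to the mount target must allow inbound NFS traffic (TCP port 2049) from your EC2 instances. A sketch, where sg-87654321 is a placeholder for the security group attached to your instances:

aws ec2 authorize-security-group-ingress \
    --group-id sg-12345678 \
    --protocol tcp \
    --port 2049 \
    --source-group sg-87654321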

Step 3: Mount the File System on EC2 Instances

To mount the EFS file system on your EC2 instances, follow these steps:

  1. Connect to your EC2 instance via SSH.

  2. Install the NFS client package.

For Amazon Linux:

sudo yum install -y amazon-efs-utils

For Ubuntu:

sudo apt-get install -y nfs-common

Note that the mount -t efs helper used below is provided by amazon-efs-utils, which nfs-common does not include; with only nfs-common installed you can mount over plain NFSv4 instead, as shown after these steps.

  3. Create a directory to mount the file system.

sudo mkdir /mnt/efs

  4. Mount the EFS file system.

sudo mount -t efs fs-12345678:/ /mnt/efs

  5. To make the mount persistent, add the following entry to /etc/fstab.

fs-12345678:/ /mnt/efs efs defaults,_netdev 0 0
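
If you only have the NFS client installed, a plain NFSv4 mount works as well. A minimal sketch, assuming the file system's DNS name follows the usual fs-12345678.efs.<region>.amazonaws.com pattern and the region is us-west-2 (both placeholders):

sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport \
    fs-12345678.efs.us-west-2.amazonaws.com:/ /mnt/efs

Either way, you can verify the mount afterwards with df -hT /mnt/efs.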

Managing Amazon EFS

Monitoring and Metrics

Amazon EFS provides several metrics through Amazon CloudWatch, including throughput, IOPS, and storage size. You can create CloudWatch alarms to monitor these metrics and receive notifications when certain thresholds are exceeded.

Example of creating a CloudWatch alarm using the AWS CLI:

aws cloudwatch put-metric-alarm \
    --alarm-name EFS-Storage-Alarm \
    --metric-name StorageBytes \
    --namespace AWS/EFS \
    --dimensions Name=FileSystemId,Value=fs-12345678 Name=StorageClass,Value=Total \
    --statistic Average \
    --period 300 \
    --threshold 1000000000 \
    --comparison-operator GreaterThanThreshold \
    --evaluation-periods 1 \
    --alarm-actions arn:aws:sns:us-west-2:123456789012:MySNSTopic

Backup and Restore

You can use AWS Backup to create and manage backups of your EFS file systems. AWS Backup provides a central place to automate and manage backups across AWS services.

Example of creating a backup plan using the AWS CLI:

aws backup create-backup-plan \
    --backup-plan '{"BackupPlanName": "MyBackupPlan", "Rules": [{"RuleName": "DailyBackup", "TargetBackupVaultName": "Default", "ScheduleExpression": "cron(0 12 * * ? *)", "Lifecycle": {"MoveToColdStorageAfterDays": 30, "DeleteAfterDays": 365}}]}'
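
A backup plan protects nothing until resources are assigned to it. A minimal sketch that assigns the example file system to the plan, where the plan ID, IAM role ARN, and file system ARN are all placeholders:

aws backup create-backup-selection \
    --backup-plan-id <backup-plan-id> \
    --backup-selection '{"SelectionName": "MyEfsSelection", "IamRoleArn": "arn:aws:iam::123456789012:role/service-role/AWSBackupDefaultServiceRole", "Resources": ["arn:aws:elasticfilesystem:us-west-2:123456789012:file-system/fs-12345678"]}'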

Data Lifecycle Management

EFS Lifecycle Management automatically moves files that have not been accessed for a set period (for example, 30, 60, or 90 days) to a lower-cost storage class, EFS Infrequent Access (IA).

To enable Lifecycle Management:

  1. In the Amazon EFS console select the file system.

  2. Click "Edit lifecycle policies".

  3. Choose the policy and apply it.

Example using the AWS CLI:

aws efs put-lifecycle-configuration \
    --file-system-id fs-12345678 \
    --lifecycle-policies '[{"TransitionToIA": "AFTER_30_DAYS"}]'
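
To confirm the policy is in place, you can read it back for the same (placeholder) file system:

aws efs describe-lifecycle-configuration --file-system-id fs-12345678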

Integration with EC2 Instances

NFS Mount Options

When mounting EFS file systems on EC2 instances, you can use various NFS mount options to optimize performance and security.

Example of mounting with NFS options:

sudo mount -t efs -o tls,ro fs-12345678:/ /mnt/efs

  • tls: Encrypts data in transit using Transport Layer Security (TLS).

  • ro: Mounts the file system as read-only.
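
The EFS mount helper accepts further options beyond tls and ro. A sketch that also enables IAM authorization and mounts through an access point, where fsap-12345678 is a placeholder access point ID:

sudo mount -t efs -o tls,iam,accesspoint=fsap-12345678 fs-12345678:/ /mnt/efs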

EFS with Amazon ECS

You can use EFS as a persistent storage solution for Amazon Elastic Container Service (ECS). This allows your ECS tasks to share data and state across containers.

Example of creating an ECS task definition with EFS:

{
  "family": "my-ecs-task",
  "containerDefinitions": [
    {
      "name": "my-container",
      "image": "nginx",
      "mountPoints": [
        {
          "sourceVolume": "my-efs-volume",
          "containerPath": "/usr/share/nginx/html"
        }
      ]
    }
  ],
  "volumes": [
    {
      "name": "my-efs-volume",
      "efsVolumeConfiguration": {
        "fileSystemId": "fs-12345678",
        "rootDirectory": "/"
      }
    }
  ]
}
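
If you also want encryption in transit between the ECS host and EFS, the efsVolumeConfiguration block accepts a transitEncryption setting. A sketch of just the volumes section of the task definition above with that option added:

"volumes": [
  {
    "name": "my-efs-volume",
    "efsVolumeConfiguration": {
      "fileSystemId": "fs-12345678",
      "rootDirectory": "/",
      "transitEncryption": "ENABLED"
    }
  }
]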

EFS with AWS Lambda

AWS Lambda can also use EFS as a file system to store data that is shared between invocations.

Example of creating a Lambda function with EFS:

  1. Create an EFS access point.

aws efs create-access-point \
    --file-system-id fs-12345678 \
    --posix-user Uid=1000,Gid=1000 \
    --root-directory Path=/lambda

  2. Update the Lambda function configuration.

aws lambda update-function-configuration \
    --function-name my-lambda-function \
    --file-system-configs Arn=arn:aws:elasticfilesystem:us-west-2:123456789012:access-point/fsap-12345678,LocalMountPath=/mnt/efs
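
The function must also be attached to a VPC that can reach the file system's mount targets, and LocalMountPath must begin with /mnt. A sketch using the same placeholder subnet and security group IDs from earlier:

aws lambda update-function-configuration \
    --function-name my-lambda-function \
    --vpc-config SubnetIds=subnet-12345678,SecurityGroupIds=sg-12345678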

Best Practices for Amazon EFS

Security

  • VPC Security: Use VPC security groups and network ACLs to control access to your EFS file systems.

  • IAM Policies: Define and apply IAM policies to control access to EFS resources (a sample file system policy follows this list).

  • Encryption: Use encryption at rest and in transit to protect your data.
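
One way to combine these controls is an EFS file system policy, a resource-based policy evaluated alongside IAM. A minimal sketch, using the placeholder file system ID, that denies client access whenever the connection is not encrypted in transit:

aws efs put-file-system-policy \
    --file-system-id fs-12345678 \
    --policy '{"Version": "2012-10-17", "Statement": [{"Effect": "Deny", "Principal": {"AWS": "*"}, "Action": ["elasticfilesystem:ClientMount", "elasticfilesystem:ClientWrite", "elasticfilesystem:ClientRootAccess"], "Condition": {"Bool": {"aws:SecureTransport": "false"}}}]}'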

Performance

  • Throughput Modes: Choose the appropriate throughput mode (Bursting or Provisioned) based on your workload requirements (see the example after this list).

  • NFS Mount Options: Use appropriate NFS mount options to optimize performance.
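
Throughput mode can be changed on an existing file system without recreating it. A sketch that switches the placeholder file system to Provisioned Throughput, with 128 MiB/s chosen purely for illustration:

aws efs update-file-system \
    --file-system-id fs-12345678 \
    --throughput-mode provisioned \
    --provisioned-throughput-in-mibps 128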

Cost Management

  • Lifecycle Management: Enable Lifecycle Management to automatically move infrequently accessed files to a lower-cost storage class.

  • Monitor Usage: Use CloudWatch metrics to monitor your EFS usage and set up alarms to avoid unexpected costs.

Conclusion

Amazon EFS provides a scalable, elastic, and fully managed file storage solution that integrates seamlessly with various AWS services. By following the steps and best practices outlined in this blog, you can set up, manage, and optimize your EFS file systems to meet the needs of your applications. Whether you're using EFS for content management, big data analytics, or shared storage for containerized applications, Amazon EFS offers the flexibility and performance required for a wide range of workloads.

Stay tuned for more insights in our upcoming blog posts.

Let's connect and grow on LinkedIn: Click Here

Let's connect and grow on Twitter: Click Here

Happy Cloud Computing!!!

Happy Reading!!!

Sudha Yadav
