The Ultimate AWS EBS Workflow: EC2 Integration


📝 Abstract
In this guide, I demonstrate the complete lifecycle of working with AWS EC2 and EBS volumes, starting from launching an EC2 instance and attaching a new EBS volume, to preserving data by disabling Delete on Termination.
You’ll also learn how to:
Recover EBS data after EC2 termination
Migrate EBS volumes across Availability Zones using snapshots
Copy snapshots across AWS Regions to restore data remotely
This end-to-end workflow ensures data durability, flexibility, and disaster recovery readiness for modern cloud infrastructure setups.
✅ Step 1: Launch an EC2 Instance
To begin, launch a new EC2 instance:
Go to the EC2 Dashboard in AWS Management Console.
Click on “Launch Instance”.
Select an appropriate AMI (Amazon Machine Image), such as Ubuntu or Amazon Linux.
Choose an instance type (e.g., t2.micro for testing or t3.medium for production).
In the network section, you can keep the default VPC and Subnet settings — AWS will automatically manage networking unless you have a custom setup.
Continue with key pair, storage, and security group settings.
Once launched, the EC2 instance will automatically have a root EBS volume attached.
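If you prefer the command line, the same launch can be sketched with the AWS CLI. This is a minimal sketch only; the AMI ID, key pair name, and security group ID below are placeholders, not values from this guide:
# Launch one t2.micro instance from a chosen AMI (all IDs and names here are placeholders)
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name my-key-pair \
  --security-group-ids sg-0123456789abcdef0 \
  --count 1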
🔒 Understanding the “Delete on Termination” Setting in EC2
When launching an EC2 instance, AWS automatically attaches a root EBS volume to store the operating system and configuration files. By default, this root volume is deleted when the instance is terminated — but you can control this behavior.
📌 Why it's important to uncheck “Delete on Termination”:
🔐 Preserve critical data: Prevents the root EBS volume from being deleted, even if the EC2 instance is terminated.
🔄 Easily recover configuration: Retain OS setup, logs, and installed packages for later use or migration.
💼 Safe for production use: Especially helpful in staging or production environments where instance termination is temporary or planned.
☁️ Avoid accidental data loss: Ensures your volume (and its data) remains available in the EBS → Volumes section after termination.
🔧 Attach to a new EC2: The preserved volume can be re-attached to any EC2 instance in the same Availability Zone for instant recovery.
As shown below, AWS automatically creates and attaches a root EBS volume when you launch an EC2 instance.
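If the instance was already launched with the default behavior, the same flag can also be turned off afterwards from the AWS CLI. A minimal sketch, assuming a placeholder instance ID and a root device name of /dev/xvda (some AMIs use /dev/sda1 instead):
# Keep the root volume when the instance is terminated (instance ID is a placeholder)
aws ec2 modify-instance-attribute \
  --instance-id i-0123456789abcdef0 \
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"DeleteOnTermination":false}}]'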
✅ Step 2: Create and Attach a New EBS Volume to Your EC2 Instance
After launching your EC2 instance, you may need extra storage. For that, you can create a new EBS volume and attach it.
🔧 Steps to Create:
Go to EC2 → Elastic Block Store → Volumes → Create Volume
Select Volume Type, Size, and most importantly, choose the same Availability Zone as your EC2 instance
📌 EBS volumes can only be attached to EC2 instances in the same Availability Zone
🔗 Steps to Attach:
After creation, select the volume → Click Actions → Attach Volume
Choose your EC2 instance and confirm the device name (e.g., /dev/xvdf)
Click Attach — your volume is now connected to the instance
✅ You’ll now need to format and mount the volume inside the EC2, which we’ll do in the next step.
✅ Our new EBS volume has been successfully created and is now ready to be attached to the EC2 instance.
📌 Go to Actions → Attach Volume, then select your EC2 instance to attach the volume.
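The same create-and-attach flow can also be driven from the AWS CLI. A minimal sketch, with placeholder IDs and an example Availability Zone (replace these with your own, and keep the AZ identical to your instance's):
# Create a 10 GiB gp3 volume in the same AZ as the instance (the AZ here is only an example)
aws ec2 create-volume --availability-zone ap-south-1a --size 10 --volume-type gp3

# Attach it to the instance; the volume and instance IDs below are placeholders
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/xvdf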
✅ Step 3: Connect to EC2 and Mount the Attached EBS Volume
Once your EBS volume is attached, you need to connect to your EC2 instance and prepare the volume for use.
💻 Steps to Check and Mount:
SSH into your EC2 instance using a terminal or any SSH client (e.g., using a .pem key file)
Run the following to verify attached volumes:
lsblk
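For reference, a typical session looks something like this; the key file name and public IP are placeholders, and the login user depends on the AMI (ubuntu for Ubuntu, ec2-user for Amazon Linux). Note that on newer Nitro-based instances the attached volume may appear as /dev/nvme1n1 rather than /dev/xvdd, so use whichever device name lsblk reports in the steps below:
# Connect to the instance (placeholder key file and IP)
ssh -i my-key.pem ubuntu@203.0.113.10

# List block devices; the new volume has no mount point until we format and mount it
lsblk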
🧩 Step 4: Format the EBS Volume with ext4 File System
sudo mkfs -t ext4 /dev/xvdd
mkfs: Stands for “make file system” — used to format the disk.
-t ext4: Specifies the ext4 file system type (widely used in Linux).
/dev/xvdd: The name of the newly attached EBS volume.
📌 This command prepares the volume for storing files by formatting it.
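Before running mkfs, an optional safety check (assuming the device really is /dev/xvdd) is to confirm the volume has no existing file system, since formatting erases whatever is on it:
# "data" means the volume is blank and safe to format; any other output means a file system already exists
sudo file -s /dev/xvdd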
📁 Step 5: Create a Mount Point
sudo mkdir /mydata
mkdir: Command to make a new directory.
/mydata: This directory will act as the mount point for the EBS volume.
📌 This is where your EBS volume will be accessible from.
🔗 Step 6: Mount the Volume
sudo mount /dev/xvdd /mydata
mount: Command to attach the volume to a directory.
/dev/xvdd: The formatted EBS volume.
/mydata: The directory where the volume will be mounted.
📌 This makes the volume usable like a local disk through /mydata.
📊 Step 7: Verify the Mount Status
df -h
df: Disk free — shows storage usage.
-h: Human-readable format (GB/MB).
📌 This helps confirm whether the EBS volume is mounted and shows its usage.
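One optional extra, not part of the original steps: a mount made this way does not persist across reboots. If you want the volume to remount automatically, a common approach is an /etc/fstab entry keyed on the volume’s UUID:
# Find the volume's UUID
sudo blkid /dev/xvdd

# Append an fstab entry (replace the UUID placeholder with the value blkid printed)
echo 'UUID=<your-uuid>  /mydata  ext4  defaults,nofail  0  2' | sudo tee -a /etc/fstab

# Check that the entry mounts cleanly without a reboot
sudo mount -a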
📝 Step 8: Write Data into the Mounted Volume
echo "This is task for EBS performed by Apurv Gujjar" | sudo tee /mydata/ebs-test.txt
echo: Prints the given message.
| sudo tee: Pipes the message into a file with root permissions.
/mydata/ebs-test.txt: The file to create inside the mounted volume.
📌 This command creates a test file in the EBS volume to validate it's writable.
🔍 Step 9: Check Stored Data
ls /mydata
ls: Lists all files and folders in the directory.
/mydata: The mount point where the volume is attached.
📌 Helps you verify that the file (ebs-test.txt) exists.
cat /mydata/ebs-test.txt
cat: Used to display file content in the terminal.
/mydata/ebs-test.txt: The test file we just created.
📌 Confirms the file contains the correct data you wrote earlier.
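The later parts of this workflow mentioned in the abstract (recovering data, migrating across Availability Zones, and copying to another Region) build on snapshots. As a forward pointer, here is a minimal CLI sketch with placeholder IDs and example Regions:
# Snapshot the volume (volume ID is a placeholder)
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "ebs-demo snapshot"

# Copy the snapshot into another Region: run the command in the destination Region and point at the source
aws ec2 copy-snapshot --region us-east-1 --source-region ap-south-1 --source-snapshot-id snap-0123456789abcdef0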
👨‍💻 About the Author
This project is a deep dive into the AWS ecosystem designed to strengthen my foundation in cloud-native architecture, automation, and service integration using only AWS services.
From launching EC2 instances, managing storage with S3 and EBS, configuring IAM for secure access, and setting up VPCs and subnets, to automating infrastructure with CloudFormation, each service I used brought real-world relevance and clarity to cloud concepts.
This series isn't just about using AWS; it's about mastering the core services that power modern cloud infrastructure.
📬 Let's Stay Connected
📧 Email: gujjarapurv181@gmail.com
🐙 GitHub: github.com/ApurvGujjar07
💼 LinkedIn: linkedin.com/in/apurv-gujjar
💡 If you found this project useful, or have any suggestions or feedback, feel free to reach out or drop a comment. I’d love to connect and improve.
This is just the beginning; many more builds, deployments, and learnings are ahead.
Written by Gujjar Apurv
Gujjar Apurv is a passionate DevOps Engineer in the making, dedicated to automating infrastructure, streamlining software delivery, and building scalable cloud-native systems. With hands-on experience in tools like AWS, Docker, Kubernetes, Jenkins, Git, and Linux, he thrives at the intersection of development and operations. Driven by curiosity and continuous learning, Apurv shares insights, tutorials, and real-world solutions from his journey—making complex tech simple and accessible. Whether it's writing YAML, scripting in Python, or deploying on the cloud, he believes in doing it the right way. "Infrastructure is code, but reliability is art."