🚀 Seamless Data Transfer via Rsync Between 2 Different EC2 Instances Mounted with EBS Volumes


🎯 Aim:
To securely transfer files from one EC2 instance to another using the rsync utility, where the data resides on an attached and mounted EBS volume, while preserving permissions and ensuring efficiency.
📌 Overview:
In this task, we used two EC2 instances, each with its own EBS volume mounted at /mydata. Our goal was to transfer the contents from EC2 Instance 1 to EC2 Instance 2 using rsync, excluding system files like lost+found.
We ensured:
🔐 Secure SSH key-based connection
🚫 Proper exclusion of unnecessary system folders
✅ Verified and successful file transfer
🛠️ Fixed permission issues
🧱 Infrastructure Setup:
| Component | Details |
| --- | --- |
| EC2 Instance 1 | Source, with EBS mounted at /mydata |
| EC2 Instance 2 | Destination, with EBS mounted at /mydata |
| SSH Key | corextech.pem |
| File Transferred | ebs-test.txt |
| Tool Used | rsync |
🔧 Step-by-Step Execution
Step 1️⃣: Create EC2 Instances
Launch two EC2 instances (Source & Target) in the same VPC/Subnet for easier SSH connection.
Use the Ubuntu AMI and make sure both have access to the same key pair (e.g., corextech.pem).
Step 2️⃣: Create and Attach EBS Volume to Source EC2
Go to Elastic Block Store > Volumes
Create a new EBS volume (e.g., 1 GB, same AZ as the Source EC2)
Select the volume → Actions > Attach volume → Choose your Source EC2
The device name will be something like /dev/xvdh
Step 3️⃣: Mount the EBS Volume on Both EC2 Instances
✅ Note: it's not mandatory that both source and destination have EBS volumes attached; any directory on the destination can serve as the target.
SSH into both the Source and Destination EC2 and run:
lsblk # Check that /dev/xvdh is visible
sudo mkfs -t ext4 /dev/xvdh # Format the volume (this erases any existing data on it)
sudo mkdir /mydata # Create a mount point
sudo mount /dev/xvdh /mydata # Mount the volume
✅ Now the volume is ready to store data.
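Before writing data, it can help to confirm that the volume is really mounted where you expect. A minimal sketch of such a check (the `is_mounted` helper name is mine, not part of the original task): it looks the path up in `/proc/mounts`, where field 2 of each line is an active mount point.

```shell
#!/usr/bin/env bash
# Sketch: verify that a path is a live mount point before using it.
# is_mounted is a hypothetical helper, not something from the original task.
is_mounted() {
    # /proc/mounts lists every active mount; field 2 is the mount point.
    awk -v target="$1" '$2 == target { found = 1 } END { exit !found }' /proc/mounts
}

if is_mounted /mydata; then
    echo "/mydata is mounted"
else
    echo "/mydata is NOT mounted -- run: sudo mount /dev/xvdh /mydata"
fi
```

Running this right after `sudo mount` catches the common mistake of writing files into the empty mount-point directory instead of the actual volume.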
Step 4️⃣: Create Sample Data
Write a test file to check transfer:
cd /mydata
echo "This is task for EBS-Task performed by Apurv Gujjar" > ebs-test.txt
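To make the later verification step stronger, you can also record a checksum of the test file before transferring it. A small sketch of the idea (it uses a temp directory so it runs anywhere; on the instance you would do this inside /mydata):

```shell
# Sketch: record a checksum before transfer so integrity can be verified afterwards.
# A temp dir is used here for portability; on the instance, work inside /mydata.
DATA_DIR=$(mktemp -d)
cd "$DATA_DIR"
echo "This is task for EBS-Task performed by Apurv Gujjar" > ebs-test.txt
# sha256sum writes "<hash>  <filename>"; store it next to the data.
sha256sum ebs-test.txt > ebs-test.txt.sha256
# After the transfer, the same command re-checks the file against the stored hash.
sha256sum -c ebs-test.txt.sha256
```

If the `.sha256` file is synced along with the data, `sha256sum -c` on the destination gives a byte-level confirmation that nothing was corrupted in transit.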
🔄 Step 5️⃣: Change Ownership of the Mounted EBS Volume 👑
After mounting your EBS volume (e.g., at /mydata), it is owned by the root user by default. This can block the ubuntu user from accessing or writing files, which matters especially for tools like rsync.
🎯 Aim:
Grant full ownership of the mounted volume to the ubuntu user so it can use the directory without permission issues.
🛠️ Command:
sudo chown -R ubuntu:ubuntu /mydata
🔹 -R: applies the change recursively
🔹 ubuntu:ubuntu: sets the user and group ownership
🔹 /mydata: path to your mounted volume (adjust if different)
✅ Why it’s Important:
Without changing ownership, you may get Permission Denied errors during file transfers. This step ensures smooth read/write access for the EC2 instance’s main user.
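You can confirm the ownership change took effect with `stat`. A quick sketch, demonstrated here on a temp directory with the current user standing in for `ubuntu` (on the instance, you would point it at /mydata and expect `ubuntu:ubuntu`):

```shell
# Sketch: verify who owns a directory after chown.
# On the instance the expected output is "ubuntu:ubuntu" for /mydata;
# here a temp dir and the current user are used so the demo runs anywhere.
dir=$(mktemp -d)
chown "$(id -un):$(id -gn)" "$dir"   # no sudo needed when you already own the path
# stat -c '%U:%G' prints the owner and group by name (GNU coreutils syntax).
stat -c '%U:%G' "$dir"
```

If `stat` still reports `root:root` on the instance, the recursive `chown` did not reach that path.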
🔐 Step 6️⃣: Add Private Key to Source EC2 for Remote Access via Rsync
To allow the source EC2 instance to connect securely to the destination EC2 instance using rsync, you need to place the destination EC2's .pem file on the source EC2.
🎯 Why This Is Needed?
The source initiates the rsync connection over SSH to the destination. Therefore, the source EC2 needs the destination's private key to authenticate and establish the secure connection.
🛠️ Step-by-Step:
Create a new file to store your destination’s private key:
vim [your-key-name.pem]
Paste your destination EC2’s private key (from the .pem file you downloaded while creating the destination EC2). Save and exit (Esc, then :wq in vim). Secure the key by updating its permissions:
chmod 400 [your-key-name.pem]
✅ Now your source EC2 can securely connect to the destination EC2 using this key, enabling rsync to transfer files smoothly.
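SSH silently refuses keys that are readable by other users, so it is worth confirming the mode actually ended up as 400. A small sketch, using a dummy file in place of your real .pem:

```shell
# Sketch: confirm the key file mode is 400, as SSH requires for private keys.
# A throwaway temp file stands in for the real .pem here.
key=$(mktemp)
chmod 400 "$key"
# stat -c '%a' prints the octal permission bits (GNU coreutils syntax).
stat -c '%a' "$key"   # 400 = read-only for the owner, no access for anyone else
```

If the output is anything other than 400 (e.g., 644), ssh will reject the key with an "UNPROTECTED PRIVATE KEY FILE" warning and rsync will fail to connect.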
⚙️ Step 7️⃣: Rsync Command Execution from EC2-1 (Source)
📍 On EC2-1, run the following command to transfer the file to EC2-2 using rsync over SSH:
rsync -avz --exclude 'lost+found' -e "ssh -i /home/ubuntu/corextech.pem" /mydata/ ubuntu@172.31.0.110:/mydata/
🧠 Command Breakdown:
rsync → the main utility used to synchronize files and directories between two locations.
-a (archive mode) → preserves permissions, symbolic links, file ownership, and timestamps. Ideal for full backups.
-v (verbose) → displays detailed output of the transfer process.
-z (compress) → compresses data during transfer to reduce network load (and often speed up transfers over slow links).
--exclude 'lost+found' → skips the lost+found directory (present on ext file systems) to avoid unnecessary syncing.
-e "ssh -i /home/ubuntu/corextech.pem" → uses SSH for secure data transfer, with a specific private key (corextech.pem) for authentication.
/mydata/ → source directory on EC2-1 (local instance) to be synced.
ubuntu@172.31.0.110:/mydata/ → destination path on EC2-2 (remote instance) where the data will be copied.
🔐 This command securely:
Connects to EC2-2 using SSH (-e "ssh -i corextech.pem")
Transfers /mydata/ebs-test.txt from EC2-1 to the /mydata directory on EC2-2
Maintains file permissions and timestamps, and compresses data during transfer
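The full command can also be assembled from variables, which makes it easy to adapt the key, host, and paths, and to add `--dry-run` for a safe preview. A sketch (the variable names are mine; the values mirror the ones used in this task, and the command is printed rather than executed so it can be reviewed first):

```shell
# Sketch: assemble the rsync invocation from variables for review and reuse.
# KEY, DEST_HOST, SRC and DEST mirror the values used in this task.
KEY=/home/ubuntu/corextech.pem
DEST_HOST=ubuntu@172.31.0.110
SRC=/mydata/
DEST=/mydata/
# --dry-run makes rsync report what it *would* transfer without copying anything.
CMD=(rsync -avz --dry-run --exclude 'lost+found' -e "ssh -i $KEY" "$SRC" "$DEST_HOST:$DEST")
# Print the assembled command instead of running it, so it can be inspected first.
printf '%s ' "${CMD[@]}"; echo
```

Once the printed command looks right, drop `--dry-run` and run `"${CMD[@]}"` directly to perform the real transfer.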
📤 After running, rsync prints the list of transferred files followed by a short summary of bytes sent and received.
Step 8️⃣: Rsync Transfer Completed Successfully
After running the rsync command from EC2-1, we accessed EC2-2 to confirm the file transfer was successful.
We verified it using the following commands on EC2-2:
ls -l /mydata
This listed the contents of /mydata, showing that the file ebs-test.txt was present.
cat /mydata/ebs-test.txt
✅ Success! We have now securely and efficiently transferred a file between two EC2 instances using rsync over SSH.
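Beyond eyeballing the file with `cat`, comparing checksums gives a byte-level guarantee that the copy is identical. A local sketch of the idea, with `cp` standing in for the network transfer (on the real instances you would run `sha256sum /mydata/ebs-test.txt` on each side and compare the hashes):

```shell
# Sketch: verify a transfer by comparing checksums of the source and the copy.
# cp simulates the rsync transfer so this demo runs on a single machine.
src=$(mktemp)
dst=$(mktemp)
echo "This is task for EBS-Task performed by Apurv Gujjar" > "$src"
cp "$src" "$dst"   # stands in for: rsync ... "$src" remote:"$dst"
# Reading via stdin makes sha256sum print only "<hash>  -", so paths don't matter.
if [ "$(sha256sum < "$src")" = "$(sha256sum < "$dst")" ]; then
    echo "checksums match: transfer verified"
else
    echo "checksum mismatch: re-run rsync"
fi
```

This is the same check large backup pipelines rely on: identical SHA-256 hashes mean the two files are identical down to the last byte.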
✅ Conclusion
By following the above steps, we successfully transferred a file from one EC2 instance to another using rsync over SSH. This method is secure, efficient, and ideal for syncing files between servers with minimal overhead. It's a must-have skill for any DevOps or Cloud Engineer working with AWS infrastructure.
👨💻 About the Author
This series isn't just about using AWS; it's about mastering the core services that power modern cloud infrastructure.
📬 Let's Stay Connected
📧 Email: gujjarapurv181@gmail.com
🐙 GitHub: github.com/ApurvGujjar07
💼 LinkedIn: linkedin.com/in/apurv-gujjar
Written by

Gujjar Apurv
Gujjar Apurv is a passionate DevOps Engineer in the making, dedicated to automating infrastructure, streamlining software delivery, and building scalable cloud-native systems. With hands-on experience in tools like AWS, Docker, Kubernetes, Jenkins, Git, and Linux, he thrives at the intersection of development and operations. Driven by curiosity and continuous learning, Apurv shares insights, tutorials, and real-world solutions from his journey—making complex tech simple and accessible. Whether it's writing YAML, scripting in Python, or deploying on the cloud, he believes in doing it the right way. "Infrastructure is code, but reliability is art."