Deploying and Resizing EBS Volumes on AWS: A Step-by-Step Guide for Beginners

In the fast-paced world of cloud computing, AWS has emerged as a dominant force, offering a wide array of services and tools to cater to the ever-evolving needs of businesses. One such crucial feature is the Elastic Block Store (EBS) volume, which lets you attach additional block storage to your Amazon EC2 instances.

In this blog post, I'll walk you through launching a Linux EC2 instance, creating and attaching a 20 GiB EBS volume, and then resizing it to 30 GiB.

Task 1: Launch a Linux EC2 Instance

  1. Log in to the AWS console and click on EC2.

  2. There are no running instances yet, so we'll launch one. Click the Launch Instance button to start the EC2 instance creation process.

  3. Enter a name for the instance.

  4. Select the Linux AMI that best suits your needs. For example, you might choose "Amazon Linux 2" or "Ubuntu Server." I'm choosing Amazon Linux OS from the suggestions.

  5. Select the instance type that matches your requirements in terms of CPU, memory, and other resources. You can choose a Free Tier-eligible option or scale up for more power. I'm keeping the default instance type (t2.micro).

  6. Select one of the key pairs from the suggestions.

    Create a new one if no key pair exists yet in the current region.

    Click on Create key pair and download the .pem file. Later, we’ll use PuTTYgen to create a .ppk file to access the EC2 instance.

  7. In the network settings, select the check boxes to allow SSH and HTTP traffic.

  8. Configure the amount of storage you want for your instance. The default 8 GiB gp2 root volume is enough for a basic web server.

  9. Review the summary of the EC2 instance and click Launch Instance. It'll take a moment for the instance to initialize.

  10. The newly created instance is launched in an Availability Zone of the us-east-1 Region. Copy its public IP address; we'll use it to access the instance from PuTTY.

  11. Open PuTTY and paste the copied public IP into the Host Name (or IP address) field.

  12. Then go to Connection -> SSH -> Auth -> Credentials.

    We need a PuTTY private key (.ppk) file to authenticate to the instance, so we'll open PuTTYgen and generate a .ppk file from the downloaded .pem file.

    Generating .ppk from .pem:

  13. Select the generated .ppk file and then click on Open.

  14. Click Accept on the security alert; PuTTY connects to the instance.

    Enter ec2-user as the username to log in to the instance.

    The instance is ready for use.
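
If you're on Linux or macOS (or have OpenSSH installed on Windows), you can skip PuTTY entirely and connect with the downloaded .pem file directly. The key file name and IP address below are placeholders for your own values:

    chmod 400 my-key.pem                     # the key file must not be readable by others
    ssh -i my-key.pem ec2-user@<public-ip>   # ec2-user is the default user on Amazon Linux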


Task 2: Create an EBS Volume and Attach It to the Instance

  1. Now we'll create an EBS volume with 20 GiB of storage and attach it to the EC2 instance we just launched. On the same EC2 page, scroll down in the left navigation pane to Elastic Block Store (EBS) and click on Volumes.

    Currently, it shows only the root volume of the instance launched in the previous task. Click on Create Volume.

  2. Enter 20 GiB for the size and select the same Availability Zone that our instance is running in (an EBS volume can only be attached to an instance in the same Availability Zone).

    Keep the remaining options as they are and click on Create Volume.

  3. The volume is successfully created and its state shows Available.

  4. Now we'll attach this volume to our instance. Go to Actions -> Attach Volume.

  5. Select the instance to which we want to attach our newly created volume and click on Attach Volume.

  6. The volume is successfully attached to our instance. Now switch back to the PuTTY session on the instance.

  7. Type lsblk to list the available block devices; notice that there is no mount point for xvdf (the newly attached volume).

The lsblk command
It lists information about all available or specified block devices. It reads the sysfs filesystem and udev db to gather the information.
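
At this point the output looks roughly like the sketch below (device names and major/minor numbers vary); xvdf is the new 20 GiB volume and has nothing in its MOUNTPOINT column:

    $ lsblk
    NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
    xvda    202:0    0   8G  0 disk
    └─xvda1 202:1    0   8G  0 part /
    xvdf    202:80   0  20G  0 disk
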
  8. We'll run the file command on both devices. It detects xvda1 as the root partition with an existing filesystem, and xvdf as just data (no filesystem yet).

The output of the file command is different for the two devices (xvda and xvdf), so we'll create a filesystem on xvdf.

The file command
It determines the file type.
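
For reference, the checks from this step look like this (the -s flag tells file to read special block devices):

    sudo file -s /dev/xvda1   # reports the existing filesystem on the root partition
    sudo file -s /dev/xvdf    # reports just "data", i.e. no filesystem yet
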
  9. Use mkfs to create a filesystem on /dev/xvdf.

    The mkfs command
    It builds a Linux filesystem on a device, usually a hard disk partition.
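
A minimal sketch of this step, assuming we format the new device as ext4 (which is what the resize2fs step later relies on):

    sudo mkfs -t ext4 /dev/xvdf   # build an ext4 filesystem on the new volume
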
  10. We'll make a new directory named ebs (mkdir ebs) and mount the new filesystem on it.

    Now, there is a mount point for xvdf.

    The mount command
    It attaches the filesystem found on a device to the system's file tree.
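
The commands for this step, assuming the ebs directory is created in the current (home) directory as in the walkthrough:

    mkdir ebs                  # directory that will serve as the mount point
    sudo mount /dev/xvdf ebs   # attach the new filesystem at ./ebs
    lsblk                      # xvdf now shows a mount point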

Go to the newly created directory ebs and create a new text file.

1.txt is created. Now we have a new EBS volume attached, formatted, and mounted on our EC2 instance.
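
For example (sudo is needed because the freshly mounted filesystem is owned by root):

    cd ebs
    sudo touch 1.txt   # create a test file on the new volume
    ls                 # 1.txt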


Task 3: Resize the Attached Volume and Verify That the Change Is Reflected on the Instance

  1. Go to AWS Console -> EC2 -> Elastic Block Store -> Volumes. Select the volume and choose Actions -> Modify Volume.

  2. Change the size from 20 GiB to 30 GiB and click on Modify.

  3. Modification is in progress and the size is still 20 GiB.

    After modification, the size is changed to 30 GiB.

  4. Now we'll check the device information on the instance. Here lsblk shows 30 GiB, but df still shows 20 GiB for the mounted filesystem.

    The df command
    It displays the amount of disk space available on the file system containing each file name argument.
  5. We'll use the resize2fs command so the filesystem grows to use the new space (the full command sequence is sketched after this list).

    Now df shows 30 GiB, and we can still see the 1.txt file that was created in the previous task.

    The resize2fs program
    It will resize ext2, ext3 or ext4 file systems. It can be used to enlarge or shrink an unmounted file system located on a device. If the file system is mounted, it can be used to expand the size of the mounted file system, assuming the kernel and the file system support online resizing.
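
Putting steps 4 and 5 together, the check-and-resize sequence looks like this, assuming the volume was formatted as ext4 and mounted at ~/ebs as above:

    lsblk                      # the block device now reports 30G
    df -h ~/ebs                # the mounted filesystem still reports ~20G
    sudo resize2fs /dev/xvdf   # grow the ext4 filesystem to fill the resized volume
    df -h ~/ebs                # now reports ~30G, and 1.txt is still there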

And that's it! We've successfully launched a Linux EC2 instance, attached an EBS volume, and resized it to meet our requirements.
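
For readers comfortable with the AWS CLI, the console actions above map roughly to the commands below. The AMI ID, key name, security group, volume/instance IDs, and Availability Zone are placeholders, not values taken from this walkthrough:

    # Task 1: launch a t2.micro instance
    aws ec2 run-instances --image-id ami-xxxxxxxx --instance-type t2.micro \
        --key-name my-key --security-group-ids sg-xxxxxxxx

    # Task 2: create a 20 GiB volume in the instance's Availability Zone and attach it
    aws ec2 create-volume --size 20 --volume-type gp2 --availability-zone us-east-1a
    aws ec2 attach-volume --volume-id vol-xxxxxxxx --instance-id i-xxxxxxxx --device /dev/sdf

    # Task 3: grow the volume to 30 GiB (then run resize2fs on the instance)
    aws ec2 modify-volume --volume-id vol-xxxxxxxx --size 30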

Conclusion:
AWS provides a powerful and flexible platform for cloud computing, and understanding how to work with resources like EC2 instances and EBS volumes is essential for any AWS user. In this blog post, I've walked you through the process of launching an EC2 instance, creating an EBS volume, and resizing it when needed. With this knowledge, you can confidently deploy and manage resources in your AWS environment, ensuring that your infrastructure is always in line with your evolving requirements.

Written by

SIDDHANI VAMSI SAI KUMAR

I've spent over 9 years working in software development for the Indian Defense industry. I'm skilled in C++, Qt, Socket Programming, Multi-Threading, BASH Scripting, and CUDA, which have all been crucial for projects in defense. Right now, I'm learning about cloud computing and DevOps, especially focusing on AWS. I'm passionate about making software development and deployment smoother and more reliable using cloud technology. Looking ahead, I'm excited about roles in cloud computing and DevOps. I want to use my software skills and knowledge of AWS and DevOps to lead exciting projects and make a difference in the tech world.