The Easiest Guide to Mount Points in Linux

What Is a Mount Point in Linux?
In Linux, a mount point is simply a folder (a directory) where a storage device or file system is attached so you can access its files. For example, when you plug in a USB drive, Linux might attach (mount) it at a folder like /media/usb or /mnt/usb. Once mounted, everything on that USB drive looks like it is inside that folder. This lets you use ordinary commands (like ls or cp) to view and copy files on the drive. In other words, a mount point is a "gateway" directory that makes another storage device appear as part of your regular file tree. You can create an empty folder anywhere (for example, mkdir /mnt/mydisk) and then mount a drive there to see and work with its files.
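You can see this idea on your own system: every path already lives under some mount point, and df shows which one. A quick sketch using standard commands:

```shell
# df -h lists each mounted file system, its size, and the directory
# (the "Mounted on" column) where it is attached.
df -h /    # shows that the root directory / is itself a mount point
```

The same works for any path (df -h /home, df -h /media/usb, and so on), which is a handy way to find out which device a file actually lives on.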
Installing the Mountpoint for Amazon S3 Tool
To mount an Amazon S3 bucket on your Debian/Ubuntu Linux machine, we use an AWS tool called Mountpoint for Amazon S3 (the command is mount-s3
). First, you need to download and install the mount-s3
package. Here are the steps:
Download the package with wget: Open a terminal and run the following command. It fetches the latest mount-s3.deb file for 64-bit (x86_64) systems.

$ wget https://s3.amazonaws.com/mountpoint-s3-release/latest/x86_64/mount-s3.deb

This command prints messages about connecting and then shows download progress as it saves mount-s3.deb.
Install the package with apt: Use apt-get to install the downloaded file:

$ sudo apt-get install ./mount-s3.deb

After this, the tool is installed.
Verify the installation (optional): You can check that mount-s3 is ready by running:

$ mount-s3 --version

This should print the version (for example, 1.3.1). Seeing the version confirms the tool is installed and runnable.
Tip: On Debian/Ubuntu, apt-get resolves and downloads any missing dependencies automatically. Passing the local file as ./mount-s3.deb (with the leading ./) tells apt-get to install from that file instead of searching the package repositories.
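The optional check above can be made scriptable by testing for the binary before calling it. A small sketch (nothing here is specific to your system):

```shell
# Fail gracefully if mount-s3 is not on PATH yet.
if command -v mount-s3 >/dev/null 2>&1; then
  mount-s3 --version
else
  echo "mount-s3 not found - install the .deb first"
fi
```

This pattern is useful at the top of any automation script that depends on the tool being installed.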
Configuring AWS Credentials
The mount-s3 tool needs AWS credentials to access your S3 bucket. Essentially, the tool acts like an AWS user to read and write objects in the bucket. The most common way to provide credentials on a local machine is the AWS CLI, shown below. (On an EC2 instance, you can instead attach an IAM role with S3 permissions; Mountpoint picks up credentials through the standard AWS credential chain.)
Using AWS CLI (recommended for local VMs):
If you haven’t already, install the AWS Command Line Interface:
$ sudo apt-get install awscli
Run aws configure and enter your AWS access keys and default region. For example:

$ aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]:
Default output format [None]:

This creates a file ~/.aws/credentials with your keys. The mount-s3 tool will automatically use these credentials.
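For reference, the files that aws configure writes look roughly like this (the key values below are placeholders, not real credentials, and the region is just an example):

```ini
# ~/.aws/credentials - written by `aws configure`
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config - also written by `aws configure`
# [default]
# region = us-east-1
```

Keep this file private (it is readable only by your user by default); anyone with these keys can act as your AWS user.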
Mounting Your S3 Bucket
Now that the tool is installed and credentials are set up, you can mount your S3 bucket to a local folder. A typical place to mount is under /mnt or your home directory. For example, to mount at /mnt/my-s3:
Create a mount point folder (if it doesn't exist already):
$ sudo mkdir -p /mnt/my-s3
Run the mount-s3 command: Replace my-s3-bucket with your actual S3 bucket name, and /mnt/my-s3 with your mount folder.

$ sudo mount-s3 my-s3-bucket /mnt/my-s3

This command mounts the bucket and then returns, while Mountpoint keeps running in the background. Afterwards, you can use the folder /mnt/my-s3 to access your S3 files. As the AWS documentation puts it, after mount-s3 the mount directory (like ~/mnt in its example) "now gives you access to the objects in your S3 bucket".
There will usually be no output if it succeeds (the prompt returns without error). If you see no errors, the bucket is mounted. You can check by listing the directory:
$ ls /mnt/my-s3
You should see the top-level objects and “folders” (prefixes) from your S3 bucket, just like a local directory.
Working with the Mounted Bucket
Once the bucket is mounted, you can work with it just like any normal folder. Here are a few things you can do:
List files: Use ls to list the contents of the bucket or its subdirectories.

$ ls /mnt/my-s3
file1.txt  notes/  image.png
Read files: Use commands like cat, less, or a text editor to read objects from the bucket.

$ cat /mnt/my-s3/file1.txt
Copy files to the bucket: You can copy files from your local machine into the S3 bucket. For example:

$ cp ~/localfile.txt /mnt/my-s3/
$ ls /mnt/my-s3
file1.txt  localfile.txt  notes/
This will upload localfile.txt to S3. (Under the hood, cp writes through the mount, and Mountpoint transfers the data to S3.)

Copy files from the bucket: Likewise, copy out from S3 to your local disk:

$ cp /mnt/my-s3/notes/todo.txt ~/Downloads/
This works because Mountpoint makes the bucket look like a regular file system.
Run scripts or programs: If you have executable scripts or programs stored in the bucket, you can run them by invoking them from the mount point. For example, if there's a script /mnt/my-s3/script.sh, you can run bash /mnt/my-s3/script.sh. (This executes the script contents stored in S3, just as if the file were local.)

Remove files: You can delete objects with rm and empty directories with rmdir or rm -r. Note that Mountpoint only permits deletion if you mounted the bucket with the --allow-delete flag. For example:

$ rm /mnt/my-s3/oldfile.log
$ rmdir /mnt/my-s3/oldfolder

This will delete the corresponding object(s) from S3.
In short, most normal file operations (ls, cp, rm, etc.) work on the mounted bucket, subject to how S3 works. Mountpoint maps S3 "folders" (prefixes) and objects to a virtual file system, so it feels like you're just browsing a drive.
Note: While you can do most everyday tasks as on a normal drive, Mountpoint does not support some file-system features, such as symbolic links, changing permissions, or renaming files in place (so mv within the bucket will fail). For basic tasks (reading, writing, copying files), it behaves like a regular directory.
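The mounting steps above can be collected into a small helper script. This is only a sketch: the bucket name and mount directory are whatever you pass in, the /tmp location is arbitrary for illustration, and --allow-delete is included so that deletes through the mount are permitted (omit it for a safer, read-mostly mount):

```shell
# Write a tiny mount helper to /tmp (path chosen only for this sketch).
cat > /tmp/mount-bucket.sh <<'EOF'
#!/bin/sh
# Usage: mount-bucket.sh BUCKET DIR
BUCKET="$1"
DIR="$2"
mkdir -p "$DIR"                            # make sure the mount point exists
mount-s3 --allow-delete "$BUCKET" "$DIR"   # mount; --allow-delete enables rm
EOF
chmod +x /tmp/mount-bucket.sh
sh -n /tmp/mount-bucket.sh && echo "helper script parses OK"
```

sh -n only checks the script's syntax without running it, which is a cheap sanity check before pointing the script at a real bucket.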
Unmounting the Bucket (When You’re Done)
When you finish working with the bucket, you should unmount it to disconnect. Use the umount
command on the mount point. For example:
$ sudo umount /mnt/my-s3
This will unmount the bucket and free the directory. (The AWS documentation likewise shows using umount when done.) After unmounting, /mnt/my-s3 will be an empty directory again.
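If you script the cleanup, it helps to check that the directory is actually mounted before calling umount. A sketch, using the example mount point from above (/mnt/my-s3 is a placeholder):

```shell
DIR=/mnt/my-s3
# /proc/mounts lists every active mount; check whether DIR appears there.
if grep -qs " $DIR " /proc/mounts; then
  sudo umount "$DIR"
  echo "$DIR unmounted"
else
  echo "$DIR is not mounted"
fi
```

This avoids the "not mounted" error you would otherwise get from running umount twice.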
Source: The instructions above follow the official AWS documentation, "Configuring and using Mountpoint" (Amazon Simple Storage Service).
Written by

Sannidhya Srivastava