Day #4 - Shell Scripting Project: Listing AWS Resources with Shell Scripts


Hey, everyone! Hope things have been going well. And... hope this blog isn’t arriving too soon after the last one — but when something clicks, it’s better to keep going, right?

In our previous blog, we spent some time learning how to talk to the Linux shell — how to pass it arguments, guide it through conditionals, and teach it to loop when needed. It was a foundational step, and we ended with something simple, yet powerful: a script that could react to user input and behave logically.

In this post, we’ll write a working shell script that can talk to your AWS account. Specifically, we’ll teach it to list the active resources in a given region, for a given AWS service — like EC2 or S3.

This kind of script might seem small, but in DevOps teams, it's quite common — often set to run as a cron job, sending a daily email report to keep track of what’s running (and costing money). And it’s a practical way to see how your bash knowledge meets the cloud.
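In case you're curious what that scheduling looks like, here's an illustrative crontab entry (the path, schedule, and log location are placeholders, not part of this project):

0 8 * * * /home/devops/scripts/aws_resource_list.sh us-east-1 ec2 >> /home/devops/aws_report.log 2>&1

This would run the script every morning at 8:00 and append whatever it prints to a log file.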

But beyond just writing a working script, there’s a bigger reason for doing this.

There’s often a gap between reading about concepts and knowing how to apply them. Tutorials are great, but it’s the hands-on projects — even small ones — that really help everything come together. Real confidence comes from building things, breaking them, and understanding how they work.

That’s what this blog is about.

We’ll take it slow, and every command will be explained just like before — carefully and clearly, with the same mindset: “I’m figuring this out, and I’ll explain what made sense to me.”

Oh — and if you’d like to try this out yourself, I’ll be uploading the complete script to my GitHub page. That way, you can run it, tweak it, or just explore it alongside this blog.

Let’s get started.

Starting with the Script Header

We begin the script the way most good scripts begin — with a header that explains what this file is, who wrote it, what it supports, and how it should be used.

#!/bin/bash

# This script lists all active resources for a specified AWS service in a given region
# Author: Arnab / DevOps Team
# Version: v0.1

# Supported AWS services (only these are allowed as input):
#   - ec2
#   - s3
#   - rds
#   - dynamodb
#   - iam
#   - cloudformation
#   - vpc
#   - lambda
#   - sns
#   - sqs

# Usage:
#   ./aws_resource_list.sh <region> <service_name>
#
# Example:
#   ./aws_resource_list.sh us-east-1 ec2

We’re setting the tone here — not for the machine, but for the human reading the script later. A teammate. Or a future version of yourself.

  • The shebang tells the system which shell to use

  • The author line offers a point of contact

  • The version helps track changes

  • And the usage instructions make it clear how this is meant to be run

Just like we discussed in the last blog, this kind of structure brings clarity — and prevents future confusion.

Note: This script expects two values to be passed when executing it:

  • The AWS region (like us-east-1 or ap-south-1)

  • The name of a supported AWS service (like ec2, s3, or rds)

Any unsupported or missing input will be handled with validation checks later in the script.

This small note helps avoid guessing and sets up the next part of the blog, where those inputs are validated carefully.

Step 1: Validating Inputs

This part checks whether the user has provided the correct number of inputs when running the script.

Let’s say the script is named aws_resource_list.sh, and someone runs it like this:

./aws_resource_list.sh us-east-1 ec2

Here, two pieces of information are passed:

  1. us-east-1 (the region)

  2. ec2 (the service name)

That’s exactly what the script needs.
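Inside the script, these two inputs arrive as positional parameters, which the rest of the script will refer to:

$1   # the first argument: the region (us-east-1)
$2   # the second argument: the service name (ec2)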

Let’s imagine someone runs the script like this:

./aws_resource_list.sh ec2

They forgot to add the region. What happens now? If we didn’t check for this, the script would break in unexpected ways. So, the first thing we do is validate the number of arguments.

if [ $# -ne 2 ]; then
  echo "Usage: $0 <region> <service_name>"
  exit 1
fi

Let’s now understand the code line by line:

"$#" counts how many inputs were passed to the script

-ne 2 means “not equal to 2”
→ So the line [ $# -ne 2 ] is asking:

“Did the user pass something other than two inputs?”

If yes, then it prints a helpful message showing the correct way to run the script.
$0 simply means “this script’s name,” so whatever the file is called, the message will still be accurate.

And finally:

exit 1

This stops the script immediately and marks it as an error.
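Here’s what that looks like in practice, assuming the script is saved as aws_resource_list.sh and made executable:

$ ./aws_resource_list.sh ec2
Usage: ./aws_resource_list.sh <region> <service_name>
$ echo $?
1

The echo $? afterwards prints the exit status of the last command, confirming the script stopped with code 1.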

Step 2: Checking If AWS CLI Is Installed

Before the script can list any AWS resources, it needs a way to communicate with AWS.

Now technically, there is a way to do that manually — using something called the AWS API. But let’s be honest... it’s not easy.

To use the API directly, a script would need to:

  • Write special HTTP requests using tools like curl

  • Attach authentication tokens and headers

  • Format everything in raw JSON

  • Handle responses manually

  • And repeat all of this for each AWS service (EC2, S3, etc.)

That’s not beginner-friendly — and honestly, not fun either.

Thankfully, there’s a better way: the AWS CLI

The AWS CLI (Command Line Interface) is a tool provided by AWS that handles all of this complexity for us.
Instead of building complicated API requests, we can just write simple commands that tell AWS:

“Please show me all EC2 instances in the us-east-1 region.”

No need to worry about tokens, headers, or low-level details — the AWS CLI takes care of it behind the scenes.
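For example, that plain-English request becomes a single CLI command:

aws ec2 describe-instances --region us-east-1

One line; the CLI signs the request, sends it, and prints the response for us.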

So why check if the CLI is installed?

Because if someone runs the script on a machine that doesn’t have the AWS CLI installed, it will break. And the error they get might not be clear.

So it’s better to check right at the beginning — and give a helpful message if the tool is missing.

if ! command -v aws &> /dev/null; then
  echo "AWS CLI is not installed. Please install it and try again."
  exit 1
fi

What’s going on here?

  • command -v aws

This checks whether a command named aws exists on your system.

command itself is a shell built-in that tells you where a command lives.

If aws is installed, it prints the path (like /usr/bin/aws) and exits successfully.

If not, it prints nothing and exits with a non-zero status, which is exactly what the if test reacts to.

  • ! command -v aws

The ! in shell means “not” — so this means:

“If the aws command is not found…”

  • &> /dev/null

This part is used to hide the output.

Normally, if aws is found, the command prints its path to the terminal.

But we don’t want to clutter the output, so we send both stdout and stderr to /dev/null, a special “black hole” file that discards whatever is written to it.

  • then

If aws is not found, we go into the then block to take action.

  • echo "AWS CLI is not installed. Please install it and try again."

This prints a friendly error message to the user so they know what’s wrong.

  • exit 1

This stops the script right there.

exit 1 means the script failed (by convention, 0 means success, any other number means error).
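If you’d like to see the check by hand, try this in a terminal (the path shown is just one common install location; yours may differ):

$ command -v aws
/usr/local/bin/aws
$ command -v aws &> /dev/null && echo "found" || echo "missing"
found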

This step is more than just a formality — it saves a lot of time later by failing fast if the environment isn’t ready.

Step 3: Verifying That AWS CLI Is Configured

Even if the AWS CLI is installed, that’s only part of the setup.
It also needs to be configured with credentials — otherwise, it won’t be able to connect to the AWS account.

When someone runs the command aws configure, the CLI asks for a few important details: the access key, the secret access key, the default region (like us-east-1), and an output format (such as JSON or plain text).

After this setup, the CLI quietly saves all that information inside a hidden folder located at ~/.aws. This folder is where the AWS CLI looks each time it runs a command — it uses the credentials stored there to authenticate and connect to the account.
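On a configured machine, that hidden folder typically holds two small text files (your values will differ, but the layout looks like this):

~/.aws/
├── config        # default region and output format
└── credentials   # access key ID and secret access key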

That’s why the script needs to check if this folder actually exists before continuing.

Here’s the check:

if [ ! -d ~/.aws ]; then
  echo "AWS CLI is not configured. Please configure it and try again."
  exit 1
fi

Let’s break this down in plain terms.

The -d test checks whether a path exists and is a directory, so [ ! -d ~/.aws ] is true when the .aws folder is missing from the current user’s home directory.
If it’s not there, that most likely means the user hasn’t run aws configure yet, and the CLI doesn’t have any credentials to use.

In that case, the script prints a clear message explaining the problem and stops right away using exit 1.

This might feel like a small step, but it avoids a lot of confusion later. Instead of letting the script continue and fail unpredictably, it guides the user early — before anything breaks.

It’s the same principle as before: don't assume everything is set up perfectly. Check it gently, explain what’s wrong if needed, and exit with care.
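One honest caveat: checking for the folder only proves that aws configure was run at some point, not that the credentials still work. As an optional, stricter variation (not part of our script), you could ask AWS who the current caller is; this call fails fast if credentials are missing or expired:

if ! aws sts get-caller-identity &> /dev/null; then
  echo "AWS credentials are missing or invalid. Please run 'aws configure' and try again."
  exit 1
fi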

Step 4: Deciding What to Do (Based on Service Name)

Now comes the heart of the script: choosing what to do based on which AWS service the user wants to check.

Let’s say the script is run like this:

./aws_resource_list.sh us-east-1 ec2

This tells the script two things:

  • Use the us-east-1 region

  • And check the EC2 service

At this point, the script needs to decide what action to take. If the service is ec2, it should run a command that lists EC2 instances. If it’s s3, it should list S3 buckets. If it’s rds, it should describe RDS databases.

Each AWS service has its own CLI command, and we need a way to match the user’s input to the correct one.

This is where the case statement comes in — it’s a built-in shell structure that works like a smart decision tree.

It checks a value (in our case, the second argument: $2) and runs the block of code that matches. It’s cleaner than writing a bunch of if-else conditions, and it’s easier to manage as the number of supported services grows.

We’ll also make sure to handle unsupported services — if the user types something invalid, the script will gently say, “This isn’t a supported option,” instead of just breaking.

Let’s look at how the case block is used in the next section.

case $2 in
  ec2)
    aws ec2 describe-instances --region "$1"
    ;;
  s3)
    aws s3 ls --region "$1"
    ;;
  rds)
    aws rds describe-db-instances --region "$1"
    ;;
  dynamodb)
    aws dynamodb list-tables --region "$1"
    ;;
  iam)
    aws iam list-users
    ;;
  cloudformation)
    aws cloudformation describe-stacks --region "$1"
    ;;
  vpc)
    aws ec2 describe-vpcs --region "$1"
    ;;
  lambda)
    aws lambda list-functions --region "$1"
    ;;
  sns)
    aws sns list-topics --region "$1"
    ;;
  sqs)
    aws sqs list-queues --region "$1"
    ;;
  *)
    echo "Invalid service name. Please choose a supported service."
    exit 1
    ;;
esac
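And if someone passes a service the script doesn’t support, the *) wildcard branch catches it:

$ ./aws_resource_list.sh us-east-1 redshift
Invalid service name. Please choose a supported service.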

Why not use if and else?

Technically we could, but here’s the thing:

  • A long if/elif chain has to spell out a full test ([ "$2" = "ec2" ], [ "$2" = "s3" ], and so on) for every service, which gets repetitive fast

  • With case, each branch is just the pattern itself, which reads more cleanly (see the sketch after this list)

  • It’s also easier to scale as you add more services later
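For comparison, here’s roughly what just two of those branches would look like as an if/elif chain (a sketch for illustration, not part of the final script):

if [ "$2" = "ec2" ]; then
  aws ec2 describe-instances --region "$1"
elif [ "$2" = "s3" ]; then
  aws s3 ls --region "$1"
else
  echo "Invalid service name. Please choose a supported service."
  exit 1
fi

Multiply that by ten services and the case version starts looking a lot friendlier.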

Each aws command above returns a list of resources for the selected region. The one exception is IAM: it’s a global service, so its command takes no --region flag. Nothing fancy for now; we’re keeping it clean and minimal.

And yes — if you don’t remember all these AWS commands, you don’t have to. That’s what documentation is for.

You Don’t Need to Memorize These Commands

There’s absolutely no need to memorize every AWS CLI command.
Honestly — nobody does.

AWS provides excellent official documentation called the AWS CLI Command Reference.

It’s one of the most useful resources available — always up to date, and full of clear examples.

So whenever there’s a question like:

“How do I list all S3 buckets?”
“What’s the command to describe Lambda functions?”

Just look it up, copy what’s needed, and adapt it to the script.

That’s not a shortcut — that’s the standard way of working, even for experienced teams.

Script Permissions Matter

Before wrapping up, here’s one final touch — and it’s an important one.

In real-world environments, scripts aren’t just run once and forgotten.
They’re often shared, reused, and scheduled. That means it’s worth thinking about who can run, read, or change the script.

Sometimes, the goal is to let others execute the script — but not necessarily read or modify it.

That’s where file permissions come in. For example:

chmod 771 aws_resource_list.sh

This command changes who can do what with the script:

Role      Number   Meaning
Owner     7        read, write, execute
Group     7        read, write, execute
Others    1        execute only

Each digit is a sum of permission values: read = 4, write = 2, execute = 1. So 7 = 4 + 2 + 1 (full access), while 1 is execute only.

In simple terms:

  • The script’s owner and group (usually the DevOps team) can read, edit, and run it

  • Everyone else can run it — but can’t view or edit the code
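You can confirm the result with ls -l (the owner, group, size, and timestamp below are placeholders):

$ chmod 771 aws_resource_list.sh
$ ls -l aws_resource_list.sh
-rwxrwx--x 1 arnab devops 1284 Jun 10 09:30 aws_resource_list.sh

That -rwxrwx--x string is 771 spelled out: rwx for the owner, rwx for the group, and --x for everyone else.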

This might not always be needed, but when working in shared environments, it’s a great habit.

It shows that the script wasn’t just written to work — it was written to be used responsibly.

Wrapping Up

This wasn’t a complex script, but it covered a lot of important ground.

It began with a simple goal: list active resources in AWS. But along the way, it became clear how much thought goes into writing even a basic automation script that others might rely on.

Every step — from checking command-line input, to validating the environment, to handling services safely — served a purpose. Not just to make the script work, but to make it reliable, predictable, and usable by anyone on the team.

In many teams, scripts like this are quietly running every night, saving time and money without much attention. That’s what makes this small project meaningful. It’s a glimpse into the kind of behind-the-scenes work that keeps systems efficient and teams informed.

This script may change over time — with filters, logs, email reports, or integrations — but the approach will remain the same: simple, clear, and dependable.

Sometimes that’s all a script needs to be.

That’s what this was about.

Thanks for reading — and see you in the next one.
