🚀 My DevOps Journey — Week 6: Bash Got Serious!

After completing some exciting real-world Linux projects last week, Week 6 marked my official entry into Advanced Bash Scripting — and honestly, it flipped the way I think about shell scripts.
I always thought Bash scripting was just about loops, if-else, and maybe some variables. But turns out, I had only scratched the surface. This week was all about discovering what really happens behind the scenes in the shell — and it made me rethink every script I’ve ever written.
🗓️ Day 1: Bash Is Deeper Than I Thought
I began the Advanced Bash Scripting course by Juan Carlos (KodeKloud), and the very first few lessons gave me a mini reality check. It wasn’t hard — in fact, most of it felt surprisingly smooth — but it made me realize how much I had been ignoring the finer details.
For example, I never really cared about the difference between interactive and non-interactive shells. But now I know: when I type commands manually, it’s interactive, and when scripts run through cron jobs or automation, they’re non-interactive. That single difference changes how environment variables behave, which files get sourced, and how prompts work.
Even the shebang line:
#!/usr/bin/env bash
I only knew the plain #!/bin/bash shebang before. Now I understand what it actually does under the hood and how to point it at different interpreters. Using env means the script runs with whichever bash appears first on the PATH, so it behaves consistently no matter which system you're on. Small things, big impact.
Then came something that really blew my mind: the difference between built-in and external commands. I never thought echo and cat could behave differently — but it turns out:
type echo # built-in
type cat # external
Built-ins are handled directly by the shell — no extra process. That means they're faster and more efficient, especially in loops. This was one of those “why didn’t I know this earlier?” moments.
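Here's the kind of quick, unscientific check that drives the point home (env echo forces the external binary, while plain echo stays a builtin; timings will vary by machine):
time for i in {1..1000}; do echo "$i" > /dev/null; done       # builtin: no new process per iteration
time for i in {1..1000}; do env echo "$i" > /dev/null; done   # external: a new process every time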
Later, I learned how to write guard clauses to simplify scripts. Instead of:
if cond1; then
    if cond2; then
        do_stuff
    fi
fi
We can just flip the logic:
[[ ! cond1 ]] && exit 1
[[ ! cond2 ]] && exit 1
do_stuff
Much cleaner.
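For a concrete version of that pattern, here's a hedged sketch (config.yml, /var/app, and do_stuff are all placeholders):
[[ -f config.yml ]] || { echo "config.yml is missing" >&2; exit 1; }
[[ -d /var/app ]] || { echo "/var/app does not exist" >&2; exit 1; }
do_stuff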
I also finally understood that [[ ... ]] is not just a fancy [ ... ] — it's a Bash keyword with more features like regex support and pattern matching. Definitely switching to it from now on.
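For example, the regex matching that plain [ ... ] simply cannot do (the filename and pattern here are just an illustration):
file="report-2024.log"
if [[ $file =~ ^report-[0-9]{4}\.log$ ]]; then
    echo "Matched: ${BASH_REMATCH[0]}"
fi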
One part that was really fun was using strace to trace syscalls and watching how read() works byte by byte. I even followed the lifecycle of a process using PIDs and foreground/background behavior — all of which was theoretical for me earlier but felt real now.
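Roughly the kind of command I mean, assuming strace is installed (the target file is arbitrary and the output is trimmed):
strace -e trace=read,write cat /etc/hostname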
🗓️ Day 2: Redirections, Streams & Mind-Blowing File Descriptors
This day was intense. It felt like entering a different layer of Bash — dealing with stdin, stdout, stderr, and file descriptors.
Sure, I already knew stuff like:
echo "hello" > file.txt
ls -z 2> errors.txt # -z isn't a valid option, so the error message lands in errors.txt
But I never understood things like 2>&1, >&2, or &>. And I'll be honest — this part tripped me up for a while.
I kept confusing:
>&2 : send stdout to stderr
2>&1 : redirect stderr to wherever stdout is going
&> : Bash shortcut to send both streams together
It didn’t help that the order matters so much! I ran a few test cases and finally got it:
# WRONG: stderr still goes to terminal
cmd 2>&1 > out.txt
# RIGHT: both go to out.txt
cmd > out.txt 2>&1
That tiny difference taught me the value of careful redirection. Without understanding this, my scripts were silently failing in the past — and I never knew why.
Then I explored /dev/null, which acts like a black hole:
cmd > /dev/null 2>&1
Super useful for suppressing noisy output — especially in cron jobs or background services.
Next, I got into heredocs:
cat <<EOF
Line 1
Line 2
EOF
Previously I used echo loops to write files — but heredocs make it so much cleaner. I even tested automating file creation on remote servers over SSH using heredocs. No need to nano into remote terminals anymore. Felt like a superpower.
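A minimal sketch of that SSH pattern, assuming key-based access to a host aliased webserver (the host and path are made up):
ssh webserver 'cat > /tmp/notes.txt' <<'EOF'
Maintenance window: Saturday 02:00-04:00 UTC
Ping the ops channel before restarting services.
EOF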
But the real “wow” moment was understanding custom file descriptors. I thought we only had 0, 1, 2. Turns out we can use 3, 4, and more!
exec 3<> file.txt   # open file.txt for both reading and writing on FD 3
read -n 4 <&3       # read the first 4 bytes through FD 3
echo -n "." >&3     # write a "." at the current offset through FD 3
I didn’t expect this to work — but it did. I was able to read and write to a file using FD 3. The whole concept of managing resources efficiently within a script suddenly made a lot more sense.
I also practiced using xargs for cleaner loops:
find . -name '*.log' | xargs rm -f
Much better than writing for loops manually.
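One caveat worth remembering: filenames with spaces can break that pipe, and the null-delimited form that both find and xargs support is the safer variant:
find . -name '*.log' -print0 | xargs -0 rm -f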
And lastly, I understood why we use:
set -o pipefail
It makes a pipeline return a failure exit code if any command in it fails, not just the last one. Before this, I had pipelines that looked like they succeeded but didn't — and debugging them was a nightmare.
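A quick way to see the difference (missing.txt is just a file that doesn't exist):
set +o pipefail
cat missing.txt | wc -l ; echo "exit: $?"   # exit: 0, the failure is hidden
set -o pipefail
cat missing.txt | wc -l ; echo "exit: $?"   # exit: 1, cat's failure propagates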
🗓️ Day 3: Shell Expansions — Surprisingly Easy!
Compared to Day 2, this day felt smooth and easy. I think a big reason is that I had already practiced regex while learning grep, which is one of my favorite Linux commands. So concepts like globs felt instantly familiar.
I explored:
Brace Expansion:
echo {a..c} # → a b c
echo {1..5} # → 1 2 3 4 5
Parameter Expansion:
${var#*/} # Remove shortest prefix
${var%.txt} # Remove suffix
${#var} # Get length
${var/file/data} # Replace substring
Command Substitution:
echo "Today is $(date +%F)"
Globbing Patterns:
ls file?.txt # Match file1.txt, file2.txt
ls file[1-3] # Match file1, file2, file3
ls file[^A-C] # Exclude A to C
Being able to filter files just by using patterns — without even writing loops or using grep — felt like a cheat code. I didn’t struggle much here because regex gave me a strong foundation.
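One combination I want to keep handy: a glob plus parameter expansion for bulk renames (report-*.txt is just an example set of files):
for f in report-*.txt; do
    mv -- "$f" "${f%.txt}.bak"   # strip the .txt suffix, append .bak
done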
🗓️ Day 4: Subshells, Expansions, and Variables That Think
Today was all about tying everything together — especially expansions and special shell variables. And it honestly made me feel like I’ve started thinking like the shell.
Brace expansion made a comeback in more advanced form — like:
echo {a,b}{1,2}
# → a1 a2 b1 b2
Nested braces help automate filename generation or loops. I even tested user generation use-cases with:
echo {DEV,MKT,OPS}{01..04}
Then came command substitution and subshells — and here's where I paused for a while. It wasn't hard, but understanding that:
dir=$(pwd)
runs the command inside $( ) in a subshell (a child process) — so variables set in there don't persist in the parent shell — was a bit of a brain twist. But once I practiced a few scenarios, it became clear.
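A tiny test makes it concrete (the variable name is a throwaway):
count=0
( count=99; echo "inside subshell: $count" )   # prints 99
echo "back in parent: $count"                  # still 0, the change didn't survive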
Then I met the special variables:
$? : last command's exit code
$# : number of arguments
$0 : script name
$@, $* : all arguments
$_ : last argument of the last command
$$ : PID of the current shell
$! : PID of the last background process
$- : current shell flags
IFS : internal field separator
At first, $@ and $* were confusing — but once I practiced looping over them with and without quotes, the behavior made sense. I also learned that IFS defines how strings get split — and that changing it can break or fix your logic depending on how you loop over input.
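A throwaway function shows the difference clearly once the quotes go on:
show_args() {
    for arg in "$@"; do echo "[\$@] $arg"; done   # each argument stays intact
    for arg in "$*"; do echo "[\$*] $arg"; done   # one single string, joined by IFS
}
show_args "Coding Standards" review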
This whole day felt like one giant “Aha!” moment. All the scattered concepts from the past two days — subshells, expansions, file descriptors, quoting — finally came together and clicked.
🗓️ Day 5: Variables, Arrays, Strict Mode & Beyond
Day 5 was a deep dive into Bash variables — and not just how to use them, but how to tame them.
I started with declare, which I used to think was just another way to set a variable. Turns out, it's a lot more. Using flags like -i for integers, -r for read-only, -u for uppercase, and -l for lowercase, I could add actual behavior to variables. I even tried assigning a string to a variable declared with -i, and it reset to 0 — wild!
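Reconstructing those experiments (the variable names are invented, and -u/-l need Bash 4 or newer):
declare -i count=5
count="hello"        # assigning a non-numeric string to an integer variable
echo "$count"        # prints 0

declare -r region="eu-west-1"
# region="us-east-1"   # would fail: region is read-only

declare -u team="devops"
echo "$team"         # prints DEVOPS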
Then came arrays — and I finally understood why scripts that try to loop over strings break when they meet a space. Arrays in Bash are powerful once you use "${array[@]}" with quotes instead of ${array[@]} unquoted. I made that mistake early on and spent 10 minutes wondering why my “Coding Standards” string was splitting in two!
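A minimal reproduction of that mistake:
topics=("Coding Standards" "Git Basics")
for t in ${topics[@]}; do echo "$t"; done     # unquoted: word-splits into 4 pieces
for t in "${topics[@]}"; do echo "$t"; done   # quoted: keeps the 2 elements intact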
I explored both indexed and associative arrays — using custom insertions with slicing, appending using ${#array[@]} as the index, and even using unset to remove specific values. It honestly felt like a mini programming language inside Bash. And the coolest part was a real mini project — a lunch picker script — that I practiced as part of the KodeKloud lab.
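For the associative side, a small sketch of the operations I mean (Bash 4+; the keys and values are invented):
declare -A owners=([web]="alice" [db]="bob")
owners[cache]="carol"    # insert a new key
unset 'owners[db]'       # remove a specific key
echo "${!owners[@]}"     # list the remaining keys
echo "${#owners[@]}"     # count the remaining entries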
Here’s how it worked: I had a file with a list of food places, and each time I ran the script, it would randomly pick one lunch spot, show it, remove it from the list, and save the rest back to the file. This made me understand mapfile, random indexing, unsetting array values, and writing them back using printf. I even handled errors like a missing file or an empty list using a terminate function and custom exit codes. Here's the script:
#!/usr/bin/env bash

declare -a lunch_options

work_dir=$(dirname "$(readlink -f "${0}")")
food_places="${work_dir}/food_places.txt"

readonly FILE_NOT_FOUND=150
readonly NO_OPTIONS_LEFT=180

terminate() {
    local msg="$1"
    local code="${2:-$FILE_NOT_FOUND}"
    echo "$msg" >&2
    exit "$code"
}

if [[ ! -f "$food_places" ]]; then
    terminate "Error: food_places.txt file doesn't exist" "$FILE_NOT_FOUND"
fi

fillout_array() {
    mapfile -t lunch_options < "$food_places"
    if [[ ${#lunch_options[@]} -eq 0 ]]; then
        terminate "Error: No food options left. Please add options to food_places.txt" "$NO_OPTIONS_LEFT"
    fi
}

fillout_array

index=$(( RANDOM % ${#lunch_options[@]} ))
chosen="${lunch_options[$index]}"
echo "$chosen"
unset 'lunch_options[index]'

update_options() {
    if [[ ${#lunch_options[@]} -eq 0 ]]; then
        : > "$food_places"
    else
        printf "%s\n" "${lunch_options[@]}" > "$food_places"
    fi
}

update_options
Strict mode (set -euo pipefail) tied everything together. I’ve started adding it to all my scripts now — it forces me to be careful with variables and command flow. I also learned to use the no-op command : inside empty conditionals and how to structure reusable terminate() functions for logging and custom exit codes.
I ended the day learning about ISO 8601 timestamped logging using date -u +"%Y-%m-%dT%H:%M:%SZ", which felt like adding a professional signature to my scripts.
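Putting those pieces together, this is the kind of skeleton I've started copying into new scripts (a personal template, not the course's exact version):
#!/usr/bin/env bash
set -euo pipefail

log() {
    echo "$(date -u +"%Y-%m-%dT%H:%M:%SZ") $*"
}

terminate() {
    log "ERROR: $1" >&2
    exit "${2:-1}"
}

log "script started"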
🗓️ Day 6: Text Wizards — AWK & SED in Action
Day 6 was all about awk and sed, and it felt like learning wizard spells for the command line.
Awk really impressed me — it's like a mini programming language built into the shell. I started with the basics like $1, $2, NR, and NF, but quickly moved on to full scripts. The way awk automatically splits fields and lets you access them by $1, $2, etc., made parsing structured text feel incredibly easy.
At first, I thought awk was just for printing columns. But then I learned about built-in variables, the -F option to change field separators, and the -v flag to pass external variables into awk programs. That opened up so many use cases — like filtering employees with salaries above 90K using just one-liners!
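A hedged reconstruction of that kind of one-liner; employees.csv and its column order (name, department, salary) are assumptions, not the lab's exact file:
awk -F',' -v limit=90000 '$3 > limit { print $1, "earns", $3 }' employees.csv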
By the time I was writing BEGIN {} blocks and even saving reusable scripts into .awk files with shebang lines, awk stopped feeling like a mystery and more like a friend.
I also wrote Bash–awk hybrid scripts — which let me use shell features (like CLI arguments) and still write logic in awk. This gave me so much flexibility, especially while building a salary filtering tool that grouped employees needing pay raises and high earners, dynamically using thresholds passed from the command line.
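Something along these lines, again with a made-up employees.csv of name,dept,salary and the threshold passed as the first argument:
#!/usr/bin/env bash
set -euo pipefail

threshold="${1:?Usage: $0 <salary-threshold>}"

awk -F',' -v limit="$threshold" '
    $3 >  limit { print "high earner:   " $1 }
    $3 <= limit { print "needs a raise: " $1 }
' employees.csv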
And just when I thought I had enough tools, along came sed — and I realized I’ve been using it for years without understanding its power.
The s/foo/bar/ substitution syntax was familiar, but I finally understood the flags, like g for global replace, and how to target only specific lines using addresses. The -n option was another lightbulb moment — printing only what I want, without duplicating output.
Using sed '2d' to delete the second line or sed -n '/Manager/p' to print only lines containing “Manager” was surprisingly elegant. I even tested in-place editing with -i and tried inserting new lines with i and deleting blocks with 3,6d.
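For my own reference, the handful of sed forms from that session against a made-up employees.txt (the .bak suffix on -i just keeps a backup copy):
sed -n '/Manager/p' employees.txt                        # print only lines containing Manager
sed '2d' employees.txt                                   # delete the second line
sed '3,6d' employees.txt                                 # delete a block of lines
sed -i.bak 's/Contractor/Consultant/g' employees.txt     # in-place edit, keeping employees.txt.bak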
What struck me most was how awk and sed complement each other. Awk is amazing for field-based processing, while sed shines at pattern matching and stream editing. Together, they’re unstoppable.
💡 Final Thoughts
This week completely changed how I write Bash scripts. Before this, I was just happy if a script worked. Now I care if it fails gracefully, handles inputs correctly, and if it’s efficient and readable.
I’ve stopped blindly writing loops or redirects. I’ve started quoting variables properly, using guard clauses, and thinking about stream handling from the start. I even began using set -e, set -u, and set -o pipefail in all my scripts to enforce stricter, safer execution.
I still ask questions when I get stuck, and I continue to maintain handwritten notes to summarize concepts in my own words. That combo of practice + curiosity really helped this week.
And next time I see 2>&1, I won’t panic. 😄
🔭 What’s Next?
Up next, I’m starting my AWS journey in parallel, while also planning to:
Deepen my knowledge in Git with more advanced use-cases
Complete the Python fundamentals required for automation and tooling
Explore real-world use-cases combining Bash, Git, and Python
🔗 Check out my repo and notes:
📌 GitHub: GitHub
✍️ LinkedIn: Anandhu P A