My DevOps Journey - Week 3: From File Manipulation to Git Workflows, systemd, Packaging and More

This week was a major level-up. I didn't just learn commands; I lived through real-world Linux admin and DevOps tasks. From scripting automation to SSL, rsync, systemd, source code compilation, and Git branching workflows, every day was packed. Here's my day-by-day journey.
Day 1: File Comparison, Pagers & Vim
I kicked off the week learning core Linux utilities for comparing and manipulating file contents. Tools like diff, cmp, comm, and wc helped me understand differences between files and analyze content quickly. I worked with columns using cut and paste, sorted them with sort, and removed duplicates with uniq. Most of these commands were already familiar to me from the Linux basics I covered in Weeks 1 and 2, but I revisited them as part of my LFCS journey.
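To make that concrete, here is the kind of quick comparison and column exercise I practiced (the file names are just placeholders):
diff file1.txt file2.txt              # show line-by-line differences
cmp file1.txt file2.txt               # report the first byte where the files differ
comm -12 <(sort a.txt) <(sort b.txt)  # lines common to both files (comm needs sorted input)
cut -d',' -f1 data.csv | sort | uniq  # extract a column, sort it, drop duplicates
paste names.txt scores.txt            # join two files side by side, column by column
wc -l /etc/passwd                     # count lines in a file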
I also came across some new commands: tac, tail, head, and sed. Then I explored pagers like less and more, learning how to scroll and search inside long logs. I practiced /search, n, N, and quitting with q.
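A few one-liners of the sort I played with while getting used to these (file names are placeholders):
head -n 5 /var/log/syslog     # first 5 lines
tail -n 20 /var/log/syslog    # last 20 lines
tac notes.txt                 # print a file in reverse line order
sed 's/warn/WARN/g' app.log   # substitute text on the fly
less /var/log/syslog          # then /error to search, n/N to jump between hits, q to quit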
My comfort with Vim also grew: I understand the modes better now, navigate with hjkl, edit using dd, yy, and p, search for keywords with /search, and save with :wq. It clicked faster because I already knew vi, and revisiting it made everything so much clearer.
Day 2: grep, Regex, Archiving, Backups, Compression, Redirection
This was a beast of a day. I started with grep and searched through logs and config files using flags like -i, -v, -r, -w, and -o. I found commented lines using:
grep '^#' /etc/ssh/sshd_config
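To keep those flags straight, here is roughly how each one behaves (the paths and patterns are only examples):
grep -i 'error' /var/log/syslog              # case-insensitive match
grep -v '^#' /etc/ssh/sshd_config            # invert: hide commented lines
grep -r 'PermitRootLogin' /etc/ssh/          # search a directory recursively
grep -w 'ssh' /etc/services                  # match whole words only
grep -o 'Failed password' /var/log/auth.log  # print only the matched text, not the whole line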
Then came basic and extended regular expressions. With grep -E, I filtered IP-address-like patterns, email patterns, and used:
grep -E '\b[0-9]{1,3}(\.[0-9]{1,3}){3}\b' /var/log/syslog
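For the email side, a rough pattern along these lines works (contacts.txt is a placeholder, and the regex is deliberately loose rather than RFC-perfect):
grep -E '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}' contacts.txt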
I used tar extensively:
tar -cvf backup.tar /etc/
tar -xvf backup.tar
For compression:
gzip --keep logs.txt
bzip2 file.log
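And the reverse direction, in case the originals are needed back (gzip produces .gz files, bzip2 produces .bz2):
gunzip --keep logs.txt.gz   # restore logs.txt and keep the compressed copy
bunzip2 -k file.log.bz2     # restore file.log and keep the compressed copy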
I also used rsync for remote backups:
rsync -av /home/user/ user@192.168.1.10:/backup/
And used dd to create a raw image of a disk:
sudo dd if=/dev/sda of=backup.img bs=1M status=progress
I mastered redirection with combinations like:
grep 'error' /var/log/syslog > matches.txt 2>&1
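A couple more variants worth keeping in mind (file names are placeholders):
grep -r 'ssh' /etc > matches.txt 2> errors.txt   # stdout and stderr to separate files
grep -r 'ssh' /etc 2> /dev/null                  # show matches, silence permission errors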
and here-docs:
cat <<EOF > message.txt
This is line 1
This is line 2
EOF
The grep command kept me thinking for hours. I practiced it as much as I could because I found it so interesting to see all the patterns I could pull out of huge files. Since then, I try to use grep even for small exercises to filter out exactly what I need. It has made my exercise solving faster and more fun as I keep discovering new patterns. So far, grep is my favorite command in Linux.
Day 3: SSL Certificates, Git Workflows & Remotes
I explored TLS/SSL and created CSRs using openssl req, and even generated self-signed certificates with:
openssl req -x509 -newkey rsa:4096 -keyout my.key -out my.crt -days 365 -nodes
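Since the CSR step itself isn't shown above, here is a minimal sketch of generating one (the file names and key size are placeholder choices):
openssl req -new -newkey rsa:2048 -keyout server.key -out server.csr -nodes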
I verified cert details with:
openssl x509 -in my.crt -text -noout
I already knew these commands from my TLS/SSL learning in Weeks 1 and 2, but now I understand what each option in the command actually means, which gives me a much more technical grasp of TLS certificates.
Git was a major part of this day:
- Set up global config (this was new for me):
git config --global user.name "Anandhu"
- Initialized a repo:
git init
- Added and committed files:
echo "Hello World" > index.html
git add index.html
git commit -m "Initial commit"
Most of this was already familiar to me, but working with a cloud repository like GitHub through the CLI was new.
- Removed files with git rm, then created and merged branches with:
git checkout -b dev
# edit file
git commit -am "change"
git checkout master
git merge dev
- Configured GitHub SSH keys with ssh-keygen and pushed with git push origin master, roughly as sketched below.
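A minimal sketch of that key setup and push, assuming the GitHub repository already exists (the email, key type, and repository URL are placeholders):
ssh-keygen -t ed25519 -C "you@example.com"          # generate a key pair under ~/.ssh/
cat ~/.ssh/id_ed25519.pub                           # copy this into GitHub -> Settings -> SSH and GPG keys
git remote add origin git@github.com:user/repo.git  # point the local repo at GitHub over SSH
git push origin master                              # push the local branch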
Day 4: Boot Process, Operating Modes, Scripting & Services
I learned how to safely shut down and reboot:
shutdown +5
reboot
I changed the system mode using:
systemctl isolate rescue.target
I also learned about some other modes like graphical.target, multi-user.target, and emergency.target.
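For reference, these related commands check and change which target the system boots into (assuming a systemd-based distro):
systemctl get-default                          # show the current default target
sudo systemctl set-default multi-user.target   # make the text-mode target the default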
I wrote Bash scripts like:
#!/bin/bash
# the shebang line above tells the system to run this script with bash
if [ -f /var/log/syslog ]; then
  echo "Syslog exists"
fi
#!/bin/bash
# log date/time of execution
date >> /tmp/script.log
# log Linux kernel version
cat /proc/version >> /tmp/script.log
While learning this, my interest in Bash scripting grew a lot, and I am planning to dive deeper into Bash scripting as soon as I complete my LFCS journey.
I created a custom systemd service:
[Unit]
Description=My Application
After=network.target auditd.service
[Service]
ExecStartPre=/bin/echo "Systemd is preparing to start MyApp"
ExecStart=/usr/local/bin/myapp.sh
KillMode=process
Restart=always
RestartSec=1
Type=simple
[Install]
WantedBy=multi-user.target
I already knew how to create a service, but now I know many more options that can be added in the service file itself, like After, KillMode, Restart, RestartSec, Type, etc.
Enabled it using:
sudo systemctl enable myscript
sudo systemctl start myscript
Used journalctl -u myscript to check logs.
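A quick health check I'd pair with the logs (myscript is the unit created above):
systemctl status myscript    # is the service active, and what did it last report?
journalctl -u myscript -f    # follow new log lines as the service writes them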
Day 5: Processes, Logs, Cron, at, Anacron
I managed processes with:
ps aux
kill -9 1234
nice -n 10 myscript.sh
Checked logs via:
journalctl -xe
journalctl --since "1 hour ago"
I scheduled recurring tasks using cron:
crontab -e
0 5 * * * /usr/local/bin/backup.sh
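Reading that line back: the five fields are minute, hour, day of month, month, and day of week, so this runs the backup script at 05:00 every day. To double-check what is scheduled:
crontab -l   # list the current user's cron jobs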
I also did some basic exercises with cron and anacron, so scheduling now feels easy to me, I guess. I don't know whether there are more advanced topics in cron and anacron, but from what I've learned, it wasn't that hard.
One-time tasks using at:
echo "reboot" | at now + 1 minute
Persistent jobs using anacron:
echo "1 5 dailyjob /usr/local/bin/dailytask.sh" >> /etc/anacrontab
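For reference, the fields in that anacrontab line break down as follows:
# 1                           -> period: run the job every 1 day
# 5                           -> delay: wait 5 minutes after anacron starts
# dailyjob                    -> job identifier used in anacron's timestamp files
# /usr/local/bin/dailytask.sh -> the command to run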
Learning about logs was kind of boring for me and a little complex, because I had to filter what I needed out of a huge bulk of data. It was interesting but also concerning: how would I filter out what I actually need? After practicing more and more, it's still not perfect, but I am getting the hang of it.
Day 6: Package Managers, PPAs, Source Code, Disk Usage
I explored APT and RPM package managers:
apt install nginx
yum install httpd
dnf install vim
Searched and gathered metadata:
apt search apache
apt show nginx
Configured PPAs:
add-apt-repository ppa:deadsnakes/ppa
apt update
Built from source:
wget http://example.com/pkg.tar.gz
tar -xzf pkg.tar.gz
cd pkg
./configure
make
sudo make install
Verified disk space using:
df -h
du -sh * | sort -h
Day 7: Writing and Reflecting
Today was about stepping back and looking at how far I've come.
I spent most of the time writing this blog, updating my GitHub repo, and preparing my LinkedIn post. While doing that, I revisited the shell scripts I wrote, the services I created, the Git workflows I practiced, and even some of the backup strategies I tried with rsync and dd.
What hit me the most was this: just a week ago, I was learning basic YAML and JSON files, basic navigation in Linux, permissions, and searching for files in a mediocre way without any grep commands. Now I've compiled software from source, configured PPAs, scheduled jobs with cron, at, and anacron, created systemd services from scratch, and learned advanced filtering with grep. I've gone from reading logs to truly analyzing them with journalctl and less.
It's not about memorizing commands anymore. It's about understanding how things fit together: how a service boots, how a system automates, how a developer pushes and merges code. That's what makes this week feel different. It was practical, real, and deeply empowering.
Can't wait to see what Week 4 has in store.