My DevOps Journey – Week 4: From Kernel Tuning to Network and Storage Mastery

Day 1: Kernel Tweaks and SELinux Madness Begins
Week 4 began with the kind of deep, under-the-hood Linux stuff that makes you feel like you're really learning systems.
I started with tweaking kernel runtime parameters using `sysctl`. It amazed me how many low-level settings you can control.
```shell
# Temporarily:
sudo sysctl -w vm.swappiness=20
# Persistently:
echo "vm.swappiness=20" | sudo tee /etc/sysctl.d/swap-tweak.conf
sudo sysctl -p /etc/sysctl.d/swap-tweak.conf
```
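The same parameters are also exposed as files under `/proc/sys` (dots in the key become slashes), which is a handy root-free way to double-check a value — a small sketch, assuming a standard Linux `/proc`:

```shell
# sysctl keys map to paths under /proc/sys (vm.swappiness ->
# /proc/sys/vm/swappiness), so the current value can be read
# without root:
cat /proc/sys/vm/swappiness
```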
I found the grouping (`vm.`, `net.`) helpful, and `sysctl -a | grep` became my go-to for exploring.
Then came SELinux: powerful, confusing, and huge. Commands like `ls -Z`, `ps -eZ`, and `audit2allow` introduced me to security contexts and policy generation.
```shell
sudo audit2allow --all -M mymodule
sudo semodule -i mymodule.pp
```
What tripped me up was understanding why something was blocked. I stared at logs, wondering: what was the process trying to access? I couldn't shake the feeling that I'd have to know every internal link between files and services to write useful policies. It made me wonder whether deep-diving into SELinux is worth it for DevOps, but something tells me I'll revisit this later with a stronger foundation.
Day 2: Containers, VMs, and User Management
I launched my first Nginx container:
```shell
docker run -d -p 8080:80 --name mywebserver nginx
```
That's when it clicked: images are the blueprint; containers are the live, running instances. Then I made a custom image:
```dockerfile
FROM nginx
COPY index.html /usr/share/nginx/html/index.html
```
I moved on to VMs using `virsh` and `virt-install`, working with both cloud images and ISO-based installs. Booting into a console OS install made me feel like a real sysadmin.
Practicing user creation (`adduser`, `usermod`, `chage`) helped me understand `/etc/passwd` and `/etc/shadow`. I tweaked default user environments via `/etc/skel` (including its `.bashrc`) and added a script under `/etc/profile.d` to echo login times system-wide. These small touches felt powerful.
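As an example, a tiny script dropped into `/etc/profile.d` (the filename and message below are my own illustration, not from the course) gets sourced by every login shell:

```shell
# Hypothetical /etc/profile.d/login-time.sh: scripts in this
# directory are sourced by login shells, so this prints the
# login time for every user on the system.
echo "Logged in at $(date '+%Y-%m-%d %H:%M:%S')"
```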
Day 3: Limits, Privileges, Root Access, LDAP Fog, and Network Services
I started setting user resource limits in `/etc/security/limits.conf`:

```
trinity soft nproc 10
trinity hard nproc 20
* soft cpu 5
```
At first, I didn't get why only the soft limit seemed to matter. It turns out the soft limit is what's actually enforced, and a user can raise it themselves up to the hard limit; PAM (via `pam_limits`) applies these settings at login.
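A quick, root-free way to see both values from a shell is the `ulimit` builtin, shown here for the max-processes limit that the `nproc` entries control:

```shell
# Soft limit: the user can raise this themselves, up to the hard limit
ulimit -Su
# Hard limit: a ceiling that only root can raise
ulimit -Hu
```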
Then came `sudo`. I added specific command-level permissions in `/etc/sudoers`:

```
trinity ALL=(ALL) /bin/ls, /usr/bin/stat
```
I also learned the hard truth: lock the root account without a sudo-capable user, and you're locked out for good.
LDAP was my biggest struggle. I imported an LXC container, configured `libnss-ldapd`, and set up `nslcd.conf`, but the server-side user creation was hidden away. I couldn't understand where users like john and jane were coming from. Eventually, I got that:

- NSS tells Linux where to fetch user data.
- `nslcd` is the LDAP client daemon that talks to the LDAP server.
- PAM is what enables home directory creation and soft limit enforcement during login.
It still feels murky, and I know I'll need to build my own LDAP server soon to clear it all up.
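One thing that helped me reason about the NSS piece: `getent` resolves names through the same chain configured in `/etc/nsswitch.conf`, so it's a quick way to check whether a user — local or LDAP — actually resolves:

```shell
# getent consults the NSS sources in /etc/nsswitch.conf, so it
# returns users from every configured backend (files, LDAP, ...),
# not just /etc/passwd. Querying root works on any system:
getent passwd root
```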
Networking was hands-on: I learned to manually assign IPs with `ip addr` and make them persistent via Netplan:

```yaml
ethernets:
  enp0s8:
    addresses:
      - 10.0.0.9/24
      - fe80::abcd/64
    dhcp4: false
    nameservers:
      addresses: [8.8.8.8, 1.1.1.1]
```
I also added local DNS entries in `/etc/hosts` and tested with a simple `ping dbserver`.
Checked open ports with:

```shell
sudo ss -ltunp
```
Then tied those ports to systemd services:

```shell
sudo systemctl stop mariadb
systemctl status ssh
```

I'm finally getting comfortable managing the flow between processes, ports, and services.
Day 4: Bridging, Bonding, Firewalls, NAT, Load Balancers, and Fatigue
Bridging vs. bonding was a surprisingly interesting concept. Bridging connects multiple interfaces into a virtual switch, great for linking two networks. Bonding merges interfaces into one logical unit for higher throughput or fault tolerance.
Using Netplan, I configured both a bridge (`br0`) and a bond (`bond0`). Bonding mode 1 (active-backup) was my favorite: it gives redundancy without load-balancing complexity.
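As a sketch, an active-backup bond in Netplan looks roughly like this — the interface names and address are hypothetical, not from the course:

```yaml
# Hypothetical Netplan fragment; mode 1 (active-backup) keeps one
# NIC carrying traffic while the other stands by for failover.
bonds:
  bond0:
    interfaces: [enp0s8, enp0s9]
    addresses: [10.0.0.9/24]
    parameters:
      mode: active-backup
      primary: enp0s8
```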
UFW (Uncomplicated Firewall) made firewall management simple. I allowed only SSH before enabling it:

```shell
sudo ufw allow 22
sudo ufw enable
```
Then I restricted access by IP and created subnet-wide rules, using `ufw status numbered` to manage and delete rules safely. Blocking a specific IP within an allowed range forced me to reorder rules: since rules are evaluated top-down, the deny for the single IP has to come before the broader allow. That's when I realized firewall rule order really matters.
NAT and port redirection were intense. I added a file under `/etc/sysctl.d/` to enable IP forwarding, then used:

```shell
iptables -t nat -A PREROUTING ...
iptables -t nat -A POSTROUTING ...
```

Masquerading felt magical: it rewrites source IPs so return traffic flows back through the public server. Saving the rules with `iptables-persistent` was essential for reboot survival.
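For reference, the forwarding switch itself is a single sysctl key; a sketch of the drop-in file (the filename is my own, not from the course):

```
# /etc/sysctl.d/99-ipforward.conf (filename illustrative)
net.ipv4.ip_forward = 1
```

It can be applied without rebooting via `sudo sysctl --system`.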
Nginx as a reverse proxy and load balancer was pure DevOps power. I built `proxy.conf` to forward requests to internal servers and later replaced it with `lb.conf` using `upstream` pools.

```nginx
upstream mywebservers {
    least_conn;
    server 1.2.3.4;
    server 5.6.7.8;
}
```
I added weights and backups, and even took servers down during simulation. This made me appreciate why big tech relies on smart routing at the edge.
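To tie the pool to incoming traffic, a minimal server block would `proxy_pass` to the upstream name defined above — a hedged sketch, not the exact config from the course:

```nginx
# Minimal sketch: forwards all requests on port 80 to the
# "mywebservers" upstream pool defined earlier.
server {
    listen 80;
    location / {
        proxy_pass http://mywebservers;
    }
}
```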
Time sync with systemd-timesyncd was straightforward. I listed timezones, set mine, enabled NTP, and verified it was contacting servers:

```shell
timedatectl set-timezone Asia/Kolkata
sudo timedatectl set-ntp true
```
I edited `/etc/systemd/timesyncd.conf` to use custom NTP pools. Surprisingly easy, but important for consistent logs across distributed systems.
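The relevant part of `timesyncd.conf` is just the `[Time]` section; the pool hostnames below are examples, not necessarily the ones from the course:

```ini
# /etc/systemd/timesyncd.conf sketch (pool hostnames illustrative)
[Time]
NTP=0.pool.ntp.org 1.pool.ntp.org
FallbackNTP=ntp.ubuntu.com
```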
SSH configuration rounded off the day. I edited `/etc/ssh/sshd_config` to change ports, lock out root login, and enforce key-based auth:

```
PermitRootLogin no
PasswordAuthentication no
```
I added per-user overrides with `Match User`. On the client side, I created `~/.ssh/config` for host aliases, then generated and copied SSH keys:

```shell
ssh-keygen
ssh-copy-id user@ip
```
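A hypothetical `~/.ssh/config` entry showing how an alias works — with this in place, `ssh web1` expands to the full host/user/port combination (all values below are illustrative):

```
# Hypothetical ~/.ssh/config entry
Host web1
    HostName 10.0.0.5
    User trinity
    Port 2222
    IdentityFile ~/.ssh/id_ed25519
```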
By now, I was exhausted. My brain refused to process `sshd_config.d` overrides or X11 settings. I knew I needed rest, but the satisfaction of locking down SSH the right way was worth it. Sometimes progress means stepping back too.
Day 4 pushed me mentally. Networking concepts were deep, NAT rules got tangled in my head, and SSH syntax blurred into the terminal. I wanted to push on and "complete the course," but I had to remind myself: understanding matters more than finishing. I'm documenting everything, and tomorrow is another good day for DevOps.
Day 5: Disks, LVM, ACLs, and Monitoring Magic
This final day was all about storage, permissions, and performance diagnostics, and wow, it brought everything together.
I learned to list, create, and manage partitions using `lsblk`, `fdisk`, and the friendlier `cfdisk`.

```shell
sudo cfdisk /dev/sdb
```
Then came swap management, both partition-based and file-based. I created swap partitions with `mkswap` + `swapon`, and made swap files using `dd`.

```shell
sudo dd if=/dev/zero of=/swap bs=1M count=2048
sudo chmod 600 /swap   # swapon warns about world-readable swap files
sudo mkswap /swap && sudo swapon /swap
```
I formatted partitions with XFS and ext4 using `mkfs.xfs` and `mkfs.ext4`, and labeled them with `-L`. I mounted filesystems manually and persistently via `/etc/fstab`.
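A sketch of what such a persistent entry might look like — label, mountpoint, and options here are illustrative, not from the course:

```
# /etc/fstab entry (illustrative): mount by label, ext4, fsck order 2
LABEL=data   /mnt/data   ext4   defaults   0 2
```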
I explored mount options like `noexec`, `nosuid`, and `ro`, and used `findmnt` and `mount -o remount` to experiment with security flags.
NFS gave me a way to share directories across machines. I edited `/etc/exports`, ran `exportfs -r`, and mounted remote directories on clients with:

```shell
sudo mount 10.0.0.1:/srv/shared /mnt
```
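The `/etc/exports` syntax is one share per line, with client patterns and options in parentheses; this is a hypothetical line matching the mount above (`no_subtree_check` silences a common exportfs warning):

```
# /etc/exports sketch: share /srv/shared read-write with one subnet
/srv/shared 10.0.0.0/24(rw,sync,no_subtree_check)
```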
Then came the magical world of NBD (Network Block Devices). I shared an entire block device over the network and mounted it on a remote machine as if it were local. Using `/dev/nbd0` felt powerful.

```shell
sudo nbd-client 10.0.0.1 -N partition2
sudo mount /dev/nbd0 /mnt
```
LVM (Logical Volume Manager) is now one of my favorite tools. It abstracts storage into PV → VG → LV layers and allows resizing on the fly.

```shell
sudo pvcreate /dev/sdc
sudo vgcreate my_volume /dev/sdc
sudo lvcreate --size 2G --name lv1 my_volume
```
I resized LVs with `lvresize --resizefs`, created filesystems on them, and mounted them cleanly.
Finally, I monitored performance using `iostat` and `pidstat` from the `sysstat` package. `iostat` showed TPS, kB/s, and cumulative I/O; `pidstat -d` helped me find which process was hogging disk I/O.
Then came ACLs and file attributes. I granted fine-grained permissions with `setfacl`, managed directories recursively, and made files immutable with `chattr +i`. These small tools brought huge power and security.
That wraps up the entire LFCS learning curriculum from KodeKloud!
All that's left now are the 4 official mock exams, which I plan to attempt the next day.
I'm nervous… but ready. I've documented every day of this journey because I believe in learning in public. The next day will mark the actual completion of this course and, hopefully, the start of my real-world LFCS skills.
Day 6: Mock Exams, Real Struggles, and Victory
This day wasn't about learning new topics; it was about proving to myself that I had actually learned.
I attempted and passed all 4 official LFCS mock exams by KodeKloud.
But let me be honest: it was way harder than I expected.
Each exam felt like a mini real-world crisis. I had decided I wouldn't use any external help: no Googling, no AI tools. Just man pages, help flags, muscle memory, and handwritten notes. That made it all the more brutal... and rewarding.
I had to skim through man pages trying different keywords just to find the right option. In some cases, I tried 4-5 related keywords but still couldn't match what I was looking for. That was frustrating, but I stuck with it. For commands I couldn't remember, I used double-TAB autocomplete or went through `--help`. I think that's why it took a whole day to do just 4 mock tests, including some breaks in between, of course.
My handwritten notes became a lifesaver. I had made them throughout the course to retain what I might forget. During the mock exams, they helped me reconnect with commands and flags I had used but hadn't memorized perfectly.
Sharing a small story from one of the exams, LFCS Mock 1:
On my first try, I scored 65%. The passing score was 67%. I was sooo close. That really shook me.
But I didn't give up.
I took a short break, came back, and analyzed my mistakes. The KodeKloud platform gave me feedback on what I did wrong and the expected solution. I didn't read the whole solution word-for-word (that just felt like copying to me). Instead, I understood the method and tried solving it my own way.
Then I retried the same mock and scored 89%! That was a proud moment.
The highlight of this day was LFCS Mock 3. I passed it on the first try. That felt soooo good: a mix of joy, surprise, and confidence. It validated that I wasn't just memorizing; I was actually learning.
What I've realized from this day:

- You learn the most when you try without help and hit a wall.
- Failure is a teacher: my best scores came after my worst ones.
- Documentation is underrated: man pages, `--help`, and notes were my heroes.
- Handwritten notes aren't just for review; they're for rescue missions under pressure.
With that, my LFCS learning journey is complete.
I'm beyond grateful for the amazing course by KodeKloud, the teaching by Jeremy Morgan, and the support from Mumshad Mannambeth (CEO of KodeKloud) in creating this world-class platform.
To anyone considering this course: it's not easy. But it's worth every single terminal command.
Day 7: Final Touch – Docs, Git, and Plans Ahead
After all the hands-on grind and brain-twisting mock exams, this day was a bit more peaceful, but equally satisfying. It was the day I sat down, reflected, documented, and pushed everything to GitHub.
I cleaned up my handwritten notes, wrote summary `.md` files for each topic I covered, merged related ones to keep things organized, and updated the `README.md` for Week 4. It felt like putting the last page into a well-worn notebook. Every command I struggled to remember, every confusion I had: it's all there now, versioned and backed up.
My GitHub repo is now a complete journal of this LFCS prep journey. Week by week, topic by topic, every `.md` file tells a part of the story. I've linked this repo in my blog and LinkedIn posts because I want it to live beyond just prep; I'll be referring to it when I start applying this knowledge in real-world projects.
Next week, I'm planning to go deep into Bash scripting: not just loops and conditionals, but really mastering shell automation, traps, signals, functions, and error handling. If time allows, I'd love to start learning Python too, especially with DevOps automation in mind.
The learning continues!
Written by Anandhu P A
I'm an aspiring DevOps Engineer with a strong interest in infrastructure, automation, and cloud technologies. Currently focused on building my foundational skills in Linux, Git, networking, shell scripting, and containerization with Docker. My goal is to understand how modern software systems are built, deployed, and managed efficiently at scale. I'm committed to growing step by step into a skilled DevOps professional, capable of working with CI/CD pipelines, cloud platforms, infrastructure as code, and monitoring tools.