🚀 My DevOps Journey — Week 4: From Kernel Tuning to Network and Storage Mastery

Anandhu P A

📅 Day 1: Kernel Tweaks and SELinux Madness Begins

Week 4 began with the kind of deep, under-the-hood Linux stuff that makes you feel like you're really learning systems.

🧠 I started by tweaking kernel runtime parameters using sysctl. It amazed me how many low-level settings you can control.

# Temporarily:
sudo sysctl -w vm.swappiness=20

# Persistently:
echo "vm.swappiness=20" | sudo tee /etc/sysctl.d/swap-tweak.conf
sudo sysctl -p /etc/sysctl.d/swap-tweak.conf

I found the grouping (vm., net.) helpful, and sysctl -a | grep became my go-to for exploring.
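
For instance, exploring a whole group looks something like this (the pattern is just an example):

sudo sysctl -a | grep '^net.ipv4.'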

🔐 Then came SELinux—powerful, confusing, and huge. Commands like ls -Z, ps -eZ, and audit2allow introduced me to security contexts and policy generation.

sudo audit2allow --all -M mymodule   # build a policy module from logged denials
sudo semodule -i mymodule.pp         # install the compiled module

What tripped me up was understanding why something was blocked. I stared at logs, wondering: what was the process trying to access? I couldn’t shake the feeling that I had to know every internal link between files and services to make useful policies. It made me wonder if deep-diving SELinux is worth it in DevOps, but something tells me I’ll revisit this later with a stronger foundation.
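
One thing that helped a little: pulling the actual denials out of the audit log before generating a policy. A minimal sketch, assuming auditd is running:

# list recent AVC (access vector) denials
sudo ausearch -m avc -ts recent

# translate denials into a human-readable explanation
sudo ausearch -m avc -ts recent | audit2why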

📅 Day 2: Containers, VMs, and User Management

🐳 I launched my first Nginx container:

docker run -d -p 8080:80 --name mywebserver nginx

That’s when it clicked: images = blueprint, containers = live instances. Then I made a custom image:

FROM nginx
COPY index.html /usr/share/nginx/html/index.html
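
Building and running the custom image takes two more commands; the tag and host port here are my own picks:

# run from the directory containing the Dockerfile and index.html
docker build -t mynginx .
docker run -d -p 8081:80 --name mycustomserver mynginx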

🖥️ I moved to VMs using virsh and virt-install—used both cloud images and ISO-based installs. Booting into a console OS install made me feel like a real sysadmin.
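
The ISO route boils down to a single virt-install invocation. A rough sketch, where the VM name, sizes, and ISO path are all placeholders:

sudo virt-install --name testvm --memory 2048 --vcpus 2 \
  --disk size=10 --cdrom /path/to/installer.iso --os-variant generic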

👤 Practicing user creation (adduser, usermod, chage) helped me understand /etc/passwd and /etc/shadow. I tweaked default environments using /etc/skel and edited .bashrc in /etc/profile.d to echo login times system-wide. These small touches felt powerful.
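
The basic loop I practiced looked roughly like this (the user and values are examples; the sudo group is Ubuntu's convention):

sudo adduser trinity            # home directory is seeded from /etc/skel
sudo usermod -aG sudo trinity   # add to a supplementary group
sudo chage -M 30 trinity        # require a password change every 30 days
sudo chage -l trinity           # review the password aging policy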

📅 Day 3: Limits, Privileges, Root Access, LDAP Fog, and Network Services

🧠 I started setting user resource limits using /etc/security/limits.conf:

trinity   soft   nproc   10
trinity   hard   nproc   20
*         soft   cpu     5

At first, I didn’t get why only the soft limit seemed to matter. It turns out the soft limit is the value actually enforced, a user can raise it themselves up to the hard limit, and PAM (pam_limits) is what applies these limits at login.
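
You can watch this mechanism from inside a session; a quick sketch using the trinity limits above:

ulimit -u          # current soft limit on processes (10 here)
ulimit -S -u 15    # raise your own soft limit, allowed up to the hard limit
ulimit -H -u       # the hard ceiling itself (20)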

🔐 Then came sudo. I added specific command-level permissions in /etc/sudoers:

trinity ALL=(ALL) /bin/ls, /usr/bin/stat

Also, I learned the hard truth: lock root without a sudo-capable user, and you're locked out for good.
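
Two habits guard against both kinds of mistakes: always edit with visudo, which syntax-checks before saving, and verify what a user can actually run:

sudo visudo           # safe editing of /etc/sudoers with validation
sudo -l -U trinity    # list the commands trinity may run via sudo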

🧩 LDAP was my biggest struggle. I imported an LXC container, configured libnss-ldapd, and set up nslcd.conf, but the server-side user creation was hidden away. I couldn’t understand where users like john and jane were coming from. Eventually, I got that:

  • NSS tells Linux where to fetch user data.

  • NSLCD is the LDAP client that talks to the LDAP server.

  • PAM is what enables home directory creation and soft limit enforcement during login.

It still feels murky, and I know I’ll need to build my own LDAP server soon to clear it all up.
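
To make the NSS piece concrete, here is roughly what pointing Linux at LDAP looks like, assuming libnss-ldapd is in place (a minimal sketch, not my exact config):

# /etc/nsswitch.conf (relevant lines)
passwd: files ldap
group:  files ldap
shadow: files ldap

# if NSS and nslcd are wired up, LDAP users resolve like local ones
getent passwd john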

🌐 Networking was hands-on: I learned to manually assign IPs with ip addr and make them persistent via Netplan:

network:
  version: 2
  ethernets:
    enp0s8:
      addresses:
        - 10.0.0.9/24
        - fe80::abcd/64
      dhcp4: false
      nameservers:
        addresses: [8.8.8.8, 1.1.1.1]
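
After editing the YAML, the change still has to be applied; netplan try is the safer habit because it rolls back automatically if you lose connectivity:

sudo netplan try     # apply with an automatic rollback timer
sudo netplan apply   # apply permanently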

I also added local DNS entries in /etc/hosts and tested with a simple ping dbserver.
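
The /etc/hosts entry itself is a single line (the IP is an example):

10.0.0.11   dbserver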

🛰️ Checked open ports with:

sudo ss -ltunp   # -l listening, -t TCP, -u UDP, -n numeric, -p owning process

Then tied those ports to systemd services:

sudo systemctl stop mariadb
systemctl status ssh

Finally getting comfortable managing the flow between processes, ports, and services.

📅 Day 4: Bridging, Bonding, Firewalls, NATs, Load Balancers, and Fatigue

🔗 Bridging vs Bonding was a surprisingly interesting concept. Bridging connects multiple interfaces into a virtual switch—great for linking two networks. Bonding merges interfaces into one logical unit for higher speed or fault tolerance.

Using Netplan, I configured both bridge (br0) and bond (bond0). Bonding mode 1 (active-backup) was my favorite—it gives redundancy without load balancing complexity.
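
A minimal Netplan sketch of that active-backup bond, where the interface names and address are placeholders:

network:
  version: 2
  ethernets:
    enp0s8: {dhcp4: false}
    enp0s9: {dhcp4: false}
  bonds:
    bond0:
      interfaces: [enp0s8, enp0s9]
      addresses:
        - 10.0.0.10/24
      parameters:
        mode: active-backup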

🔄 UFW (Uncomplicated Firewall) made firewall management simple. I allowed only SSH before enabling it:

sudo ufw allow 22
sudo ufw enable

Then, I restricted access by IP and created subnet-wide rules—used ufw status numbered to manage and delete rules safely. Blocking a specific IP within a range forced me to reorder rules. That’s when I realized: firewall rule order really matters.
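
The commands that drove the ordering lesson home looked roughly like this (the IPs are examples):

sudo ufw allow from 10.0.0.0/24 to any port 22   # subnet-wide rule
sudo ufw insert 1 deny from 10.0.0.37            # the block must sit above the allow
sudo ufw status numbered                         # check rule order
sudo ufw delete 2                                # remove a rule by its number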

📦 NAT & Port Redirection were intense. I dropped a config file into /etc/sysctl.d/ to enable IP forwarding, then used:

iptables -t nat -A PREROUTING ...
iptables -t nat -A POSTROUTING ...

Masquerading felt magical—changing source IPs so return traffic flows through the public server. Saving iptables with iptables-persistent was essential for reboot survival.
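
Since I elided the exact rules above, here is the general shape of what I mean; the interface name and IPs are hypothetical stand-ins:

# enable forwarding persistently (the file name is my own choice)
echo "net.ipv4.ip_forward=1" | sudo tee /etc/sysctl.d/99-ipforward.conf
sudo sysctl --system

# forward inbound port 80 to an internal host, then masquerade replies
sudo iptables -t nat -A PREROUTING -i enp0s3 -p tcp --dport 80 \
  -j DNAT --to-destination 192.168.100.10
sudo iptables -t nat -A POSTROUTING -o enp0s3 -j MASQUERADE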

🌐 Nginx as Reverse Proxy & Load Balancer was pure DevOps power. I built proxy.conf to forward requests to internal servers and later replaced it with lb.conf using upstream pools.

upstream mywebservers {
    least_conn;
    server 1.2.3.4;
    server 5.6.7.8;
}

Added weights, backups, and even downed servers during simulation. This made me appreciate why big tech relies on smart routing at the edge.
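
For context: the pool only takes effect once a server block proxies to it, and weights and backups attach to individual servers. A minimal sketch:

upstream mywebservers {
    least_conn;
    server 1.2.3.4 weight=3;   # receives proportionally more traffic
    server 5.6.7.8 backup;     # used only when the others are down
}

server {
    listen 80;
    location / {
        proxy_pass http://mywebservers;
    }
}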

ā±ļø Time Sync with systemd-timesyncd was straightforward. I listed timezones, set mine, enabled NTP, and verified it was contacting servers:

sudo timedatectl set-timezone Asia/Kolkata
sudo timedatectl set-ntp true

Edited /etc/systemd/timesyncd.conf to use custom NTP pools. Surprisingly easy—but important for consistent logs across distributed systems.
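
The edit itself is tiny (the pool hostnames are examples), and newer systemd versions can report sync status directly:

# /etc/systemd/timesyncd.conf
[Time]
NTP=0.pool.ntp.org 1.pool.ntp.org

# then restart and verify
sudo systemctl restart systemd-timesyncd
timedatectl timesync-status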

🛡️ SSH Configuration rounded off the day. I edited /etc/ssh/sshd_config to change ports, lock root login, and enforce key-based auth:

PermitRootLogin no
PasswordAuthentication no

Added per-user overrides with Match User. On the client side, I created ~/.ssh/config for host aliases, then generated and copied SSH keys:

ssh-keygen
ssh-copy-id user@ip
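
Server-side changes only take effect after restarting the SSH service. On the client side, a ~/.ssh/config alias looks something like this (all values are placeholders), after which ssh web1 just works:

# ~/.ssh/config
Host web1
    HostName 10.0.0.9
    User trinity
    Port 2222
    IdentityFile ~/.ssh/id_ed25519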

By now, I was exhausted. My brain refused to process sshd_config.d overrides or X11 settings. I knew I needed rest—but the satisfaction of locking down SSH the right way was worth it. Sometimes progress means stepping back too.

Day 4 pushed me mentally. Networking concepts were deep, NAT rules got tangled in my head, and SSH syntax blurred into the terminal. I wanted to push on and “complete the course,” but I had to remind myself: understanding matters more than finishing. I’m documenting everything, and tomorrow is another good day for DevOps.

📅 Day 5: Disks, LVM, ACLs, and Monitoring Magic

This final day was all about storage, permissions, and performance diagnostics—and wow, it brought everything together.

💽 I learned to list, create, and manage partitions using lsblk, fdisk, and the more friendly cfdisk.

sudo cfdisk /dev/sdb

💾 Then came swap management—both partition-based and file-based. I created swap partitions with mkswap + swapon, and made swap files using dd.

sudo dd if=/dev/zero of=/swap bs=1M count=2048
sudo chmod 600 /swap   # swapon warns if the swap file is readable by others
sudo mkswap /swap && sudo swapon /swap
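
To survive reboots, the swap file also needs an /etc/fstab entry, something like:

/swap   none   swap   sw   0   0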

🗂️ I formatted partitions with XFS and ext4 using mkfs.xfs, mkfs.ext4, and labeled them using -L. I mounted filesystems manually and persistently via /etc/fstab.
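
Put together, formatting with a label and mounting persistently looks roughly like this (the device and mount point are examples):

sudo mkfs.ext4 -L data /dev/sdb1

# /etc/fstab entry, mounting by label so the device name can change
LABEL=data   /data   ext4   defaults   0 2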

🔍 I explored mount options like noexec, nosuid, ro, and used findmnt and mount -o remount to experiment with security flags.
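
Remounting is a handy way to test a flag without unmounting (the path is an example):

sudo mount -o remount,noexec /data
findmnt /data    # confirm the active mount options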

🌐 NFS gave me a way to share directories across machines. I edited /etc/exports, used exportfs -r, and mounted remote directories on clients with:

sudo mount 10.0.0.1:/srv/shared /mnt
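
On the server side, the export is a single line in /etc/exports (the subnet and options here are typical examples):

# /etc/exports
/srv/shared 10.0.0.0/24(rw,sync,no_subtree_check)

sudo exportfs -r   # re-read /etc/exports after editing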

🧱 Then came the magical world of NBD (Network Block Devices). I shared an entire block device over the network and mounted it on a remote machine as if it were local. Using /dev/nbd0 felt powerful.

sudo nbd-client 10.0.0.1 -N partition2
sudo mount /dev/nbd0 /mnt
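
On the server side, the export name the client requests with -N comes from the nbd-server config; a minimal sketch where the backing device is an example:

# /etc/nbd-server/config
[generic]

[partition2]
    exportname = /dev/sdb2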

🧙‍♂️ LVM (Logical Volume Manager) is now one of my favorite tools. It abstracts partitions into PV → VG → LV and allows resizing on the fly.

sudo pvcreate /dev/sdc
sudo vgcreate my_volume /dev/sdc
sudo lvcreate --size 2G --name lv1 my_volume

I resized LVs with lvresize --resizefs, created filesystems on LVs, and mounted them cleanly.
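
For example, growing lv1 and its filesystem in one step (the size is an example):

sudo lvresize --resizefs --size +1G my_volume/lv1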

📊 Finally, I monitored performance using iostat and pidstat from the sysstat package.

  • iostat showed TPS, kB/s, and cumulative I/O

  • pidstat -d helped me find which process was hogging disk I/O
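
Both get far more useful with a repeat interval, e.g. refreshing every 2 seconds:

iostat 2       # device-level throughput, refreshed every 2 seconds
pidstat -d 2   # per-process disk I/O, refreshed every 2 seconds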

🧾 Then came ACLs and file attributes. I granted fine-grained permissions with setfacl, recursively managed directories, and made files immutable with chattr +i. These small tools brought huge power and security.
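
The commands I practiced looked roughly like this (the user, paths, and file names are examples):

sudo setfacl -m u:jane:rX /srv/shared      # grant jane read and traverse access
sudo setfacl -R -m u:jane:rX /srv/shared   # apply recursively
getfacl /srv/shared                        # inspect the ACL
sudo chattr +i app.conf                    # immutable: even root must remove +i first
lsattr app.conf                            # verify the attribute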

That wraps up the entire LFCS learning curriculum from KodeKloud!

All that’s left now are the 4 official mock exams, which I plan to attempt the next day.

I’m nervous… but ready. I’ve documented every day of this journey because I believe in learning in public. The next day will mark the actual completion of this course, and hopefully, the start of my real-world LFCS skills.

📅 Day 6: Mock Exams, Real Struggles, and Victory 🎓

This day wasn’t about learning new topics—it was about proving to myself that I had actually learned.

I attempted and passed all 4 official LFCS mock exams by KodeKloud.

But let me be honest: it was way harder than I expected.

💥 Each exam felt like a mini real-world crisis. I had decided I wouldn’t use any external help—no Googling, no AI tools. Just man pages, help flags, muscle memory, and handwritten notes. That made it all the more brutal... and rewarding.

📖 I had to skim through man pages trying different keywords just to find the right option. In some cases, I tried 4-5 related keywords but still couldn’t match what I was looking for. That was frustrating. But I stuck with it. For commands I couldn’t remember, I used double-TAB autocomplete or went through --help. I think that’s why just four mock tests took me a whole day—including some breaks in between, of course.

📝 My handwritten notes became a lifesaver. I had made them throughout the course to retain what I might forget. During the mock exams, they helped me reconnect with commands and flags I had used but didn’t memorize perfectly.

🧪 Sharing a small story from one of the exams—LFCS Mock 1:

On my first try, I scored 65%. The passing score was 67%. I was sooo close. That really shook me.

But I didn’t give up.

I took a short break, came back, and analyzed my mistakes. The KodeKloud platform gave me feedback—what I did wrong and the expected solution. I didn’t read the whole solution word-for-word (that just felt like copying to me). Instead, I understood the method and tried solving it my own way.

Then I retried the same mock and scored 89%! 🙌 That was a proud moment.

🥹 The highlight of this day was LFCS Mock 3. I passed it on the first try. That felt soooo good—a mix of joy, surprise, and confidence. It validated that I wasn’t just memorizing; I was actually learning.

🧠 What I’ve realized from this day:

  • You learn the most when you try without help and hit a wall.

  • Failure is a teacher—my best scores came after my worst ones.

  • Documentation is underrated—man pages, --help, and notes were my heroes.

  • Handwritten notes aren’t just for review—they're for rescue missions under pressure.

🎉 With that, my LFCS learning journey is complete.

I'm beyond grateful for the amazing course by KodeKloud, the teaching by Jeremy Morgan, and the support from Mumshad Mannambeth (CEO of KodeKloud) for creating this world-class platform. 🙏

To anyone considering this course: it’s not easy. But it’s worth every single terminal command.

📅 Day 7: Final Touch — Docs, Git, and Plans Ahead

šŸ“ After all the hands-on grind and brain-twisting mock exams, this day was a bit more peaceful—but equally satisfying. It was the day I sat down, reflected, documented, and pushed everything to GitHub.

I cleaned up my handwritten notes, wrote summary .md files for each topic I covered, merged related ones to keep it organized, and updated the README.md for Week 4. It felt like putting the last page into a well-worn notebook. Every command I struggled to remember, every confusion I had—it's all there now, versioned and backed up.

šŸ“ My GitHub repo is now a complete journal of this LFCS prep journey. Week by week, topic by topic, every .md file tells a part of the story. I’ve linked this repo in my blog and LinkedIn posts because I want it to live beyond just prep—I’ll be referring to it when I start applying this knowledge in real-world projects.

ā­ļø Next week, I'm planning to go deep into Bash scripting—not just loops and conditionals, but really mastering shell automation, traps, signals, functions, and error handling. If time allows, I’d love to start learning Python too—especially with DevOps automation in mind.

The learning continues! 👨‍💻
