🚀 My DevOps Journey — Week 9: AWS Hands-On with Networking, Storage, Compute, Databases & App Integration

Anandhu P A

After a well-earned chill in Week 8, I returned this week with a fresh mind and a cloud-heavy focus. Week 9 kicked off my deep dive into AWS hands-on usage — exploring different interfaces (Console, CLI, SDK), diving into networking, and wrapping my head around storage in all its blocky, object-y glory. I didn’t expect the networking part to hit this hard, but it did. And just when I thought I’d earned a breather, storage theory showed up and said: “Buckle up, it’s going to be boring.”


🗓️ Day 1 — Buckets, Bash & JavaScript in the Cloud

I started with the basics: how to interact with AWS. There are three main ways — Console (GUI), CLI (terminal), and SDKs (code). I made sure to get a little practice with each.

🔹 The Console was intuitive. It reminded me how easy AWS makes things look — switching regions, pinning services, viewing dashboards. But that ease hides a lot of complexity underneath.

🔹 The CLI was where things got interesting. I installed it, ran aws configure, and started issuing real commands:

aws s3 mb s3://my-anandhu-cli-bucket
aws s3 ls
aws s3 cp notes.txt s3://my-anandhu-cli-bucket/
aws s3 rb s3://my-anandhu-cli-bucket --force

Simple, powerful — but also prone to errors if you're not careful with regions or flags. I accidentally skipped the region flag once and got a confusing error. Lesson learned: the CLI doesn’t forgive ambiguity.

🔹 The SDK part brought me back to JavaScript. I used the @aws-sdk/client-s3 package to create a script that lists S3 buckets. Setting up credentials using a default profile, writing async functions, and seeing my buckets appear in the console felt like magic — even though it was just an API call. I kept it minimal, but the potential here is insane. This was the first time AWS felt “programmable.”
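
For the curious, here's roughly what that script boils down to with the v3 SDK (a minimal sketch; the region is just an example and credentials come from the default profile):

// list-buckets.mjs: minimal sketch using the AWS SDK for JavaScript v3
import { S3Client, ListBucketsCommand } from "@aws-sdk/client-s3";

const client = new S3Client({ region: "ap-south-1" }); // region is an example

const { Buckets } = await client.send(new ListBucketsCommand({}));
for (const bucket of Buckets ?? []) {
  console.log(bucket.Name);
}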

🗺️ Then came the Global Infrastructure module, and it was a reminder that AWS isn’t just one cloud — it’s dozens of them, globally distributed:

  • Regions = Separate geographical areas

  • Availability Zones = Isolated groups of data centers within a region

  • Edge Locations = CDN points for caching content

  • Local Zones = Bring compute closer to urban areas

At first, all these names blurred together — but visualizing how requests flow from an edge location to an AZ made it click.

🧠 I ended the day diving into networking fundamentals in AWS. And honestly, this part fried my brain a bit.

🧱 VPCs, Subnets, Gateways, and Firewalls

I’ve always heard "start with VPCs," but I didn’t realize how much there was under that one acronym. Today was about building a mental model of how AWS organizes its networks — and it was intense.

  • A VPC (Virtual Private Cloud) is your own isolated network within AWS.

  • You divide it into subnets — public and private.

  • Attach an Internet Gateway to let public subnets talk to the internet.

  • Use a NAT Gateway so private subnets can go outbound without being exposed.

The NAT vs Internet Gateway thing took me a while to grasp — mostly because I kept wondering “why not just give everything public access?” And then I remembered... security.

Speaking of security — AWS uses two kinds of firewalls:

  • Network ACLs: Subnet-level, stateless (you have to allow traffic both ways)

  • Security Groups: Resource-level, stateful (allow inbound → outbound is automatic)

At first, I kept mixing up their behavior, especially because default rules can silently block you. It felt like playing with Lego while blindfolded — fun but slightly dangerous. That said, the moment I finally mapped out:

  • VPC

  • Public + Private Subnets

  • Internet Gateway + NAT Gateway

  • Route Tables

  • Security Groups vs NACLs

…was the moment the cloud felt like a real infrastructure platform, not just a fancy web host.
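
I built all of this by hand in the Console, but to make the mental model concrete, here's a rough SDK sketch of the same wiring: one VPC, one public subnet, an Internet Gateway, and a route table (the CIDR blocks and region are placeholders, not what I actually ran):

// vpc-sketch.mjs: illustrative only; CIDR blocks and region are placeholders
import {
  EC2Client, CreateVpcCommand, CreateSubnetCommand,
  CreateInternetGatewayCommand, AttachInternetGatewayCommand,
  CreateRouteTableCommand, CreateRouteCommand, AssociateRouteTableCommand,
} from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "ap-south-1" });

// 1. The VPC itself: an isolated 10.0.0.0/16 network
const { Vpc } = await ec2.send(new CreateVpcCommand({ CidrBlock: "10.0.0.0/16" }));

// 2. A public subnet carved out of that range
const { Subnet } = await ec2.send(new CreateSubnetCommand({ VpcId: Vpc.VpcId, CidrBlock: "10.0.1.0/24" }));

// 3. An Internet Gateway, attached to the VPC
const { InternetGateway } = await ec2.send(new CreateInternetGatewayCommand({}));
await ec2.send(new AttachInternetGatewayCommand({ InternetGatewayId: InternetGateway.InternetGatewayId, VpcId: Vpc.VpcId }));

// 4. A route table that sends 0.0.0.0/0 through the IGW, associated with the subnet
const { RouteTable } = await ec2.send(new CreateRouteTableCommand({ VpcId: Vpc.VpcId }));
await ec2.send(new CreateRouteCommand({ RouteTableId: RouteTable.RouteTableId, DestinationCidrBlock: "0.0.0.0/0", GatewayId: InternetGateway.InternetGatewayId }));
await ec2.send(new AssociateRouteTableCommand({ RouteTableId: RouteTable.RouteTableId, SubnetId: Subnet.SubnetId }));

console.log("Public subnet ready:", Subnet.SubnetId);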


🗓️ Day 2 — Storage Theory & Hands-On EBS, EFS, S3

I thought storage would be a relaxing follow-up to networking. I was wrong. The theory part was easy to understand, but really really really really boring. Like, “read three pages and immediately want a nap” boring. But I stuck through it, and it paid off once I hit the practical bits.

🧱 Block vs 📁 File vs 📦 Object

  • EBS (Elastic Block Store) — attachable volumes for EC2; bootable and mountable

  • EFS (Elastic File System) — shared file system; mountable across multiple EC2s

  • S3 (Simple Storage Service) — flat namespace, object-based; not bootable or mountable

Once I got past the definitions, I dove into S3 storage classes, and wow — there are more tiers than I expected:

  • S3 Standard

  • Standard-IA

  • One Zone-IA

  • Glacier Instant Retrieval

  • Glacier Flexible Retrieval

  • Glacier Deep Archive

  • Intelligent-Tiering

Each with its own tradeoffs on cost, availability, retrieval speed, and redundancy. I made a decision tree to understand when to use which.
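
The nice part is that the storage class is just a parameter on the upload, so the tradeoff gets picked per object. A quick sketch (bucket name and key are placeholders):

// storage-class-sketch.mjs: bucket, key, and region are placeholders
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "ap-south-1" });

// Same upload call every time: swap the StorageClass value to change tiers
await s3.send(new PutObjectCommand({
  Bucket: "my-example-bucket",
  Key: "archives/2025-backup.tar.gz",
  Body: "placeholder content",
  StorageClass: "STANDARD_IA", // or GLACIER_IR, DEEP_ARCHIVE, INTELLIGENT_TIERING, ...
}));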

🧪 Hands-On: EBS Demo

I created a volume, attached it to an EC2 instance, formatted it, mounted it, persisted it with fstab, and even reattached it to another instance to test if the data survived. It did. That “hello from instance 1” file was still there when I mounted it on instance 2. Simple. Solid.
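
For the record, the AWS-side half of that demo (create and attach) boils down to a couple of API calls; I did it in the Console, and the AZ, size, and instance ID below are placeholders:

// ebs-sketch.mjs: AZ, size, region, and instance ID are placeholders
import { EC2Client, CreateVolumeCommand, AttachVolumeCommand, waitUntilVolumeAvailable } from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "ap-south-1" });

// 1. Create a small gp3 volume in the same AZ as the target instance
const { VolumeId } = await ec2.send(new CreateVolumeCommand({ AvailabilityZone: "ap-south-1a", Size: 8, VolumeType: "gp3" }));

// 2. Wait until it's "available", then attach it to the instance
await waitUntilVolumeAvailable({ client: ec2, maxWaitTime: 120 }, { VolumeIds: [VolumeId] });
await ec2.send(new AttachVolumeCommand({ VolumeId, InstanceId: "i-0123456789abcdef0", Device: "/dev/sdf" }));

// Formatting, mounting, and the /etc/fstab entry still happen on the instance itself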

🔁 Hands-On: EFS Demo

The fun part. I mounted the same EFS volume on two EC2 instances and tested shared access. Writing a file on one and reading it from the other? That felt cool. It reminded me of NFS from Linux, but this time managed by AWS. The amazon-efs-utils package did most of the work, and now I know how to persist it with /etc/fstab too.

📦 Hands-On: S3 Bucket Creation & Organization

I created a new S3 bucket via the Console — unique global name, default settings, and tested uploading files. I dragged in images, viewed properties, and tried opening them via public URL (Access Denied, as expected). I then organized my bucket with folder prefixes, moved files, and deleted some to test the lifecycle.

Even though S3 is a flat namespace, it felt like a folder-based system because of how AWS fakes the structure with keys. That design was subtle, but it made me rethink how object storage actually works under the hood.
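
Here's a tiny sketch of what that "folder" illusion looks like through the SDK (the bucket name and region are placeholders):

// prefixes-sketch.mjs: there are no real folders, just keys containing "/"
import { S3Client, PutObjectCommand, ListObjectsV2Command } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "ap-south-1" });
const Bucket = "my-example-bucket"; // placeholder

// "Uploading into a folder" is really just choosing a key with a prefix
await s3.send(new PutObjectCommand({ Bucket, Key: "images/cat.png", Body: "not really a cat" }));

// The Console's folder view is just ListObjectsV2 with a Prefix and a Delimiter
const { Contents } = await s3.send(new ListObjectsV2Command({ Bucket, Prefix: "images/", Delimiter: "/" }));
console.log(Contents?.map((obj) => obj.Key)); // [ "images/cat.png" ]

Delete every key under the prefix and the "folder" quietly disappears with them.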


🗓️ Day 3 — EC2, Lambda & Containers (ECS vs EKS)

Just when I thought the boring part was behind me, Compute services said “Wait your turn.” Thankfully, Day 3 brought a mix of deep theory and some of the most satisfying hands-on demos I’ve done so far in this AWS journey.

☁️ EC2: The OG AWS Compute Service

We started with EC2 (Elastic Compute Cloud). It’s the classic — your virtual machine in the cloud. Compared to traditional bare-metal provisioning (rack server, OS install, patching, all that drama), EC2 felt magical. You pick an AMI (pre-configured image), choose instance type, configure networking + storage + SSH access, and boom — your server is live in minutes.

The demo covered everything:

  • Launching an EC2 instance

  • Connecting via SSH using a PEM key

  • Configuring inbound rules with a Security Group

  • Exploring monitoring tabs, networking, storage, and instance metadata

  • Finally, cleanly terminating the instance to avoid charges

It was smooth, but also a good reminder: EC2 may look simple, but there's a lot going on behind that dashboard.
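
I launched mine through the Console, but the same launch as a rough SDK sketch looks like this (every ID and name below is a placeholder):

// ec2-sketch.mjs: AMI ID, key pair, security group, and region are placeholders
import { EC2Client, RunInstancesCommand } from "@aws-sdk/client-ec2";

const ec2 = new EC2Client({ region: "ap-south-1" });

const { Instances } = await ec2.send(new RunInstancesCommand({
  ImageId: "ami-0123456789abcdef0",           // the AMI you picked
  InstanceType: "t2.micro",                   // free-tier friendly
  KeyName: "my-key-pair",                     // the PEM key used for SSH
  SecurityGroupIds: ["sg-0123456789abcdef0"], // inbound rules live here
  MinCount: 1,
  MaxCount: 1,
}));
console.log("Launched:", Instances[0].InstanceId);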

⚡ Lambda: Code Without the Server Drama

Next, we dove into AWS Lambda — the star of serverless.

You just write the function, upload it, and let AWS handle the rest: provisioning, scaling, and you only pay while the code runs, so idle time costs nothing. That’s when it hit me: Lambda is perfect for quick tasks, background jobs, or event-driven logic. The catch? It’s stateless, has a 15-minute timeout, and cold starts can delay response time (but hey, there’s SnapStart and Provisioned Concurrency for that).

I wrote my first Lambda in Node.js, tested it with a custom event, and then...

✅ Integrated it with API Gateway — boom, now it’s callable via HTTP
✅ Pulled in a third-party package (bcryptjs), zipped node_modules, uploaded the ZIP
✅ Later, created a Lambda Layer so I don’t have to zip deps every time

The moment I saw it hash a password and return the response — that’s when Lambda clicked for me.
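
Here's roughly what that handler looked like; the field names and the API Gateway proxy event shape are my own simplification, not the exact code:

// index.mjs: sketch of a Lambda handler behind API Gateway (proxy integration assumed)
import bcrypt from "bcryptjs";

export const handler = async (event) => {
  // With proxy integration, the HTTP body arrives as a JSON string
  const { password } = JSON.parse(event.body ?? "{}");

  const hash = await bcrypt.hash(password, 10); // 10 salt rounds

  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ hash }),
  };
};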

📦 Containers: ECS vs EKS

Finally, we entered container territory. This was heavy, but extremely important.

ECS (Elastic Container Service) — AWS’s native container orchestrator. Super tightly integrated with other AWS services. Easy to get started, especially with Fargate (no need to manage servers).

EKS (Elastic Kubernetes Service) — Managed Kubernetes. More complex, more flexible, more portable. You get all the power of Kubernetes without the headache of managing the control plane.

📌 Key differences I noted:

  • ECS is simpler, but vendor-locked.

  • EKS is portable (open source K8s), but steeper learning curve.

  • ECS has no control plane cost, while EKS charges for it.

  • EKS is better for teams with K8s experience or plans for hybrid/multi-cloud.

Even though we didn’t deploy a full ECS/EKS app yet, the theoretical clarity helped me visualize how containers are orchestrated at scale.

🗓️ Day 4 — Databases & App-to-App Conversations

I kicked off Day 4 thinking databases would be straightforward. Spoiler: the theory was long, but the demos made it all click.

🗃️ SQL vs NoSQL vs Self-Managed

AWS gives you three broad options:

  • Self-Managed: Total control, like running PostgreSQL on EC2 — but you manage updates, backups, everything.

  • SQL Services: Like RDS, Aurora, and Redshift — great for structured data and relationships.

  • NoSQL Services: Like DynamoDB or DocumentDB — ideal for key-value lookups or document data.

A fun analogy compared SQL to buses with fixed seating (structured tables) and NoSQL to a giant open bus where you search by name (flexible documents).

🔍 Quick Breakdown:

  • RDS: Managed MySQL/PostgreSQL/etc., perfect for apps like e-commerce.

  • Aurora & Aurora Serverless v2: Faster, scalable, cloud-native SQL — Serverless even scales automatically.

  • Redshift: Made for analytics and big reporting queries.

  • DynamoDB: Lightning-fast NoSQL with key-based access (quick sketch after this list).

  • DocumentDB: MongoDB-compatible, ideal for nested documents.

  • Others: Neptune (graph), QLDB (ledger), ElastiCache (in-memory), Timestream (time-series), OpenSearch (search), Keyspaces (Cassandra).
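
Since DynamoDB's whole pitch is key-based access, a few lines of code show it better than any analogy (the table, attribute names, and region here are made up):

// dynamodb-sketch.mjs: table and attribute names are placeholders
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand, GetCommand } from "@aws-sdk/lib-dynamodb";

const doc = DynamoDBDocumentClient.from(new DynamoDBClient({ region: "ap-south-1" }));

// No schema beyond the key: any shape of item goes in
await doc.send(new PutCommand({ TableName: "Users", Item: { userId: "u-123", name: "Anandhu", plan: "free" } }));

// Reads are by key, which is why lookups stay fast at any scale
const { Item } = await doc.send(new GetCommand({ TableName: "Users", Key: { userId: "u-123" } }));
console.log(Item);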

I launched:

  • a PostgreSQL DB (free tier)

  • an Aurora PostgreSQL cluster

  • and an Aurora Serverless DB that scaled from 1 to 4 ACUs

It was cool to see how easily you could spin up full database systems with auto-scaling, backups, and more — all without touching a terminal.

📩 App Integration: Getting Services to Talk

The second half of the day was about connecting services — think apps talking to each other through AWS pipes.

🔔 SNS (Simple Notification Service)
Acts like a loudspeaker. You publish one message → it goes to multiple subscribers (email, SMS, or queues).

📬 SQS (Simple Queue Service)
Works as a message buffer. One service sends a message, and another pulls it later. Super helpful to handle spikes without crashing.

🧪 Demo: I created an SNS topic, added both email + SQS as subscribers, and published a test message. Both received it. Magic.

I also updated SQS’s access policy to allow messages from SNS — a small step but crucial for actual delivery.
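
For reference, the publish-and-pull flow from that demo looks roughly like this in code (topic ARN, queue URL, account ID, and region are placeholders; I did it all in the Console):

// sns-sqs-sketch.mjs: ARNs, URLs, and region are placeholders
import { SNSClient, PublishCommand } from "@aws-sdk/client-sns";
import { SQSClient, ReceiveMessageCommand } from "@aws-sdk/client-sqs";

const region = "ap-south-1";
const sns = new SNSClient({ region });
const sqs = new SQSClient({ region });

// Publish once to the topic; SNS fans it out to every subscriber (email, SQS, ...)
await sns.send(new PublishCommand({
  TopicArn: "arn:aws:sns:ap-south-1:123456789012:demo-topic",
  Message: "Hello from Week 9!",
}));

// The SQS subscriber pulls the same message later, at its own pace
const { Messages } = await sqs.send(new ReceiveMessageCommand({
  QueueUrl: "https://sqs.ap-south-1.amazonaws.com/123456789012/demo-queue",
  WaitTimeSeconds: 10, // long polling
}));
console.log(Messages?.[0]?.Body); // the body is the SNS envelope JSON unless raw message delivery is enabled

And the only reason the queue receives anything at all is that access-policy tweak: without it, SNS publishes happily and SQS silently drops the message.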

🧠 Extras I Learned About:

  • ELB (Load Balancer) — Spreads traffic across servers.

  • Auto Scaling — Adds/removes servers based on demand.

  • AppFlow / EventBridge / Step Functions — For third-party data sync, event routing, and multi-step workflows.

All these tools help make apps scalable, decoupled, and resilient — the core of cloud-native thinking.

🗓️ Day 5 — Pushing, Publishing & Planning Ahead

Today was less about code and more about content. I spent the day documenting everything I had done over the past four days — organizing files, cleaning up scripts, and pushing them to my GitHub repo. On top of that, I wrote this very blog you're reading now. The act of looking back, summarizing, and explaining things in my own words helped reinforce a lot more than I expected.

I also spent some time reflecting on what’s ahead.

🔮 What’s Coming Next?

Next week will be a slower one — I’ve got an interview scheduled, which means some time will go into prep and practice. I’ve also enrolled in the DevOps Shack pre-recorded course, which I plan to officially start next week.

The beginning of that course covers Linux and Git, two areas I’ve already spent time on. I’m not expecting brand-new concepts there, but I do hope to deepen my knowledge or reinforce some advanced edge cases I might’ve missed. Or who knows — maybe it’ll just be revision. Either way, I’ll watch the videos fully.

📚 Other plans:

✅ I’ll finish the AWS Cloud Practitioner course by next week (finally).
🔧 I might also start learning about build tools (like Maven/Gradle), which come up early in the DevOps Shack curriculum.

So while the learning pace may dip, the commitment stays strong. Catch you in Week 10!


Written by

Anandhu P A

I’m an aspiring DevOps Engineer with a strong interest in infrastructure, automation, and cloud technologies. Currently focused on building my foundational skills in Linux, Git, networking, shell scripting, and containerization with Docker. My goal is to understand how modern software systems are built, deployed, and managed efficiently at scale. I’m committed to growing step by step into a skilled DevOps professional capable of working with CI/CD pipelines, cloud platforms, infrastructure as code, and monitoring tools.