
Docker + Kubernetes Foundation: Day 1 of My 100-Day DevOps Journey

Welcome to my DevOps learning journey! After 2 months of silent learning, I'm finally documenting my progress publicly. Today marks Day 1 of my 100-day challenge where I'll share real learnings, struggles, and wins every Monday and Wednesday.

Why This Journey Matters

The containerization landscape has evolved rapidly. Docker and Kubernetes aren't just buzzwords anymore - they're essential skills for modern application deployment. Today, I revisited these fundamentals to build a rock-solid foundation before diving into Jenkins and the complete CI/CD pipeline.

Docker Deep Dive: Production-Ready Skills

Multi-Stage Builds: Game Changer for Production

Multi-stage builds were one of the most powerful Docker features I revisited today. Here's a practical example with a Node.js TypeScript app: the build stage installs everything (including the TypeScript compiler), while the production stage keeps only the runtime dependencies and the compiled output.

# Build stage
FROM node:18-alpine AS builder
WORKDIR /app
COPY package*.json ./
# Install all dependencies here - the TypeScript build needs devDependencies
RUN npm ci

COPY . .
RUN npm run build

# Production stage
FROM node:18-alpine AS production
WORKDIR /app
COPY package*.json ./
# Only production dependencies make it into the final image
RUN npm ci --omit=dev

COPY --from=builder /app/dist ./dist

EXPOSE 3000
CMD ["node", "dist/index.js"]

Why this matters: Reduced final image size by 60% compared to single-stage builds. In production, smaller images mean faster deployments and reduced storage costs.
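
If you want to verify the savings yourself, this quick comparison is what I'd reach for (the my-api tag is just a placeholder, not my actual image name):

# Build the multi-stage image
docker build -t my-api:latest .

# List image sizes; compare against a single-stage build tagged differently
docker images my-api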

Docker Networking: Beyond the Basics

Created custom networks for better container isolation:

# Create custom network
docker network create --driver bridge my-app-network

# Run containers on custom network
docker run -d --name api --network my-app-network my-api:latest
docker run -d --name db --network my-app-network postgres:13

Key insight: Custom networks provide automatic DNS resolution between containers. The API container can reach the database using db:5432 instead of IP addresses.
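
A quick way to confirm this, assuming the api image ships a shell and ping (many slim images don't, so adjust to whatever tools your image includes):

# Verify both containers are attached to the custom network
docker network inspect my-app-network

# Resolve and reach the db container by name from inside the api container
docker exec api ping -c 1 db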

Volume Management: Persistent Data Done Right

# Named volumes for database persistence
docker volume create postgres-data
docker run -d --name db -v postgres-data:/var/lib/postgresql/data postgres:13

# Bind mounts for development
docker run -d -v $(pwd)/src:/app/src my-dev-container

Kubernetes: Orchestration in Action

The Self-Healing Revelation

Today's biggest "aha" moment was watching Kubernetes self-healing work in real-time:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.21
        ports:
        - containerPort: 80

When I deliberately killed one pod with kubectl delete pod <pod-name>, Kubernetes immediately spun up a replacement. The service continued running without interruption. This is why orchestration is crucial for production workloads.
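
To reproduce this, watch the pods in one terminal while deleting a pod in another (the pod name is whatever kubectl get pods shows for your deployment):

# Terminal 1: watch the pod list update live
kubectl get pods -l app=nginx -w

# Terminal 2: kill one pod and watch a replacement appear within seconds
kubectl delete pod <pod-name>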

Scaling Applications: Horizontal Pod Autoscaler

# Enable metrics server first
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# Create HPA
kubectl autoscale deployment nginx-deployment --cpu-percent=70 --min=2 --max=10

# Watch scaling in action
kubectl get hpa -w

Real-world application: Under load, pods automatically scale up. When demand decreases, they scale down. No manual intervention needed.
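
One caveat worth flagging: the CPU-percentage target only works if the pods declare CPU requests, because utilization is measured against the request. A minimal addition to the container spec in the deployment above (the numbers are just a starting point) would be:

# Add under spec.template.spec.containers[] in nginx-deployment
resources:
  requests:
    cpu: 100m
    memory: 64Mi
  limits:
    cpu: 250m
    memory: 128Mi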

Multi-Container Applications: Sidecar Pattern

Deployed a web app with an nginx sidecar acting as a reverse proxy:

apiVersion: v1
kind: Pod
metadata:
  name: webapp-with-proxy
spec:
  containers:
  - name: webapp
    image: my-webapp:latest
    ports:
    - containerPort: 3000
  - name: nginx-proxy
    image: nginx:alpine
    ports:
    - containerPort: 80
    volumeMounts:
    - name: nginx-config
      mountPath: /etc/nginx/conf.d
  volumes:
  - name: nginx-config
    configMap:
      name: nginx-config
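
The pod references a ConfigMap called nginx-config that isn't shown above. A minimal sketch of it, assuming the webapp listens on port 3000, could look like this (containers in a pod share localhost, which is what makes the sidecar proxy work):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
data:
  default.conf: |
    server {
      listen 80;
      location / {
        proxy_pass http://localhost:3000;
      }
    }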

ConfigMaps & Secrets: Configuration Management

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  database_url: "postgresql://db:5432/myapp"
  log_level: "info"

---
apiVersion: v1
kind: Secret
metadata:
  name: app-secrets
type: Opaque
data:
  db_password: <base64-encoded-password>

Best practice learned: Never hardcode configuration in container images. Use ConfigMaps for non-sensitive data and Secrets for passwords/tokens.
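
To get those values into a container, reference them as environment variables. Here's a rough sketch of how I'd wire them into a container spec:

# Inside a Deployment/Pod container spec
env:
- name: DATABASE_URL
  valueFrom:
    configMapKeyRef:
      name: app-config
      key: database_url
- name: DB_PASSWORD
  valueFrom:
    secretKeyRef:
      name: app-secrets
      key: db_password

Creating the Secret with kubectl create secret generic app-secrets --from-literal=db_password=... also spares you the manual base64 encoding.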

Persistent Volumes: Where I Struggled

Understanding the difference between PersistentVolume (PV) and PersistentVolumeClaim (PVC) took multiple attempts:

# PersistentVolume - cluster resource
apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /data/postgres

---
# PersistentVolumeClaim - namespace resource
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

Mental model that helped: Think of PV as the actual storage (like a hard drive) and PVC as a request for that storage (like a storage requisition form).
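
What finally made it click was wiring the claim into a pod: the pod only ever names the PVC, and Kubernetes binds that claim to a matching PV behind the scenes. A minimal sketch for the postgres pod:

apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  containers:
  - name: postgres
    image: postgres:13
    volumeMounts:
    - name: postgres-storage
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: postgres-storage
    persistentVolumeClaim:
      claimName: postgres-pvc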

Docker Compose: Local Development Magic

For complex applications, Docker Compose simplifies everything:

version: '3.8'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    depends_on:
      - db
      - redis
    environment:
      - NODE_ENV=development
    volumes:
      - .:/app
      - /app/node_modules

  db:
    image: postgres:13
    environment:
      POSTGRES_DB: myapp
      POSTGRES_PASSWORD: password
    volumes:
      - postgres_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"

  redis:
    image: redis:alpine
    ports:
      - "6379:6379"

volumes:
  postgres_data:

Single command: docker-compose up -d and your entire development environment is running.
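
A few companion commands I keep using with this setup:

docker-compose ps            # which services are running
docker-compose logs -f web   # tail logs for the web service
docker-compose down          # stop and remove containers (add -v to drop volumes too)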

Real-World Project: End-to-End Deployment

Combined everything learned today:

  1. Containerized a Node.js app with multi-stage Dockerfile

  2. Pushed optimized image to Docker Hub

  3. Deployed to Kubernetes cluster

  4. Configured horizontal pod autoscaling

  5. Set up persistent storage for user uploads

  6. Implemented rolling updates with zero downtime

The entire pipeline works seamlessly from development to production.
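
For the rolling update in step 6, the flow looks roughly like this (deployment, container, and image names here are illustrative, not the exact ones from my project):

# Point the deployment at the new image tag
kubectl set image deployment/my-webapp webapp=my-webapp:v2

# Watch the rollout; old pods are drained only after new ones become ready
kubectl rollout status deployment/my-webapp

# Roll back if something looks wrong
kubectl rollout undo deployment/my-webapp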

Tomorrow's Focus: Jenkins Integration

Now that the container foundation is solid, tomorrow I'm diving into Jenkins. The goal is to create a complete CI/CD pipeline:

Docker (containerization) → Kubernetes (orchestration) → Jenkins (automation)

This will complete the DevOps trifecta for modern application deployment.

Key Takeaways

  1. Multi-stage builds are essential for production Docker images

  2. Custom networks provide better container communication

  3. Kubernetes self-healing is truly impressive in practice

  4. ConfigMaps and Secrets keep configuration out of code

  5. Persistent Volumes need more hands-on practice (my weakness!)

What's Next?

  • Monday: Jenkins fundamentals and pipeline creation

  • Wednesday: Integrating Jenkins with Docker and Kubernetes

  • GitHub: All code examples available in my repository

  • Accountability: Tracking every learning hour for transparency

Connect & Follow

This is just Day 1 of 50 posts over 100 days. If you're on a similar journey or have experience with these technologies, I'd love to connect!

Questions for the community:

  • What was your biggest challenge when learning Kubernetes?

  • Any Jenkins tips for a beginner?

  • Which Docker optimization techniques do you swear by?

Drop your thoughts in the comments - let's learn together! 🚀


Following my progress? Every Monday and Wednesday, I'm sharing detailed learnings, code examples, and real struggles. No fluff, just practical DevOps knowledge gained through hands-on practice.
