Day 7 - Kubernetes Without The Tears

TJ Gokken

We’ve done the deep dives, spun up pods, set configs, scaled deployments, and opened the front doors with services. Now it’s time to pull everything together into a mini project you can actually use - or at least show off proudly. It even makes a great talk for your local User Group.

Something simple. Something clear.

The Project: A Simple App Stack

We’re going to deploy a tiny three-tier web app:

  • Frontend (nginx serving a static HTML page)

  • Backend API (dummy Node.js app returning JSON)

  • Redis (caching layer, but makes us feel super fancy - and also good to wow your audience during those user group talks)

Everything will:

  • Use deployments with probes

  • Be organized with labels

  • Be grouped with namespaces

  • Use services for communication

  • Have config managed via ConfigMaps and Secrets

It may sound complicated, but we have actually gone through each of these pieces individually over the past six days. We are just bringing them together now.

Architecture Overview

Before we start, let’s visualize how this is all going to fit together using a Mermaid diagram:

flowchart LR
    Frontend([Frontend - nginx])
    Backend([Backend - echo-server])
    Redis([Redis - cache])

    Frontend -->|calls| Backend
    Backend -->|talks to| Redis

    Frontend -.->|service| User[(Browser)]
    Backend -.->|service| BackendService[(Backend Service)]
    Redis -.->|service| RedisService[(Redis Service)]

Well, then, let’s go.

Developing the Solution

Step 1: Create a new namespace

kubectl create namespace demo-stack
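Quick sanity check before moving on: confirm the namespace exists. And if you'd rather not type `-n demo-stack` on every single command that follows, you can optionally make it the default for your current context (a convenience, not a requirement for the rest of the walkthrough):

```shell
# Verify the namespace was created
kubectl get namespace demo-stack

# Optional: make demo-stack the default namespace for this context,
# so you can drop the -n demo-stack flag from later commands
kubectl config set-context --current --namespace=demo-stack
```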

Step 2: Deploy the Redis pod

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
  namespace: demo-stack
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis:latest
        ports:
        - containerPort: 6379
        readinessProbe:
          tcpSocket:
            port: 6379
        livenessProbe:
          tcpSocket:
            port: 6379

Save this as redis-deployment.yaml and apply it:

kubectl apply -f redis-deployment.yaml

And expose it:

kubectl expose deployment redis --port=6379 -n demo-stack
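To confirm Redis is actually reachable through its new service, you can run a throwaway pod and ping it by the service name (a quick smoke test; the pod name redis-test is arbitrary):

```shell
# Spin up a temporary pod with redis-cli and ping the redis service by name;
# --rm deletes the pod once the command exits
kubectl run redis-test --rm -it --image=redis:latest -n demo-stack \
  -- redis-cli -h redis ping
# A healthy service answers with PONG
```

This also demonstrates in-cluster DNS: the hostname `redis` resolves because the pod runs in the same namespace as the service.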

Step 3: Deploy the backend

Use a dummy image like ealen/echo-server, which echoes the incoming request back as JSON:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
  namespace: demo-stack
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: ealen/echo-server
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80

Save this as backend-deployment.yaml, then apply and expose it:

kubectl apply -f backend-deployment.yaml
kubectl expose deployment backend --port=80 -n demo-stack
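As a quick check, you can curl the backend service from inside the cluster (curlimages/curl is a small image with curl preinstalled; the pod name curl-test is arbitrary):

```shell
# Hit the backend service from a throwaway pod;
# echo-server responds with the request details as JSON
kubectl run curl-test --rm -it --image=curlimages/curl -n demo-stack \
  -- curl -s http://backend
```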

Step 4: Deploy the frontend

Use nginx + a ConfigMap with an index.html
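The ConfigMap below needs an index.html file to wrap, so create one first. Here is a minimal placeholder page, assuming you just want something to prove the wiring works (the content is entirely up to you):

```shell
# Write a minimal index.html for the frontend to serve
cat > index.html <<'EOF'
<!DOCTYPE html>
<html>
  <head><title>Demo Stack</title></head>
  <body>
    <h1>Hello from Kubernetes!</h1>
    <p>Served by nginx, configured via a ConfigMap.</p>
  </body>
</html>
EOF
```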

kubectl create configmap frontend-html --from-file=index.html -n demo-stack

Then create the deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: demo-stack
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        readinessProbe:
          httpGet:
            path: /
            port: 80
        livenessProbe:
          httpGet:
            path: /
            port: 80
        volumeMounts:
        - name: html
          mountPath: /usr/share/nginx/html
      volumes:
      - name: html
        configMap:
          name: frontend-html

Save this as frontend-deployment.yaml, then apply and expose it on a NodePort:

kubectl apply -f frontend-deployment.yaml
kubectl expose deployment frontend --type=NodePort --port=80 -n demo-stack

Step 5: Access the App

Get the frontend NodePort:

kubectl get service frontend -n demo-stack
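The NodePort is the high-numbered port (in the 30000-32767 range) shown in the PORT(S) column. If your cluster's nodes aren't reachable on localhost (common with some local cluster setups), kubectl port-forward is a reliable alternative; the local port 8080 here is an arbitrary choice:

```shell
# Forward local port 8080 to the frontend service's port 80
# (runs in the foreground; stop it with Ctrl+C)
kubectl port-forward service/frontend 8080:80 -n demo-stack
# Then browse to http://localhost:8080
```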

Open it in a browser:

http://localhost:<your-port>
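When you're done showing it off, deleting the namespace tears down everything we created in one go:

```shell
# Removing the namespace deletes all deployments, services,
# and configmaps inside it
kubectl delete namespace demo-stack
```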

Ta-Da: High Five - You Did It!

If everything worked, your browser now shows the index.html page served by nginx.

Congratulations — you did it.

Seven days ago, Kubernetes was just a word we saw in memes, and maybe in some LinkedIn posts. Today, you built a living, breathing Kubernetes app stack from scratch.

We started out with pods and containers, scratching our heads over why everything had so many moving parts. Then we added deployments, labels, configmaps, secrets, services, autoscaling... and somehow it all started to make sense.

You’ve seen how Kubernetes keeps things running even when individual parts break. You’ve seen how apps talk to each other without caring where they physically live. You’ve seen how scaling happens automatically when you need it, not when you beg for it. We even witnessed firsthand why using Linux sometimes just makes life easier (looking at you, Windows 👀). I’m not a Linux zealot either, just a pragmatist: best tool for the job, always.

Is this DevOps?

Yeah, you could say that. But honestly, it’s just good engineering.

As developers, we don’t have to become Kubernetes admins or experts overnight. But knowing how our apps actually live and breathe inside a cluster? That makes you a better developer — plain and simple.

You’re no longer just writing code and throwing it over the wall, hoping someone else figures out deployment, scaling, health checks, and resilience.

Now you understand it. Now you can build better apps that belong in production. Apps that behave nicely when the going gets rough.

And even if you don't work directly with Kubernetes every day, just having this knowledge under your belt changes how you think about building software.

You don’t fear the cluster anymore.

You speak its language.

You know where your app fits in.

And that?

That’s power.


That’s A Wrap

That’s a wrap for Kubernetes without the tears.

If you made it this far — thank you. I hope this series made Kubernetes feel a little less scary and a lot more doable.

This was just the beginning. There’s so much more we can build — dev containers, more real-world apps, advanced scaling patterns — and I plan to keep writing about all of it.

If you liked this series, feel free to share it with a friend, a colleague, or that one person still afraid of YAML files.

And if you ever want to swap ideas, ask questions, or just nerd out — you can always find me at tjgokken.com.

Until next time — happy building.


Written by

TJ Gokken

TJ Gokken is an Enterprise AI/ML Integration Engineer with a passion for bridging the gap between technology and practical application. Specializing in .NET frameworks and machine learning, TJ helps software teams operationalize AI to drive innovation and efficiency. With over two decades of experience in programming and technology integration, he is a trusted advisor and thought leader in the AI community.