Day 6 - Kubernetes Without The Tears


We’ve been hanging out with our Kubernetes clusters for five days now. We’ve got pods flying, deployments managing, and labels everywhere like we’re tagging luggage at the airport. We’ve watched pods die, resurrect, and autoscale (which made life tricky on Docker Desktop), but we survived all that.
But there’s still this big, slightly awkward elephant in the room: How does anyone actually talk to them?
I mean, inside the cluster life is good; in a perfect world, the cluster would stay the same and keep running forever.
Yet, we know that is not the case. Pods come and go, so trying to communicate is like trying to text someone who switches SIM cards every two minutes. You try to connect, and—poof—they're gone. New IP, new pod, new who-even-knows-where.
We need something that knows where our apps are, even when they pack up and move mid-sentence.
In Kubernetes, that "something" is called a Service. As with other stuff we have seen so far with Kubernetes, it’s like magic. You just text and Kubernetes delivers. It’s like the telco you wish you had.
Let’s go meet the real MVP of app traffic.
Under the Hood
Your pods are dynamic little things. They come and go. They crash. They restart. They get recreated with shiny new IPs.
If you tried talking to them directly by IP address, you’d lose your mind by lunchtime. Imagine this:
You finally memorize the IP of a pod serving your web app. Great.
Five minutes later, the pod crashes. Kubernetes spins up a new one. New pod, new IP. You try connecting again... boom, dead connection.
Now imagine this happening across dozens of pods all day long. It’s like playing whack-a-mole — but with IP addresses.
Sure, you could fire up a VM, hope for the best, and pretend apps never crash. But then the user, your boss, the business, and possibly the whole universe will come knocking.
Since we do not want to upset the Universe, we use something called Services — your stable front door.
A Service in Kubernetes provides a stable internal IP address and DNS name, routes traffic to the correct set of pods (using label selectors), and load-balances automatically across healthy pods.
Think of it like the airport departure board: When you come to the airport, you have no idea what gate your flight’s at nor do you need to. The departure board knows where it is. You just follow the signs, and the system routes you correctly.
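For reference, here is what a Service looks like written out declaratively. This is a minimal sketch, assuming your pods carry the label app: nginx (the names here are illustrative, not from our cluster):

```yaml
# Hypothetical ClusterIP Service: routes cluster-internal traffic
# to whichever healthy pods currently carry the label app: nginx.
apiVersion: v1
kind: Service
metadata:
  name: nginx-service   # the stable name other apps will use
spec:
  selector:
    app: nginx          # matches pods by label, never by IP
  ports:
    - port: 80          # port the Service listens on
      targetPort: 80    # port the pods actually serve
```

The selector is the departure board: pods can come and go, and the Service keeps pointing at whichever ones currently match the label.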
Types of Services
There are three main types of Services in Kubernetes. Here’s the quick lowdown:
| Service Type | What It Means | When to Use It |
| --- | --- | --- |
| ClusterIP | Only accessible inside the cluster | App-to-app communication |
| NodePort | Exposes app on a static port on all nodes | Local dev, demos |
| LoadBalancer | Gets a public IP from your cloud provider | Internet-facing production apps |
Let’s see how we deploy these services:
First, expose your deployment with a NodePort:
kubectl expose deployment nginx-deployment --type=NodePort --port=80
… and find out which port Kubernetes picked:
kubectl get service nginx-deployment
With the command above, you will see your Cluster-IP as well.
Then visit your app:
http://localhost:<NodePort>
(Replace <NodePort> with the actual number you see.) We kind of did that on the first day when we deployed the nginx server.
If you are on a cloud provider, you can use --type=LoadBalancer instead and get a real public IP address automagically.
Lab: Expose Your App
I know we will be repeating what we just did, but we need to have the Lab section. What’s a blog article without a Lab section?
Step 1: Expose your deployment
kubectl expose deployment nginx-deployment --type=NodePort --port=80 -n dev-space
Step 2: Get your NodePort
kubectl get service nginx-deployment -n dev-space
You’ll see something like:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-deployment NodePort 10.96.182.22 <none> 80:30547/TCP 25s
Now your app is live at:
http://localhost:30547
(Your port number will probably be different.)
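By the way, if you prefer YAML over kubectl expose, the same Service can be written declaratively. This is a sketch of roughly what that one-liner generates, assuming your deployment’s pods are labeled app=nginx-deployment (check yours with kubectl get pods --show-labels -n dev-space):

```yaml
# Roughly what `kubectl expose ... --type=NodePort` creates.
apiVersion: v1
kind: Service
metadata:
  name: nginx-deployment
  namespace: dev-space
spec:
  type: NodePort
  selector:
    app: nginx-deployment   # assumes the deployment's pods use this label
  ports:
    - port: 80              # Service port inside the cluster
      targetPort: 80        # container port on the pods
      nodePort: 30547       # optional; omit and Kubernetes picks one (30000-32767)
```

Apply it with kubectl apply -f service.yaml. Pinning nodePort is handy for demos; leaving it out avoids port clashes.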
So, in a nutshell, this is how Kubernetes tells you where your pods are hanging out — what IP addresses and ports they’re using. You always know where they are, no matter how much they shuffle around. Magical, isn’t it?
Bonus Lab: DNS-Based Access Instead of IP:Port
The section above is magical, but let’s face it: when you have many pods going, chasing down the IP addresses is kind of a pain. The good news is, Kubernetes thought of that for you too.
You see, inside the cluster, Kubernetes lets apps talk to each other by service names instead of chasing down pod IPs.
You don’t need to memorize or even care about what IP address your pod currently has. Just call it by its service name, like:
http://nginx-deployment.dev-space.svc.cluster.local
Let’s take a step back and see how this works. Remember how we looked at our service and got something like this back?
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx-deployment NodePort 10.96.182.22 <none> 80:30547/TCP 25s
Right. Now, by Kubernetes convention the full service name format is:
<service-name>.<namespace>.svc.cluster.local
In our case above, the service name would be:
nginx-deployment.dev-space.svc.cluster.local
hence:
http://nginx-deployment.dev-space.svc.cluster.local
Even if you exec into a different pod, you can still reach this one using the DNS name.
Come on, if that’s not convenient I don’t know what is.
Bonus Lab 2: App-to-App Communication (Service-to-Service)
Let's take it one step further and create a second tiny app which will call our nginx app using the service DNS name.
This shows real-world service-to-service magic inside a cluster!
Step 1: Create a "client" pod
We’ll spin up a quick pod running curl so we can simulate another app.
kubectl run curl-client -n dev-space --image=curlimages/curl -it --restart=Never -- /bin/sh
This command will:
- Create a pod called curl-client in dev-space
- Use the super-lightweight curlimages/curl container
- Start an interactive shell (/bin/sh)
You should now be inside the curl-client pod.
Step 2: Call the nginx-deployment service via DNS
Inside the pod shell, run:
curl http://nginx-deployment.dev-space.svc.cluster.local
A quick note: inside the curl-client pod, you might see a ~$ prompt instead of the usual #. Don’t worry; it just means you’re in a lightweight shell (ash instead of bash). Your curl command will work exactly the same.
If everything is set up correctly, you should get the basic nginx welcome page with an HTTP 200 OK. Here is what I got back:
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
And we did all that without ever knowing the pod IPs. It keeps working even if the nginx pods restart, get new IPs, and so on.
Step 3: Exit the shell
exit
Done and dusted.
If you want to clean up the client pod later:
kubectl delete pod curl-client -n dev-space
You now officially witnessed service discovery in Kubernetes. One app calling another — automatically, reliably, and beautifully.
And just like that, we proved it: No matter how many times your pods shuffle around, the service DNS stays rock solid. You’re not chasing IP addresses anymore — Kubernetes handles the crazy moving parts behind the scenes.
We just built a reliable in-cluster communication system without even breaking a sweat.
Pretty cool, huh?
Oh, a little disclaimer: no IP addresses were harmed in the making of this demo.
Bonus Playground
Now that you know how pods and services talk inside Kubernetes, you might be thinking:
"Hey, it’d be kinda cool if I had my own mini cluster to play with..."
Well, I have good news and bad news. The good news is that you can. The bad news is you need to read this section. OK, maybe not so bad. It is quick.
Well, we have K8s. The powers that be in the Kubernetes world apparently do not like coming up with names, so there is also k3s (a lightweight Kubernetes distribution) and k3d, a tool that runs k3s inside Docker. k3d is what we will be using for our small private cluster.
k3d is like Kubernetes... but shrunk down and running inside Docker containers. Fast, painless, no PhD in DevOps needed.
Here’s how you spin up your very own cluster:
Step 1: Install k3d
If you have Docker running, install k3d with:
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
Step 2: Create your cluster
Launch a cluster with port mapping:
k3d cluster create mycluster -p "8080:80@loadbalancer"
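Side note: k3d also accepts a declarative config file, so you don’t have to remember the flags. Here is a minimal sketch of the equivalent config (the schema version may differ with your k3d release, so treat this as an illustration):

```yaml
# mycluster.yaml: roughly the declarative equivalent of the command above.
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: mycluster
servers: 1
ports:
  - port: 8080:80        # map host port 8080 to port 80 on...
    nodeFilters:
      - loadbalancer     # ...the cluster's load balancer
```

You would then run: k3d cluster create --config mycluster.yaml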
Step 3: Deploy something
Now that you have a real cluster running locally, you can use your new Kubernetes superpowers:
kubectl create deployment mynginx --image=nginx
kubectl expose deployment mynginx --port=80 --type=LoadBalancer
kubectl get svc
curl localhost:8080
If you see the nginx welcome page, congrats — you’re now flying your own cluster like a pro.
Or… you got a 404. So what happened?
🐞 Why You Got a 404
Here’s the thing: the default nginx image actually does serve a welcome page (we saw it from inside the cluster earlier), so a 404 here usually means your request never reached nginx at all. In k3d, port 80 on the load balancer is typically claimed by the bundled Traefik ingress controller, and Traefik answers with a 404 when it has no route configured for your service.
Here are your options:
Option 1: Use httpd instead
kubectl delete svc mynginx
kubectl delete deployment mynginx
kubectl create deployment mynginx --image=httpd
kubectl expose deployment mynginx --port=80 --type=LoadBalancer
Then:
curl localhost:8080
You’ll get:
<html><body><h1>It works!</h1></body></html>
Option 2: Add an index manually
kubectl exec -it deploy/mynginx -- /bin/sh
echo "hello world" > /usr/share/nginx/html/index.html
exit
curl localhost:8080
Option 3: Test inside the cluster using DNS
kubectl run curl-client --image=curlimages/curl -it --rm -- /bin/sh
curl http://mynginx.default.svc.cluster.local
This confirms everything’s wired up internally — and you didn’t even need to chase ports.
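There is a fourth option worth sketching: since k3d’s bundled Traefik ingress controller is usually the thing answering on the load balancer’s port 80, you can give it an explicit route to your service with an Ingress. This is a sketch that assumes the default Traefik setup is running:

```yaml
# Routes all HTTP paths on the cluster's ingress to the mynginx service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mynginx
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: mynginx
                port:
                  number: 80
```

Apply it with kubectl apply -f ingress.yaml, and curl localhost:8080 should then reach nginx through Traefik.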
Why Bother?
Yeah, things can go spectacularly sideways in Kubernetes, and it can get complex real quick.
Yet it is worth it, because having a personal Kubernetes playground means you can break things safely, deploy real apps, and skip both the cloud provider and the mortgage-sized cloud bills.
It’s the ultimate no-tears setup. Congrats, you now have your own private cloud. AWS, Azure, Google: watch your backs!
What’s Next
We’ve reached the gates. Apps are running. Configs are injected. Probes are checking health. Services are routing traffic and even talking to each other.
Tomorrow’s our final day! We’re going to pull everything together into a tiny project — configs, scaling, services, the whole deal — and show how everything we've learned fits into a real-world mini app.
We’ve built the airport. We have the planes, but they are kind of flying in circles. Now it’s time to land them.
Written by

TJ Gokken
TJ Gokken is an Enterprise AI/ML Integration Engineer with a passion for bridging the gap between technology and practical application. Specializing in .NET frameworks and machine learning, TJ helps software teams operationalize AI to drive innovation and efficiency. With over two decades of experience in programming and technology integration, he is a trusted advisor and thought leader in the AI community.