🚢 If You Don’t Get Pods, You’ll Never Get Kubernetes—Here’s Why

Vijay Belwal
5 min read

When I first heard about Kubernetes, I thought I just needed to learn how to deploy containers. Simple, right?

But then came the term Pod — and I assumed it was just Kubernetes-speak for a container. I couldn’t have been more wrong.

Understanding Pods was the first time Kubernetes truly clicked for me. It’s like realizing the steering wheel isn’t the car — but without it, you’re going nowhere.

Let’s break it down from the ground up.


Why Kubernetes Doesn’t Run Containers Directly

A container is just a running process, isolated with its own filesystem and constrained by CPU and memory limits.

But Kubernetes is much more than a process launcher. It needs to:

  • Monitor if your app is alive

  • Restart it if it crashes

  • Place it on the best machine

  • Allow it to talk to other apps

  • Scale it

To do all this reliably, Kubernetes doesn’t run containers directly. Instead, it wraps them inside a more manageable unit — a Pod.

Think of it like this: You don’t ship furniture as loose parts on a truck. You pack it properly in boxes. The box is the Pod — the container is the thing inside.


So What Is a Pod?

A Pod is the smallest deployable object in Kubernetes. It wraps one or more containers together with the shared resources they need to run: storage volumes, a network identity (each Pod gets its own IP), and instructions for how to run the containers.

It represents a single instance of a running application.

The word “Pod” was inspired by a pod of whales, linking nicely with Docker’s whale logo and Kubernetes’ maritime theme.

Most Pods contain just one container — but they can contain more. We’ll get to that in a bit.


Why Not Just Add More Containers to the Same Pod for Scaling?

Let’s say you want to scale your app. Wouldn’t it be easier to add more containers inside the same Pod?

At first glance, yes. But that approach breaks down when viewed through first principles:

  1. Resource Contention: One greedy container can hog CPU or memory, starving the others. There's no strong boundary between containers in the same Pod.

  2. Failure Blast Radius: If one container keeps crashing or misbehaving, the whole Pod is affected. The Pod stops being Ready and gets pulled out of service, even if the other containers were fine.

  3. Scheduling Constraints: A Pod must be scheduled on a single node. More containers = more resources required = harder to schedule.

Because of this, Kubernetes scales by spinning up more identical Pods, not by packing more containers into one.


Scaling in Real Life

Let’s say you’re running an e-commerce app and traffic suddenly spikes during a sale.

  • To scale up: Kubernetes creates more Pods running your app and distributes them across available nodes.

  • To scale down: It terminates some running Pods.

Think of it like this: You’re a bakery. Orders shoot up? You open more ovens (Pods). Orders drop? You shut a few. You don’t cram more cakes into the same oven.
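In practice, you rarely create those extra Pods by hand. A higher-level object called a Deployment (more on that in a future post) manages the replicas for you. As a quick sketch, assuming you already have a Deployment named myapp-deployment:

kubectl scale deployment myapp-deployment --replicas=5   # sale traffic: open more ovens
kubectl scale deployment myapp-deployment --replicas=2   # quiet day: close a few

Kubernetes then creates or terminates Pods until the actual count matches what you asked for.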


Can a Pod Have Multiple Containers?

Yes, but not for scaling.

Multiple containers inside the same Pod are used for tight coupling — where containers need to work closely and share context.

Some common examples:

  • A sidecar container that handles logging or monitoring

  • A proxy container for networking

  • A file-sync service watching logs or configs

These containers:

  • Share the same network namespace — they talk via localhost

  • Share volume mounts — they can read/write the same files

Without this, you'd need complex setup between containers that are meant to work together. Pods make this relationship seamless.
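Here's a minimal sketch of a two-container Pod sharing a volume. The names, images, and paths are illustrative (a toy log-tailing sidecar), not a production setup:

apiVersion: v1
kind: Pod
metadata:
  name: myapp-with-sidecar       # hypothetical name
spec:
  volumes:
    - name: shared-logs          # emptyDir lives and dies with the Pod
      emptyDir: {}
  containers:
    - name: web                  # main app writes logs into the shared volume
      image: nginx
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx
    - name: log-tailer           # sidecar reads those same files via its own mount
      image: busybox
      command: ["sh", "-c", "tail -F /logs/access.log"]
      volumeMounts:
        - name: shared-logs
          mountPath: /logs

Both containers also share one network namespace, so if the sidecar exposed a port, the web container could reach it at localhost.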


Defining a Pod in YAML

Here’s a minimal pod-definition.yml file:

apiVersion: v1           # Schema version for the API
kind: Pod                # We're defining a Pod object
metadata:
  name: myapp-name       # Unique name for the Pod
  labels:
    app: abc-name        # Custom labels (can be anything), used for filtering
spec:
  containers:            # List of containers in the Pod
    - name: nginx-container
      image: nginx       # Container image to use

Why apiVersion: v1? This tells Kubernetes which version of the API to use when interpreting this file. Core objects like Pods live in the v1 API group; other objects like Deployments use apps/v1.

Why is containers: a list? Because a Pod can have multiple containers. Even if you have just one, it still needs to be defined as a list using a dash (-).
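And if you're ever unsure what goes where, kubectl can print the schema documentation for any field straight from the API server:

kubectl explain pod                  # top-level Pod fields
kubectl explain pod.spec.containers  # docs for the containers list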


Basic Pod Commands

kubectl create -f pod-definition.yml    # Create a Pod from YAML
kubectl get pods                        # See running Pods
kubectl describe pod myapp-name         # Inspect details of a Pod
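A few more commands you'll reach for constantly once the Pod exists (using the myapp-name Pod from the YAML above):

kubectl logs myapp-name              # print the container's logs (add -f to follow)
kubectl exec -it myapp-name -- sh    # open a shell inside the container
kubectl delete pod myapp-name        # tear the Pod down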

What Are Labels and Why Do They Matter?

In the YAML above, you saw this:

labels:
  app: abc-name

Labels are arbitrary key-value pairs that you can attach to any Kubernetes object.

Why is this powerful?

Let’s say you want to:

  • Deploy version v1 and v2 of an app simultaneously

  • Route traffic only to Pods with env=prod

  • Delete all Pods with tier=backend

You can filter, query, and act on objects using labels.
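On the command line, that filtering happens through the -l (label selector) flag. A quick sketch, assuming Pods carrying these labels exist:

kubectl get pods -l app=abc-name     # only Pods with this label
kubectl get pods -l env=prod         # only production Pods
kubectl delete pods -l tier=backend  # act on a whole group at once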

Important:

  • metadata has a fixed structure (e.g., name, namespace)

  • labels are flexible — you can define any custom key-value pairs


So What’s the Big Picture?

  • A Pod is a Kubernetes wrapper around containers

  • It's the smallest unit of deployment

  • You scale by creating more Pods — not more containers in one Pod

  • Multiple containers in a Pod are reserved for cases where they work together tightly

  • Labels are your best friend for filtering, organizing, and operating on Kubernetes objects


Final Thought

If you don’t get Pods, Kubernetes will always feel like a mystery. But once you understand this simple unit — how it encapsulates containers, abstracts deployment, and supports scaling — the rest of Kubernetes starts to feel like natural extensions.

Just like atoms form molecules, Pods form Deployments, Services, and entire applications.

Start here. Build up. And you’ll soon go from YAML copy-paster to Kubernetes whisperer. 🧘‍♂️🐳

