Kubernetes From Zero to Hero – Part 4: Understanding Pod Lifecycle and Deep-Diving Into Kubectl

Manas Upadhyay

What We’ve Covered So Far

In the last few blogs, we’ve taken major strides:

  • Part 1: Understood what Kubernetes is and why it exists using real-world analogies.

  • Part 2: Explored the Kubernetes Architecture, both control plane and worker nodes.

  • Part 3: Set up Minikube on your local system and deployed your first pod using YAML.

Now that you have a working cluster and you’ve seen your first pod running, it’s time to go deeper.

In this blog, we’ll learn:

  • How a Pod’s lifecycle works from creation to deletion

  • How to use kubectl to inspect, debug, and manage pods

  • Common pod states and what they mean

  • Real-time debugging with logs, exec, and describe


What is a Pod, Again?

A Pod is the smallest deployable unit in Kubernetes.

It wraps one or more containers with:

  • Shared network namespace

  • Shared storage volumes

  • One IP address

    In most use cases, a Pod = One Container.
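To make the shared network and storage concrete, here is a minimal sketch of a two-container Pod (all names here are illustrative) sharing an emptyDir volume; the two containers also share localhost:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-pod           # illustrative name
spec:
  volumes:
  - name: scratch
    emptyDir: {}             # scratch volume shared by both containers
  containers:
  - name: writer
    image: busybox
    command: ["/bin/sh", "-c", "echo hello > /data/msg && sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /data
  - name: reader
    image: busybox
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: scratch
      mountPath: /data
```

Once the writer has run, `kubectl exec -it shared-pod -c reader -- cat /data/msg` should print the message the writer wrote, because both containers mount the same volume.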


Pod Lifecycle Phases

When you apply a manifest or create a Pod, it goes through a defined series of phases:

1. Pending

  • Pod is accepted by the cluster but not yet running.

  • Reasons:

    • Scheduler hasn’t assigned it to a node

    • Container images still downloading

2. Running

  • Pod is scheduled and containers are executing.

  • At least one container is running, or is in the process of starting or restarting.

  • Note: readiness probes (if defined) affect the Pod’s Ready condition, not the Running phase itself.

3. Succeeded

  • All containers terminated successfully (exit code 0).

  • Seen in Jobs, not typical in long-running pods.

4. Failed

  • One or more containers exited with non-zero code.

5. CrashLoopBackOff

  • Strictly speaking, not one of the official phases, but a status you will see all the time: a container crashes, the kubelet restarts it, and it crashes again in a loop.

  • Common causes: missing environment variables, failed database connections, a wrong start command, etc.
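The “BackOff” part refers to the kubelet’s restart delay: it starts around 10 seconds, doubles after each crash, is capped at five minutes, and resets once the container runs cleanly for a while. A tiny shell loop (purely illustrative, not kubelet code) shows the doubling:

```shell
delay=10                              # kubelet starts with a ~10-second delay
for crash in 1 2 3 4 5 6; do
  echo "crash $crash: wait ${delay}s before next restart"
  delay=$((delay * 2))                # the delay doubles after every crash...
  [ "$delay" -gt 300 ] && delay=300   # ...capped at 5 minutes
done
```

So a pod stuck in CrashLoopBackOff is not restarting constantly; it spends most of its time waiting out this growing delay.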


Using kubectl to Explore Pod Lifecycle

View All Pods

kubectl get pods

With more details (node name, pod IP, and more):

kubectl get pods -o wide

Describe a Pod

kubectl describe pod <pod-name>
  • Shows node info, events, reasons for failure/success, environment, etc.
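The phase itself lives in the Pod’s status, so you can also read it directly with a jsonpath query (this prints a single word such as Running or Pending):

```shell
kubectl get pod <pod-name> -o jsonpath='{.status.phase}'
```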

Check Pod Logs

kubectl logs <pod-name>

For multi-container pods:

kubectl logs <pod-name> -c <container-name>

Exec into a Running Pod

kubectl exec -it <pod-name> -- /bin/bash

This lets you debug live, check file system, curl endpoints, run tests, etc.
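If the image does not ship bash (busybox and many slim images don’t), fall back to /bin/sh. Once inside, a few typical checks — the paths and port here are illustrative, and which tools exist depends on the image:

```shell
kubectl exec -it <pod-name> -- /bin/sh    # fallback when bash is absent

# then, inside the container:
env                                       # inspect environment variables
ls /data                                  # check mounted files (path is illustrative)
wget -qO- http://localhost:8080/healthz   # probe the app locally (port/path are illustrative)
```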


Restart a Pod (delete and recreate)

kubectl delete pod <pod-name>

If the Pod was created by a Deployment, its ReplicaSet will automatically spin up a replacement.


Real-Life Example: Simulate a CrashLoop

Let’s run a broken pod that always crashes:

crashy-pod.yaml

apiVersion: v1
kind: Pod
metadata:
  name: crash-loop
spec:
  containers:
  - name: fail-container
    image: busybox
    command: ["/bin/sh", "-c", "exit 1"]

Apply:

kubectl apply -f crashy-pod.yaml

Check:

kubectl get pods

Output:

NAME         READY   STATUS             RESTARTS     AGE
crash-loop   0/1     CrashLoopBackOff   3 (5s ago)   1m

Inspect:

kubectl describe pod crash-loop
kubectl logs crash-loop

If the container has already been restarted, add --previous to see the logs from the last crashed run:

kubectl logs crash-loop --previous

Delete:

kubectl delete pod crash-loop
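To see the Succeeded phase instead of CrashLoopBackOff, you could flip two things in the same manifest (an illustrative sketch): exit with code 0, and set restartPolicy: Never so the kubelet doesn’t restart the finished container — the default, Always, is what fuels the crash loop above.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: one-shot            # illustrative name
spec:
  restartPolicy: Never      # default is Always, which keeps restarting the container
  containers:
  - name: ok-container
    image: busybox
    command: ["/bin/sh", "-c", "exit 0"]
```

After applying this, `kubectl get pods` should show the pod with STATUS Completed and phase Succeeded.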

What’s Next?

In the next blog, we’ll cover:

  • Deploying your app using a Deployment and ReplicaSet

  • Scaling up/down your app

  • Updating your app without downtime using rolling updates


Written by

Manas Upadhyay

I am an experienced AWS Cloud and DevOps Architect with a strong background in designing, deploying, and managing cloud infrastructure using modern automation tools and cloud-native technologies.