Your First Kubernetes Deployment: Running an Application


Introduction
Welcome back! In our last post, we successfully set up a local Kubernetes cluster using Minikube or Kind. Our playground is ready, and we've tested our connection with kubectl. Now, it's time for the moment we've been building towards: running our first application on Kubernetes.
We've learned that Pods are the smallest unit that can run our containers. But in the real world, we rarely create Pods by themselves. Why? Because they're mortal! If a Pod dies or the node it's on fails, it's gone for good. We need a higher-level controller to manage the lifecycle of our Pods for us.
Enter the Deployment.
A Deployment is a powerful Kubernetes object that provides declarative updates for Pods. You tell the Deployment your desired state—for example, "I want three replicas of my Nginx web server running"—and the Deployment works tirelessly to ensure that state is always maintained.
In this guide, you will:
Understand why Deployments are essential for managing applications.
Write your first Kubernetes YAML manifest file.
Use kubectl to create a Deployment.
Inspect the Deployment and see the Pods it creates.
Learn how to expose and access your running application.
From Pods to Power: Why Deployments?
A Deployment takes care of several critical tasks that you would otherwise have to handle manually:
Self-Healing: If a Pod crashes or gets deleted, the Deployment's controller notices and automatically creates a new one to replace it.
Scaling: Need to handle more traffic? You can tell the Deployment to scale up the number of Pods with a single command.
Controlled Rollouts: When you want to update your application to a new version, the Deployment can perform a seamless "rolling update," creating new Pods while gracefully terminating old ones, ensuring zero downtime for your users. We'll cover this in our next post!
Under the hood, a Deployment manages a ReplicaSet, which is another object whose job is simply to ensure a specified number of replica Pods are running. You'll rarely interact with ReplicaSets directly; think of them as an implementation detail managed by the Deployment.
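As a quick taste of the scaling behavior described above, here is a sketch of the commands involved (it assumes a Deployment named nginx-deployment, which we create below):

```shell
# Tell the Deployment to run 5 replicas; the controller creates the extra Pods
kubectl scale deployment nginx-deployment --replicas=5

# Scale back down; the controller terminates the surplus Pods
kubectl scale deployment nginx-deployment --replicas=2
```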
Creating Your First Deployment: The YAML Manifest
In Kubernetes, we define the desired state of our objects using YAML files, often called "manifests." Let's create our first one.
Create a new file named nginx-deployment.yaml and add the following content:
# nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.23
        ports:
        - containerPort: 80
Let's break this down piece by piece:
apiVersion: apps/v1: This tells Kubernetes which version of the API to use to create this object. Deployments live in the apps/v1 group.
kind: Deployment: The type of object we want to create.
metadata: This is data about the object itself, like its name. We're naming our Deployment nginx-deployment.
spec: This is the most important part—it's where we define our desired state.
replicas: 2: We are telling the Deployment that we want exactly two identical Pods running at all times.
selector: This section defines how the Deployment finds the Pods it is supposed to manage.
matchLabels: It looks for Pods with labels that match app: nginx. This is the crucial link.
template: This is the blueprint for the Pods that the Deployment will create. It has its own metadata and spec.
metadata.labels: We apply a label of app: nginx to our Pods. This is how the selector above can find them!
spec.containers: Here we define the list of containers to run inside each Pod (in our case, just one).
name: nginx: A name for our container.
image: nginx:1.23: The Docker container image to pull and run.
ports.containerPort: 80: Informs Kubernetes that this container listens on port 80. This field is primarily informational and documents the port for other people and tools.
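Before applying a manifest, it can be useful to validate it first. A quick sketch using kubectl's dry-run flags:

```shell
# Validate the manifest client-side without contacting the cluster
kubectl apply -f nginx-deployment.yaml --dry-run=client

# Ask the API server to validate and render the object without persisting it
kubectl apply -f nginx-deployment.yaml --dry-run=server -o yaml
```

The client-side dry run catches YAML syntax and schema mistakes early, before anything touches the cluster.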
Applying the Manifest
With your local cluster (Minikube or Kind) running, open your terminal in the same directory where you saved nginx-deployment.yaml.
Now, use the kubectl apply command. This is the primary command for creating or updating Kubernetes objects from a manifest file.
kubectl apply -f nginx-deployment.yaml
# Expected output:
# deployment.apps/nginx-deployment created
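Note that kubectl apply is declarative and idempotent: running the same command again is safe and simply reports that nothing changed.

```shell
# Running apply a second time makes no changes
kubectl apply -f nginx-deployment.yaml
# deployment.apps/nginx-deployment unchanged
```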
Inspecting Your Work
You've told Kubernetes your desired state. Now, let's ask Kubernetes what the current state is.
Check the Deployment:
kubectl get deployments
# Output will look like this:
# NAME               READY   UP-TO-DATE   AVAILABLE   AGE
# nginx-deployment   2/2     2            2           15s
This shows our Deployment exists, and it has 2 out of 2 desired Pods READY. Success!
Check the ReplicaSet:
See the ReplicaSet the Deployment created to manage the Pods.
kubectl get replicasets
# or 'rs' for short
kubectl get rs
Check the Pods:
This is the most important one. Let's see the Pods that were created by the ReplicaSet, which is managed by the Deployment.
kubectl get pods
# Output will look like this (pod names will be random):
# NAME                                READY   STATUS    RESTARTS   AGE
# nginx-deployment-5754944d6c-7qg8g   1/1     Running   0          45s
# nginx-deployment-5754944d6c-f2k9x   1/1     Running   0          45s
You can see two Pods running, both with names prefixed by the Deployment name (nginx-deployment-). Kubernetes is now actively ensuring two of these Pods are always running. To test this, try deleting one:
# Replace with one of your actual pod names
kubectl delete pod nginx-deployment-5754944d6c-7qg8g
# Now, immediately check the pods again
kubectl get pods
You'll see that a new Pod is already being created to replace the one you deleted. This is self-healing in action!
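To watch the self-healing happen live, you can keep a watch running in one terminal while deleting a Pod in another. A sketch:

```shell
# Stream Pod changes as they happen (press Ctrl-C to stop)
kubectl get pods --watch

# The Deployment's event history also records the replacement
kubectl describe deployment nginx-deployment
```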
Exposing Your Application: A Quick Preview of Services
We have two Nginx Pods running, but they only have internal cluster IP addresses. We can't access them from our web browser yet. To bridge the gap from inside the cluster to the outside world, we need a Service.
Services are a deep and critical topic in Kubernetes networking, providing stable IP addresses, load balancing, and service discovery. We have an entire module dedicated to networking coming up later in this series where we'll explore them in detail (including ClusterIP, LoadBalancer, and Ingress).
For now, our goal is just to see our Nginx page. We'll use a simple, direct method to quickly expose our application for testing: creating a Service of type NodePort. This type opens a specific port on every Node in our cluster, mapping it to our application's port.
Think of this as a temporary access hatch, not the main entrance.
We can create this Service with a simple kubectl expose command:
kubectl expose deployment nginx-deployment --type=NodePort --port=80
# Expected output:
# service/nginx-deployment exposed
This command tells Kubernetes to create a new Service that finds the Pods managed by nginx-deployment and exposes them.
Now, check the service you created:
kubectl get service nginx-deployment
# or 'svc' for short
kubectl get svc
In the PORT(S) column you'll see an output like 80:31234/TCP. This tells you the Service listens on port 80 inside the cluster and is also exposed on NodePort 31234 on every node (your number will differ; NodePorts are assigned from the 30000-32767 range by default).
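If you need just the assigned NodePort number (for scripting, or for the Kind steps below), one way is a JSONPath query. A sketch:

```shell
# Print only the NodePort assigned to the Service
kubectl get service nginx-deployment -o jsonpath='{.spec.ports[0].nodePort}'
```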
Accessing Nginx
If you are using Minikube: Minikube has a handy command to get the URL.
minikube service nginx-deployment
This will automatically open the Nginx welcome page in your browser!
If you are using Kind: You'll need to do a little port-forwarding from your local machine to the cluster.
First, get the NodePort from the get svc command (e.g., 31234).
Then, find the IP address of your Kind control-plane container: docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kind-control-plane. Let's say it's 172.18.0.2.
Now you can access it at http://<container-ip>:<node-port>, for example http://172.18.0.2:31234. (Note that reaching the Docker network this way generally only works on Linux hosts.)
(A simpler way for Kind): You can also use kubectl port-forward directly to the service:
# This will forward your local port 8080 to the service's port 80
kubectl port-forward service/nginx-deployment 8080:80
Now, open your browser and go to http://localhost:8080. You should see the Nginx welcome page!
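You can also verify from the terminal; a quick sketch with curl:

```shell
# Fetch the page and show the first few lines of HTML
curl -s http://localhost:8080 | head -n 5
# The response should contain "Welcome to nginx!"
```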
Cleaning Up
To delete the resources you created, delete the Deployment; this automatically removes the associated ReplicaSet and Pods. The Service was created separately, so delete it as well.
kubectl delete deployment nginx-deployment
kubectl delete service nginx-deployment
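To confirm everything is gone, list the resource types we touched. A sketch:

```shell
# List everything we created in this post
kubectl get deployments,replicasets,pods,services
# Only the built-in 'kubernetes' Service should remain in the default namespace
```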
Conclusion
Fantastic work! You have officially deployed, exposed, and accessed your first containerized application on Kubernetes. You've gone from an empty cluster to a running, self-healing, and scalable application managed by a Deployment.
Most importantly, you've experienced the fundamental rhythm of working with Kubernetes: writing a YAML manifest, using kubectl apply, and inspecting the results.
We also had a brief preview of Services, using a NodePort as a quick way to access our application. Don't worry if that felt a bit like magic; we will demystify every aspect of Kubernetes networking in a future module. For now, you've successfully achieved the main goal: getting an application up and running.
What's Next?
Our application is running, but what happens when we need to update it? In our next post, "Updating Applications Seamlessly: Rolling Updates and Rollbacks," we'll explore one of the most powerful features of Deployments: the ability to perform zero-downtime updates and quickly roll back if something goes wrong.
Written by Shrihari Bhat