Kubernetes Deployment

Nweke Henry

Introduction

Deploying applications in Kubernetes is a key skill for any DevOps engineer or cloud-native developer. In this article, we'll explore what a Kubernetes Deployment is, why it's important, and how to create one from scratch using practical examples.

What is a Kubernetes Deployment?

A Kubernetes Deployment is an object that provides declarative updates for Pods and the ReplicaSets that manage them. It ensures that a specified number of pod replicas are running at any given time: if a pod crashes or becomes unresponsive, the Deployment controller replaces it automatically.

Think of a Deployment as a manager that ensures your desired state (number of pods, container image versions, etc.) is always maintained.

Key features of Deployments:

  • Declarative application management

  • Rolling updates and rollbacks (see the example commands after this list)

  • Scaling capabilities

  • Self-healing mechanisms

  • Revision history tracking
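
For instance, rolling updates and rollbacks map directly onto kubectl commands. A minimal sketch, assuming a Deployment named nginx whose container is also named nginx (the same names used in the examples later in this article):

# Change the container image; Kubernetes replaces pods gradually
kubectl set image deployment/nginx nginx=nginx:1.25

# Watch the rollout and inspect the revision history
kubectl rollout status deployment/nginx
kubectl rollout history deployment/nginx

# Roll back to the previous revision if something goes wrong
kubectl rollout undo deployment/nginx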

Imperative Approach

  • What it is: You tell Kubernetes exactly what to do and when to do it, typically by executing commands via kubectl.

  • Analogy: Like giving step-by-step instructions.

  • Example Command:

    
      kubectl create deployment nginx --image=nginx
      kubectl scale deployment nginx --replicas=3
    
  • Characteristics:

    • Fast and direct.

    • No config files required.

    • Good for quick tests and changes.

    • Hard to track or reproduce; manual work isn’t versioned or reusable.

Declarative Approach

  • What it is: You describe the desired state of your system in configuration files (YAML/JSON), and Kubernetes makes the cluster match that state.

  • Analogy: Like writing down what you want your system to look like, and Kubernetes figures out how to make it so.

  • Example File (nginx-deployment.yaml):

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx
    
  • Characteristics:

    • Reproducible and version-controlled.

    • Scalable and maintainable.

    • Preferred for production environments.

    • Easier team collaboration using GitOps principles.
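
Once the manifest is saved, the declarative workflow is a single command, optionally preceded by a preview of what would change. A short sketch, assuming the file above is saved as nginx-deployment.yaml:

# Preview the differences between the file and the live cluster state
kubectl diff -f nginx-deployment.yaml

# Create or update the Deployment to match the file
kubectl apply -f nginx-deployment.yaml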

What is YAML?

YAML (YAML Ain't Markup Language) is a human-readable data serialization format commonly used for configuration files. It's widely adopted in DevOps tools, particularly in Kubernetes, Ansible, Docker Compose, and many others. These files tell Kubernetes what to create, how many replicas to run, what images to use, and how to expose them, using a declarative syntax.
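
As a quick, generic illustration of the syntax (not a Kubernetes manifest), YAML is built from key-value pairs, nested mappings, and lists, with indentation defining structure and # starting a comment:

# Scalars: simple key-value pairs
name: nginx
replicas: 3

# A nested mapping, expressed through indentation
metadata:
  labels:
    app: nginx

# A list: each item starts with a dash
ports:
  - 80
  - 443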

📘 Example: Simple Deployment YAML

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

Basic Kubernetes Commands for Deployments

1. Set Kubernetes context (if using multiple clusters):

kubectl config use-context <context-name>

Explanation: Selects which cluster you’re working with (if you have more than one configured).
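
Before switching, it helps to see which contexts are configured and which one is active. Both commands below are standard kubectl:

# List all configured contexts; the active one is marked with *
kubectl config get-contexts

# Print only the name of the current context
kubectl config current-context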

2. View cluster nodes (confirm cluster is up):

kubectl get nodes

Explanation: Lists the nodes in the cluster. Confirms the cluster is healthy and accessible.

3. Create namespace (optional, but good practice):

kubectl create namespace <namespace-name>

Explanation: Creates a separate environment within the cluster to isolate resources.
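
Most kubectl commands accept an -n/--namespace flag, so once a namespace exists you can target it explicitly. A small sketch using a hypothetical namespace called demo:

# Create an isolated namespace for this walkthrough
kubectl create namespace demo

# List resources in that namespace only
kubectl get pods -n demo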

4. Apply deployment (run your app):

kubectl apply -f deployment.yaml

Explanation: Deploys your application using the specs defined in the YAML file.
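
If you want to check the manifest before touching the cluster, kubectl can do a client-side dry run; you can also apply into a specific namespace. A sketch, reusing the hypothetical demo namespace from step 3:

# Validate the manifest locally without creating anything
kubectl apply -f deployment.yaml --dry-run=client

# Apply into a specific namespace instead of "default"
kubectl apply -f deployment.yaml -n demo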

5. Check deployment status:

kubectl get deployments

Explanation: Shows active deployments and their health status.
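
kubectl get deployments shows a point-in-time view; to block until the rollout has actually finished (or report a failure), you can watch it. A sketch using the nginx-deployment from the YAML example above:

# Wait until all replicas are updated and available
kubectl rollout status deployment/nginx-deployment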

6. Check pods (verify containers are running):

kubectl get pods

Explanation: Lists the pods (containers) running in the cluster.
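
In a busy cluster you may want to see only the pods owned by this Deployment. A sketch that filters by the app=nginx label from the example manifest and adds node and IP columns:

# List only pods labeled app=nginx, with extra detail
kubectl get pods -l app=nginx -o wide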

7. View pod logs (troubleshoot if needed):

kubectl logs <pod-name>

Explanation: Displays logs from the pod to help debug errors.
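
Two variations are worth knowing: following logs live, and reading logs from a container's previous run after a crash or restart:

# Stream logs as they are written
kubectl logs -f <pod-name>

# Show logs from the previous container instance (useful after a crash loop)
kubectl logs <pod-name> --previous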

8. Expose deployment (create service to access app):

kubectl expose deployment <deployment-name> --type=NodePort --port=80

Explanation: Creates a Service so your app can be reached from outside the cluster; --type can be NodePort or LoadBalancer depending on your environment.
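
The declarative equivalent of kubectl expose is a Service manifest kept in version control alongside the Deployment. A minimal sketch, assuming a hypothetical file nginx-service.yaml that matches the app=nginx label from the Deployment example above:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80

Apply it the same way as the Deployment: kubectl apply -f nginx-service.yaml.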

9. View services (get IP/Port to access app):

kubectl get svc

Explanation: Lists services and gives you the IP:Port to access your application.
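
For a NodePort Service, the allocated port appears in the PORT(S) column (for example 80:30080/TCP), and the app is then reachable on that port of any node. A small sketch, assuming the nginx-service from the previous step:

# Print just the allocated NodePort
kubectl get svc nginx-service -o jsonpath='{.spec.ports[0].nodePort}'

# Reach the application through any node's IP address
curl http://<node-ip>:<node-port>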

10. Scale deployment (increase/decrease replicas):

kubectl scale deployment <deployment-name> --replicas=3

Explanation: Changes the number of pods running for your deployment.
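
The declarative alternative is to change the replicas field in the YAML file and re-apply it. If a metrics server is installed in the cluster, you can also let Kubernetes scale for you; a sketch using the nginx-deployment from earlier:

# Autoscale between 2 and 5 replicas, targeting 80% average CPU utilization
kubectl autoscale deployment nginx-deployment --min=2 --max=5 --cpu-percent=80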

11. Describe resources (for detailed debugging info):

kubectl describe pod <pod-name>

Explanation: Provides in-depth information about the pod, including events and status.

12. Delete resources (when done):

kubectl delete -f deployment.yaml

Explanation: Deletes the deployment and associated pods.

Or clean up the namespace (this removes everything inside it):

kubectl delete namespace <namespace-name>

Conclusion

In this article, I walked through the essentials of deploying applications on Kubernetes using YAML manifests. We explored how to create a Deployment to manage application replicas and ensure high availability, and how to expose that Deployment using a Service. By understanding and applying these basic concepts, you can begin orchestrating containerized workloads efficiently and reliably in a Kubernetes cluster. Whether you're deploying a simple nginx server or a production-grade microservice, this foundational knowledge sets the stage for scaling, monitoring, and managing cloud-native applications like a pro.
