How to Deploy Todo Apps with Kubernetes: A Step-by-Step Guide

Neeraj Gupta

This blog shows how to deploy the todo application on Kubernetes.

Prerequisites

We have already covered deploying this Todo application with Docker Compose in an earlier blog, so refer to that blog for the application details.
Link: https://minex.hashnode.dev/simple-docker-compose-deployment-for-python-based-todo-applications

About Kubernetes deployment

Here, we have two workloads: one for the todo app and another for the database (Postgres). Since the database is a stateful application, a StatefulSet is normally preferred over a Deployment for databases.

However, we are not using a native Kubernetes object like a Deployment, StatefulSet, or ReplicaSet for the database; instead, we are using a custom controller to manage it. We are using CloudNativePG, a Kubernetes operator for the Postgres database.

Pros of using custom controllers like CloudNativePG

  • It provides cloud-native capabilities like self-healing, rolling updates, and scaling read-only replicas up and down, among others.

  • High availability

  • Disaster recovery

  • Monitoring, etc.

Install CloudNativePG controller

kubectl apply --server-side -f https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.23/releases/cnpg-1.23.1.yaml

Build Image

We have to build the todo app image using the Dockerfile.
docker build -t minex970/todo-app:v1 .
docker push minex970/todo-app:v1

Note: We are using a separate namespace for this application, and we use it for all the Kubernetes objects below.
kubectl create namespace todo-ns

  1. Create a Secret object to store the database credentials.

     apiVersion: v1
     kind: Secret
     metadata:
       name: postgresql-cred
       namespace: todo-ns
     type: Opaque
     data:
       # values are base64-encoded.
       username: "dG9kb191c2Vy"
       password: "dG9kb19wYXNzd29yZA=="
    

    Validate:
    kubectl get secret -n todo-ns
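
    The encoded values above can be reproduced with the base64 CLI; printf '%s' (no trailing newline) keeps the encoding exact:

    ```shell
    # Encode the credentials; printf '%s' avoids an accidental trailing newline
    printf '%s' 'todo_user' | base64       # → dG9kb191c2Vy
    printf '%s' 'todo_password' | base64   # → dG9kb19wYXNzd29yZA==

    # Decode to double-check what the Secret holds
    printf '%s' 'dG9kb191c2Vy' | base64 -d   # → todo_user
    ```

    Alternatively, `kubectl create secret generic postgresql-cred --from-literal=username=todo_user --from-literal=password=todo_password -n todo-ns` does the encoding for you.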

  2. Create the Config map object to store information like host, port, and database name.

     apiVersion: v1
     kind: ConfigMap
     metadata:
       name: postgresql-config
       namespace: todo-ns
     data:
       host: "postgresql-rw.todo-ns.svc.cluster.local"
       port: "5432"
       database_name: "todo_database"
    

    Validate:
    kubectl get cm -n todo-ns
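
    These three values are what the app needs to build its database connection string. As a sketch (the libpq-style URL format is standard, though how the app actually assembles it depends on its code):

    ```shell
    # Build a libpq-style connection URL from the ConfigMap values
    DB_HOST='postgresql-rw.todo-ns.svc.cluster.local'
    DB_PORT='5432'
    DB_NAME='todo_database'

    printf 'postgresql://%s:%s/%s\n' "$DB_HOST" "$DB_PORT" "$DB_NAME"
    # → postgresql://postgresql-rw.todo-ns.svc.cluster.local:5432/todo_database
    ```

    Note that the host points at the read-write service, so writes always reach the primary instance.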

  3. Create the Config map object to store init.sql content (script to set up the schema).

     apiVersion: v1
     kind: ConfigMap
     metadata:
       name: postgresql-init
       namespace: todo-ns
     data:
       init.sql: |
         DROP TABLE IF EXISTS goals;
         CREATE TABLE IF NOT EXISTS goals (
           id SERIAL PRIMARY KEY,
           goal_name VARCHAR(255) NOT NULL
         );
         GRANT ALL PRIVILEGES ON TABLE goals TO todo_user;
         GRANT USAGE, SELECT, UPDATE ON ALL SEQUENCES IN SCHEMA public TO todo_user;
    

    Validate:
    kubectl get cm -n todo-ns

  4. Create the Postgres pods with the Cluster object.
    Cluster is the custom resource (CRD) installed with CloudNativePG to manage the Postgres cluster in Kubernetes.

     apiVersion: postgresql.cnpg.io/v1
     kind: Cluster
     metadata:
       name: postgresql
       namespace: todo-ns
     spec:
       instances: 3
       storage:
         size: 1Gi
       bootstrap:
         initdb:
           database: todo_database
           owner: todo_user
           secret:
             name: postgresql-cred
           postInitApplicationSQLRefs:
             configMapRefs:
             - name: postgresql-init
               key: init.sql
    

    Explanation:
    - This object creates 3 Postgres instances (pods)
    - It creates a PV and PVC of 1Gi for each pod
    - It also creates three services for Postgres: read, read-only, and read-write
    - It runs the init SQL script after the database is created.
    Validate:
    kubectl get clusters -n todo-ns
    kubectl get svc -n todo-ns
    kubectl get pv && kubectl get pvc -n todo-ns
    kubectl get pods -n todo-ns
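
    CloudNativePG derives its three service names from the cluster name, so for a cluster named postgresql the in-cluster DNS names can be listed like this (the -r/-ro/-rw suffix convention is CNPG's):

    ```shell
    # CNPG exposes <cluster>-r (any instance), <cluster>-ro (replicas only),
    # and <cluster>-rw (primary only) services for a cluster
    CLUSTER='postgresql'
    NAMESPACE='todo-ns'
    for suffix in r ro rw; do
      printf '%s-%s.%s.svc.cluster.local\n' "$CLUSTER" "$suffix" "$NAMESPACE"
    done
    ```

    The -rw name produced here is exactly the host we stored in the postgresql-config ConfigMap earlier.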

  5. Create the service object for the Todo app.

     apiVersion: v1
     kind: Service
     metadata:
       name: todo-app-svc
       namespace: todo-ns
     spec:
       selector:
         app: todo-app
       type: NodePort
       ports:
         - protocol: TCP
           port: 80
           targetPort: 8080
           nodePort: 30008
    

    Here, we are using a NodePort service for this application.
    Validate:
    kubectl get svc -n todo-ns

  6. Create the deployment object for the Todo app.

     apiVersion: apps/v1
     kind: Deployment
     metadata:
       name: todo-app
       namespace: todo-ns
     spec:
       replicas: 1
       selector:
         matchLabels:
           app: todo-app
       template:
         metadata:
           labels:
             app: todo-app
         spec:
           initContainers:
           - name: check-database-service
             image: busybox
             command: ['sh', '-c', 'until nslookup postgresql-rw.todo-ns.svc.cluster.local; do echo waiting for db service; sleep 2; done;']
           containers:
           - name: todo-app
             image: minex970/todo-app:v1
             imagePullPolicy: Always
             env:
             - name: DB_HOST
               valueFrom:
                 configMapKeyRef:
                   name: postgresql-config
                   key: host
             - name: DB_PORT
               valueFrom:
                 configMapKeyRef:
                   name: postgresql-config
                   key: port
             - name: DB_NAME
               valueFrom:
                 configMapKeyRef:
                   name: postgresql-config
                   key: database_name
             - name: DB_USERNAME
               valueFrom:
                 secretKeyRef:
                   name: postgresql-cred
                   key: username
             - name: DB_PASSWORD
               valueFrom:
                 secretKeyRef:
                   name: postgresql-cred
                   key: password
             ports:
             - containerPort: 8080
             resources:
               limits:
                 memory: "128Mi"
                 cpu: "500m"
             readinessProbe:
               httpGet:
                 path: /health
                 port: 8080
               initialDelaySeconds: 5
               periodSeconds: 10
             livenessProbe:
               httpGet:
                 path: /health
                 port: 8080
               initialDelaySeconds: 15
               periodSeconds: 20
    

    Explanation:
    - We used an init container to make sure the database service is resolvable before starting the todo application.
    - We use configMapKeyRef and secretKeyRef to fetch each required variable's value.
    - We added resource limits on the container.
    - We use liveness and readiness probes to check the container's health.
    Validate:
    kubectl get deploy -n todo-ns
    kubectl get pod -n todo-ns
    Note: We have also written a blog on init containers, so refer to that.
    Link: https://minex.hashnode.dev/exploring-the-role-of-init-containers-in-kubernetes-applications
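
    One caveat: the init container's until-loop waits forever if the service never appears. A sketch of the same idea with a timeout (wait_for is our own helper for illustration, not something shipped in the busybox image):

    ```shell
    # Retry a command every 2 seconds until it succeeds or the timeout (seconds) elapses
    wait_for() {
      cmd=$1
      timeout=${2:-60}
      elapsed=0
      until sh -c "$cmd" >/dev/null 2>&1; do
        if [ "$elapsed" -ge "$timeout" ]; then
          echo "timed out waiting for: $cmd" >&2
          return 1
        fi
        sleep 2
        elapsed=$((elapsed + 2))
      done
    }

    # In the init container this would become:
    #   wait_for 'nslookup postgresql-rw.todo-ns.svc.cluster.local' 120 || exit 1
    ```

    With a timeout, the init container fails fast and the pod surfaces an Init:Error instead of hanging indefinitely.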

Final deployment

Note: Make sure to set the default namespace to todo-ns, or pass the --namespace option to each of the following commands.

The following command can change any default in the current context; here we only update its namespace.
kubectl config set-context --current --namespace=todo-ns
Verify:
kubectl config view --minify | grep namespace:

  • Create the postgres cluster.
    kubectl apply -f postgresSecret.yaml
    kubectl apply -f postgresConfigMap.yaml

    kubectl apply -f postgresInitConfigMap.yaml

    kubectl apply -f postgresCluster.yaml

  • Create the app deployment.
    kubectl apply -f appService.yaml
    kubectl apply -f appDeployment.yaml

Access application

Since we used a NodePort service, use the following URL to access the application.

Get the node IP:
kubectl get nodes -o wide

URL:
http://<node-ip>:<nodePort>
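
For example, with a hypothetical node IP (substitute the address reported by kubectl get nodes -o wide):

```shell
NODE_IP='192.168.49.2'   # hypothetical node IP; use your own
NODE_PORT='30008'        # the nodePort set in todo-app-svc

printf 'http://%s:%s\n' "$NODE_IP" "$NODE_PORT"
# → http://192.168.49.2:30008

# A quick smoke test against the health endpoint used by the probes:
#   curl "http://$NODE_IP:$NODE_PORT/health"
```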

Troubleshooting commands

kubectl port-forward <app-pod-name> 8080:8080
kubectl port-forward <database-pod-name> 5432:5432
kubectl exec -it <database-pod-name> -- bash

GitHub URL

https://github.com/minex970/python-based-todo-application
