Kubernetes (Multi-node + Real-World Workflows)


What we're going to learn and implement with a multi-node cluster in Kubernetes:

In our previous implementation, we set up a single-node cluster using Minikube (Minikube is best suited for single-node clusters). Check out that blog if you missed it.

Now, let's go one step ahead and look at the pieces we'll cover:

  • kubectl: CLI tool to control Kubernetes
  • Deployment: Controller that manages replicas of your app
  • Service: Exposes your pods over the network
  • Ingress: Routes external HTTP(S) traffic to services
  • ConfigMap: Stores non-sensitive config data
  • Secret: Stores sensitive data (e.g., passwords)
  • Volumes: Persistent storage (e.g., DB files)
  • Helm: Package manager for Kubernetes
  • k3s / kind: Lightweight Kubernetes for multi-node setups

Note: We're implementing this on Arch Linux. If you're using Ubuntu or another distro, adjust accordingly.


Install Required Tools on Arch

Install kubectl

sudo pacman -S kubectl

kubectl is the CLI tool to interact with your Kubernetes cluster.
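A quick check that the install worked (the exact version string will differ on your machine):

kubectl version --client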

Install Docker (Required for kind)

kind runs every Kubernetes node (and therefore every pod) as a Docker container, so Docker is essential.

sudo pacman -S docker
sudo systemctl start docker
sudo systemctl enable docker
sudo usermod -aG docker $USER

Re-login to apply group changes.
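If you don't want to log out right away, newgrp docker starts a shell with the new group applied, and docker info confirms the daemon is reachable:

newgrp docker
docker info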

Install kind (Multi-node Kubernetes using Docker)

k3s and kind are both lightweight options for multi-node clusters; we'll use kind, which runs each node as a Docker container.

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.20.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
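Verify the binary is on your PATH:

kind version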

Create a Multi-Node Cluster (with kind)

So far, we’ve installed the necessary tools. Now we’ll implement a multi-node cluster with multiple pods running inside.

Create a kind-config.yaml file:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker

Explanation:

  • kind: defines the resource type as a Kubernetes cluster.

  • apiVersion: specifies the version of the kind API.

  • nodes: lists the machines in the cluster.

    • A control-plane node manages the cluster (scheduling, API, etc.).

    • Worker nodes run actual application containers.

This setup creates a cluster with 1 control-plane and 2 workers.

Create the Cluster:

kind create cluster --config kind-config.yaml --name haris-cluster

Expected Output:

  • Creating cluster "haris-cluster"...

  • ✅ Ensuring node image (kindest/node:v1.27.3)

  • ✅ Preparing nodes

  • ✅ Writing configuration

  • ✅ Starting control-plane

  • ✅ Installing CNI (networking)

  • ✅ Installing StorageClass

  • ✅ Joining worker nodes

  • ✅ Set kubectl context to "kind-haris-cluster"

You can now check the cluster:

kubectl cluster-info --context kind-haris-cluster

To check the created nodes:

kubectl get nodes

You should see:

  • One control-plane

  • Two worker nodes

The control-plane node runs the cluster's brains: the API server, scheduler, and cluster DNS.
The worker nodes run the actual application workloads.


Core Kubernetes Components (With Examples)

Deployments

💡 What is it?
Manages app replicas. If a pod fails, Deployment restarts it.

🛠 Example: NGINX Deployment

kubectl create deployment nginx-deploy --image=nginx
kubectl get deployments
kubectl get pods

Deployments automate availability: if pods crash or are deleted, Kubernetes recreates them to match the desired replica count. This is how real-world websites stay resilient.
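For example, you can scale the Deployment up and let Kubernetes maintain the count; the replica count of 3 below is just an illustration, and <pod-name> stands for any pod name from the previous command:

kubectl scale deployment nginx-deploy --replicas=3
kubectl get pods -o wide        # replicas spread across the worker nodes
kubectl delete pod <pod-name>   # the Deployment immediately creates a replacement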


Services

💡 What is it?
Exposes your pod as a network service within the cluster. You specify a port for communication.

🛠 Expose NGINX as ClusterIP:

kubectl expose deployment nginx-deploy --port=80 --target-port=80
kubectl get services
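To confirm the Service is reachable from inside the cluster, you can hit it by its DNS name from a throwaway pod; busybox here is just a convenient test image, and any image with wget or curl would do:

kubectl run tmp --rm -it --image=busybox --restart=Never -- wget -qO- http://nginx-deploy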

Ingress

💡 What is it?
Ingress exposes services to external HTTP(S) traffic.

Without Ingress, you’d need NodePort or LoadBalancer for each service.
Ingress centralizes HTTP routing and SSL handling for all your apps.

Install Ingress Controller (Nginx):

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.1/deploy/static/provider/kind/deploy.yaml

Wait for pods:

kubectl get pods -n ingress-nginx
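Instead of polling by hand, a kubectl wait along these lines blocks until the controller reports ready (the label selector assumed here is the one used by the ingress-nginx manifests):

kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s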

Create ingress.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: example.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-deploy
            port:
              number: 80

This rule tells NGINX: "Route http://example.local to nginx-deploy service"

Apply it:

kubectl apply -f ingress.yaml

📌 If applying the Ingress fails (commonly a validation-webhook error because the controller isn't fully ready yet), delete the controller pod and it will be recreated automatically:

kubectl delete pod -n ingress-nginx -l app.kubernetes.io/component=controller

Watch in real time:

kubectl get pods -n ingress-nginx -w

Add to /etc/hosts:

sudo vim /etc/hosts
# Add:
127.0.0.1 example.local

Then open: http://example.local
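Note: reaching http://example.local from your browser assumes the cluster forwards ports 80/443 from the control-plane node to your host. If the page doesn't load, recreate the cluster with a config roughly like the one below (adapted from kind's ingress setup; the ingress-ready label and port mappings are what the kind provider manifests expect):

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
- role: worker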


Secrets

💡 What is it?
Secrets store sensitive data like:

  • Passwords

  • API keys

  • TLS/SSL certs

kubectl create secret generic db-secret --from-literal=password=SuperSecret
kubectl get secrets
kubectl describe secret db-secret

Secrets are base64-encoded (not encrypted by default). Access them via environment variables or volume mounts.
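A minimal sketch of the environment-variable route (the pod and variable names here are made up for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: secret-demo
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: DB_PASSWORD           # available inside the container as $DB_PASSWORD
      valueFrom:
        secretKeyRef:
          name: db-secret         # the Secret created above
          key: password           # the key we stored with --from-literal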


ConfigMaps

💡 What is it?
Stores non-sensitive config like ENV values, URLs, etc.

kubectl create configmap app-config --from-literal=APP_ENV=production
kubectl get configmap
kubectl describe configmap app-config

Advantages:

  • Keep code separate from config

  • Switch between dev, staging, prod easily
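Consuming the value inside a pod works much like it does for Secrets; a minimal sketch (the pod name is illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
  - name: app
    image: nginx
    env:
    - name: APP_ENV
      valueFrom:
        configMapKeyRef:
          name: app-config        # the ConfigMap created above
          key: APP_ENV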


Volumes

💡 What is it?
Persistent storage for data — survives container restarts.

Example:

apiVersion: v1
kind: Pod
metadata:
  name: volume-pod
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - mountPath: /data
      name: host-storage
  volumes:
  - name: host-storage
    hostPath:
      path: /tmp/data
      type: DirectoryOrCreate

Explanation:

  • volumeMounts: inside the container, /data maps to the node's /tmp/data

  • hostPath: uses a path on the node's filesystem and creates it if it doesn't exist (note: with kind, each "node" is itself a Docker container, so this path lives inside the node container rather than directly on your laptop)

Useful for storing logs, uploads, configs, etc.
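To see it in action, write a file through the pod and read it back (assuming you saved the manifest above as volume-pod.yaml; the file name is arbitrary):

kubectl apply -f volume-pod.yaml
kubectl exec volume-pod -- sh -c 'echo hello > /data/test.txt'
kubectl exec volume-pod -- cat /data/test.txt
# With hostPath, the data outlives pod restarts only while pods land on the same node.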


Helm (Kubernetes Package Manager)

Helm is like apt, pacman, or npm — but for Kubernetes apps (called charts).

Install Helm:

curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash

Add Repo and Install Chart

helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
helm search repo bitnami/nginx
helm install my-nginx bitnami/nginx

What is Bitnami?

A trusted source of pre-packaged Helm charts:

  • Secure

  • Updated regularly

  • Easy to customize

Think of it as a well-stocked app store for Kubernetes (community-maintained rather than an official Kubernetes project).

Useful Commands:

kubectl get all
helm list

Quick glossary:

  • Helm: tool to manage Kubernetes apps
  • Chart: pre-built package of Kubernetes manifests
  • Repo: source of Helm charts
  • Install: deploy an app with one command
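A few more Helm commands you'll reach for often (the release name my-nginx matches the install above):

helm status my-nginx                   # release status and chart notes
helm upgrade my-nginx bitnami/nginx    # roll out chart or values changes
helm uninstall my-nginx                # remove everything the chart created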

Conclusion

In this blog, we explored how to set up a multi-node Kubernetes cluster using kind on Arch Linux. From setting up deployments, services, ingress, config maps, and secrets, to working with persistent storage and Helm charts — you’ve now got a solid real-world workflow under your belt. 🧠🚀

Keep experimenting, and soon, Kubernetes will feel like second nature!


P.S.

If you spot any mistakes, feel free to point them out — we’re all here to learn together! 😊

Haris
FAST-NUCES
BS Computer Science | Class of 2027

🔗 Portfolio: zenvila.github.io
🔗 GitHub: github.com/Zenvila
🔗 LinkedIn: linkedin.com/in/haris-shahzad-7b8746291
🔬 Member: COLAB (Research Lab)
