Run a Multi-Node Kubernetes Cluster Locally with KIND

Why Run Kubernetes Locally?
Kubernetes has revolutionised how we deploy applications, but who wants to spin up a cloud-based cluster just to test a configuration change? Enter KIND (Kubernetes IN Docker) — your ticket to running multi-node Kubernetes clusters without emptying your wallet or waiting for cloud provisioning.
Developing on Kubernetes shouldn’t require an AWS account or a constant internet connection. Local clusters give you:
Lightning-fast iteration cycles
Complete freedom to experiment without incurring cloud costs
A safe sandbox for testing breaking changes before they break production
In this blog post, we’ll build a fully functional multi-node Kubernetes playground using KIND, complete with NGINX Ingress for realistic routing. Best of all, we’ll define everything as code using YAML configurations and scripts, so you can version-control your development environment just like your application code.
What is KIND?
KIND transforms Docker containers into Kubernetes nodes through some clever containerception magic. It’s purpose-built for developers who need a realistic cluster environment without the operational overhead. Unlike Minikube, KIND supports true multi-node setups, making it perfect for testing distributed system behaviours, network policies, and node affinity rules.
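If you just want to confirm the tooling works before building the multi-node setup below, a quick throwaway cluster is a handy sanity check (by default, kind creates a single-node cluster named "kind"):

# Smoke test: create, list, and delete a default single-node cluster
kind create cluster          # creates a one-node cluster named "kind"
kind get clusters            # should list: kind
kind delete cluster          # clean up before building the multi-node cluster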
Pre-requisites
brew install kind # macOS/Linux (via Homebrew)
# optional: k9s for cluster management
brew install k9s
We will be using Podman as the container runtime for this example.
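Besides kind itself, you will also need kubectl and Podman. A minimal install sketch, assuming Homebrew (on macOS, Podman additionally needs a lightweight Linux VM via podman machine):

brew install podman kubectl   # container runtime + Kubernetes CLI
# macOS only: start the Linux VM that Podman runs containers in
podman machine init
podman machine start
podman --version              # verify the install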
Why Podman Over Docker?
Before diving into the setup, let’s highlight why Podman offers advantages over Docker:
Daemonless Architecture: Podman doesn’t require a daemon process, reducing attack surface and resource usage. Each container runs directly under your user instead of through a central service.
Rootless Containers: Podman supports rootless containers out of the box, improving security by running containers with regular user permissions rather than root.
Systemd Integration: Podman containers can properly run systemd, which is particularly helpful for simulating real node behavior in KIND.
OCI Compliance: Podman follows Open Container Initiative standards strictly, ensuring better compatibility with other container tools.
Pod-Native Support: As the name suggests, Podman handles pods natively, aligning better with Kubernetes concepts.
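If you want to confirm you are actually getting the rootless, daemonless behaviour described above, Podman can report it directly; a small check, assuming a reasonably recent Podman release:

# "true" means containers run under your user, not root
podman info --format '{{.Host.Security.Rootless}}'
# no long-running daemon: this is just a CLI call, not a client talking to dockerd
podman ps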
Setup
Step 1: Define the Cluster (YAML Config)
Create a kind-config.yaml to specify a control plane plus two worker nodes and enable Ingress:
# kind-config.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true"
  extraPortMappings:
  - containerPort: 80
    hostPort: 8080
    protocol: TCP
  - containerPort: 443
    hostPort: 8443
    protocol: TCP
- role: worker
- role: worker
networking:
  apiServerAddress: "127.0.0.1" # bind the API server to the local loopback address, so it is only reachable from this machine
  apiServerPort: 6443           # port the API server listens on (6443 is the Kubernetes default)
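The kubeadmConfigPatches block is what labels the control-plane node with ingress-ready=true, and the extraPortMappings publish that node's ports 80/443 on localhost:8080/8443 so the ingress controller is reachable from your host. Once the cluster from Step 2 is up, you can confirm the label landed with a standard kubectl label selector:

# should list only the control-plane node, proving the kubeadm patch applied
kubectl get nodes -l ingress-ready=true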
Step 2: Bootstrap the Cluster (Script)
#!/bin/bash
# save this script as setup-kind.sh
# While KIND defaults to Docker, we'll use Podman for its security benefits. Here's how to switch providers.
export KIND_EXPERIMENTAL_PROVIDER=podman
# Create the cluster
kind create cluster --name multi-node --config kind-config.yaml
# Install NGINX Ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/main/deploy/static/provider/kind/deploy.yaml
kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=90s
# Verify
kubectl get nodes
kubectl get pods -n ingress-nginx
Save the file as setup-kind.sh, make it executable, and run:
chmod +x setup-kind.sh # Make the script executable
./setup-kind.sh # Run the script
Now, if you run kubectl get nodes, you should see something like:
NAME                       STATUS   ROLES           AGE   VERSION
multi-node-control-plane   Ready    control-plane   30s   v1.27.3
multi-node-worker          Ready    <none>          25s   v1.27.3
multi-node-worker2         Ready    <none>          25s   v1.27.3
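Under the hood, each of those "nodes" is just a Podman container, so the usual container tooling works against them; a quick look, assuming the Podman provider and the node names above:

# one container per Kubernetes node
podman ps --format "{{.Names}}"
# confirm the host port mappings (8080->80, 8443->443) from kind-config.yaml
podman port multi-node-control-plane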
Testing Your Cluster with a Sample Application
Let’s verify our setup by deploying a simple application with an ingress:
# Save as sample-app.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
      - name: hello-app
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  selector:
    app: hello-app
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - host: hello.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-service
            port:
              number: 80
Deploy the application:
kubectl apply -f sample-app.yaml
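Before testing, it can help to wait for all three replicas to become ready; this is standard kubectl, nothing KIND-specific:

# blocks until the Deployment's pods are ready (or times out)
kubectl rollout status deployment/hello-app --timeout=90s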
Add the host to your /etc/hosts file:
echo "127.0.0.1 hello.local" | sudo tee -a /etc/hosts
Now you can access your application at http://hello.local:8080 (it should return "Hello, World!").
Verification Steps
# Cluster
kubectl get nodes # Should show `multi-node-control-plane`, `multi-node-worker`, etc.
# Ingress
curl -v http://hello.local:8080 # Should return "Hello, World!"
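Since the whole point of a multi-node cluster is realistic scheduling, it is also worth checking that the three replicas actually spread across the worker nodes and that the Ingress picked up its host rule; a couple of optional checks with standard kubectl:

# the NODE column should show pods distributed across multi-node-worker and multi-node-worker2
kubectl get pods -l app=hello-app -o wide
# confirm the ingress has the nginx class and the hello.local host rule
kubectl get ingress hello-ingress
kubectl describe ingress hello-ingress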
Advanced Tips for Podman Users
Once you’ve got the basics down, here are some power moves to level up your KIND with Podman experience:
1. Configure resource limits: Set resource limits for your Podman containers:
podman update --cpus=4 --memory=8g multi-node-control-plane
podman update --cpus=2 --memory=4g multi-node-worker
podman update --cpus=2 --memory=4g multi-node-worker2
2. Load local container images: Skip the registry push/pull cycle during development:
podman save myapp:latest -o myapp.tar
kind load image-archive myapp.tar --name multi-node
3. Leverage rootless containers: Run KIND in rootless mode for enhanced security:
# Ensure you're not running as root; kind uses rootless mode automatically when Podman itself runs rootless
export KIND_EXPERIMENTAL_PROVIDER=podman
kind create cluster --name rootless-cluster
Note: Rootless mode may restrict binding to ports < 1024; use host ports like 8080/8443 if running without sudo.
4. Use Podman pods directly: For simple scenarios, you can even use Podman’s native pod capabilities:
podman pod create --name my-pod
podman run --pod my-pod -d myapp:latest
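When you are done experimenting, tearing everything down is a single command per cluster; a small cleanup sketch, assuming the cluster and pod names used in this post:

# remove the clusters and their node containers
kind delete cluster --name multi-node
kind delete cluster --name rootless-cluster   # if you created it in tip 3
# remove the standalone Podman pod from tip 4, if you created it
podman pod rm -f my-pod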
Conclusion
Combining KIND with Podman transforms local development workflows from hours of frustration to minutes of productivity. With a multi-node cluster running locally, you can validate everything from pod scheduling to network policies without leaving your laptop.
The benefits of using Podman over Docker become especially apparent in enterprise environments or when security is a concern. The rootless operation, daemonless architecture, and better systemd integration make for a more robust development environment that more closely mirrors production.
The next time someone says, “but it works in production,” you can confidently reply, “it works identically in my KIND cluster, too.”
Remember that the goal of local development is to fail fast and iterate quickly. KIND with Podman gives you the freedom to experiment, break things, and learn without the anxiety of affecting shared environments or incurring cloud costs.
What Kubernetes features are you most excited to test in your local KIND cluster? Let me know in the comments!