How to set up a kind cluster for a home lab
In this post, I would like to share how to set up a kind cluster on Ubuntu Linux. What is kind? kind is a tool for running local Kubernetes clusters using Docker containers as nodes.
Note: kind is intended for testing and learning purposes, not for production.
Installation From Release Binaries
Before we install kind, we need to install Docker on the host:
sudo apt update
sudo apt install docker.io
sudo usermod -aG docker $USER
docker --version
After installing Docker, log out and back in (or run newgrp docker) so the group change takes effect, then execute the commands below to install the kind binary on your system.
# For AMD64 / x86_64
[ $(uname -m) = x86_64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.24.0/kind-linux-amd64
# For ARM64
[ $(uname -m) = aarch64 ] && curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.24.0/kind-linux-arm64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
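The architecture check in the commands above can also be written as a single case statement that fails loudly on unsupported CPUs. A sketch, pinning the same v0.24.0 release as above:

```shell
#!/bin/sh
# Pick the kind release binary URL matching the host CPU (sketch).
KIND_VERSION="v0.24.0"
case "$(uname -m)" in
  x86_64)  KIND_ARCH="amd64" ;;
  aarch64) KIND_ARCH="arm64" ;;
  *) echo "unsupported architecture: $(uname -m)" >&2; exit 1 ;;
esac
KIND_URL="https://kind.sigs.k8s.io/dl/${KIND_VERSION}/kind-linux-${KIND_ARCH}"
echo "$KIND_URL"
```

From here, the same curl, chmod, and mv steps as above apply to the chosen URL.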
After installation, we can check the kind version with the following command:
kind version
Bring up a multi-node cluster
First, we need to create a config file named "kind-cluster.yaml" with the content below.
This config file reflects my requirements. If you want to increase or decrease the number of nodes, add entries to or remove them from the node list.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: home-lab
nodes:
- role: control-plane
  image: kindest/node:v1.29.2
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true,node-name=control-plane"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
- role: worker
  image: kindest/node:v1.29.2
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "node-name=frontend"
- role: worker
  image: kindest/node:v1.29.2
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "node-name=ci-cd"
- role: worker
  image: kindest/node:v1.29.2
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "node-name=backend"
In the config file above, I defined four nodes for different purposes, each with its own labels.
Let me walk through the main parts of the config file.
- role: control-plane
  image: kindest/node:v1.29.2
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "ingress-ready=true,node-name=control-plane"
  extraPortMappings:
  - containerPort: 80
    hostPort: 80
    protocol: TCP
  - containerPort: 443
    hostPort: 443
    protocol: TCP
This is the control-plane node configuration. It pins the node image to Kubernetes v1.29.2 and registers the control-plane node with two custom labels: ingress-ready=true and node-name=control-plane. The extraPortMappings section allows the local host to make requests to an ingress controller over ports 80 and 443: because our Kubernetes nodes run inside Docker containers, host ports 80 and 443 are forwarded to the same ports on the control-plane container. In general, extra port mappings can be used to port-forward from the host to the kind nodes.
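With ingress-ready=true and the port mappings in place, an ingress controller (for example ingress-nginx, which must be installed separately) can land on the control-plane node and be reached at http://localhost. A minimal Ingress resource for a hypothetical Service named web on port 80 might look like this (the names are illustrative, not part of the cluster config above):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web               # hypothetical name
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web     # hypothetical Service behind the ingress
            port:
              number: 80
```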
- role: worker
  image: kindest/node:v1.29.2
  kubeadmConfigPatches:
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        node-labels: "node-name=ci-cd"
This is the worker node configuration. I set up three worker nodes, each running Kubernetes v1.29.2, with distinct custom labels: node-name=frontend, node-name=ci-cd, and node-name=backend.
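These node-name labels make it easy to pin workloads to a specific worker via a nodeSelector. A sketch of a Pod that should land on the ci-cd node (the Pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ci-runner          # hypothetical name
spec:
  nodeSelector:
    node-name: ci-cd       # matches the label set in kind-cluster.yaml
  containers:
  - name: runner
    image: busybox:1.36
    command: ["sleep", "3600"]
```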
After creating the config file, we can start the four-node cluster on the host:
kind create cluster --config kind-cluster.yaml
root@kind:~# kind create cluster --config kind-cluster.yaml
Creating cluster "home-lab" ...
✓ Ensuring node image (kindest/node:v1.29.2) 🖼
✓ Preparing nodes 📦 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-home-lab"
You can now use your cluster with:
kubectl cluster-info --context kind-home-lab
Thanks for using kind! 😊
After creating the cluster, we can check cluster-info using this command:
kubectl cluster-info --context kind-home-lab
Check the cluster status
Check with kind command:
kind get clusters
Check with docker command:
root@kind:~# docker ps
CONTAINER ID   IMAGE                  COMMAND                  CREATED             STATUS             PORTS                                                                 NAMES
7f4e144f6627   kindest/node:v1.29.2   "/usr/local/bin/entr…"   About an hour ago   Up About an hour                                                                         home-lab-worker3
f0a465e8f97a   kindest/node:v1.29.2   "/usr/local/bin/entr…"   About an hour ago   Up About an hour   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 127.0.0.1:44431->6443/tcp   home-lab-control-plane
590f25e6ac1a   kindest/node:v1.29.2   "/usr/local/bin/entr…"   About an hour ago   Up About an hour                                                                         home-lab-worker2
9a92052b0218   kindest/node:v1.29.2   "/usr/local/bin/entr…"   About an hour ago   Up About an hour                                                                         home-lab-worker
root@kind:~#
Check with kubectl command:
root@kind:~# kubectl get nodes
NAME                     STATUS   ROLES           AGE   VERSION
home-lab-control-plane   Ready    control-plane   61m   v1.29.2
home-lab-worker          Ready    <none>          60m   v1.29.2
home-lab-worker2         Ready    <none>          60m   v1.29.2
home-lab-worker3         Ready    <none>          60m   v1.29.2
root@kind:~#
From the output of the commands above, we can see that the cluster is running Kubernetes v1.29.2. Now we can deploy our applications.
Thanks for reading, and follow me for more.
Have a great day!
Written by TECH-NOTES
I'm a cloud-native enthusiast and tech blogger, sharing insights on Kubernetes, AWS, CI/CD, and Linux across my blog and Facebook page. Passionate about modern infrastructure and microservices, I aim to help others understand and leverage cloud-native technologies for scalable, efficient solutions.