Running Kubernetes on Bare Metal: A Step-by-Step Guide

Table of contents
- Prerequisites and Environment
- Container Runtimes: Docker vs. containerd
- Installing Kubernetes Components (kubeadm, kubelet, kubectl)
- Initializing the Control Plane (Master Node)
- Installing a Pod Network Add-on
- Joining Linux Worker Nodes
- Adding a Windows Worker Node (Optional)
- Verifying the Cluster and Deploying a Sample App
- Tips and Next Steps

Kubernetes on bare metal means running Kubernetes directly on physical servers without a virtualization layer. In this setup you have an “empty” server (no hypervisor or cloud VM) giving your containers direct access to hardware. This yields high performance and low latency, ideal for workloads like high-performance computing or large databases. It also offers full control over networking, storage, and compute resources. In this guide we’ll walk through setting up a bare-metal Kubernetes cluster step by step, using modern tools and best practices, on both Linux (Ubuntu or Fedora) and Windows nodes.
Prerequisites and Environment
Before starting, gather hardware and software prerequisites:
Hardware: At least one master/control-plane machine (e.g. 4+ CPU cores, 16+ GB RAM, 100+ GB disk) and one or more worker machines (2+ cores, 8+ GB RAM). Ensure each node has a static IP address and is on the same network (or properly routed).
Operating Systems: Use a current OS. For Linux, Ubuntu 22.04+ or Fedora 38/39 is recommended. If using a Windows node, use Windows Server 2022 or newer. (Note that Fedora uses the dnf package manager instead of apt.)
SSH access: You will need root or sudo access on each machine. SSH should be enabled on Linux nodes.
Networking: All nodes should be able to reach each other's IPs (adjust /etc/hosts or DNS for hostnames, as needed). Edit /etc/hostname and /etc/hosts on Linux nodes to assign clear names (e.g. master-node, worker01, etc.); a sample layout is sketched after this list.
Swap off: Kubernetes requires swap to be disabled on Linux nodes. You can turn off swap temporarily with sudo swapoff -a and remove any swap entries in /etc/fstab.
Cluster design: Plan at least one master and one or more worker nodes. You'll use kubeadm to bootstrap the cluster. We'll illustrate with Ubuntu commands, but note equivalent dnf commands for Fedora where needed.
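For example, a minimal /etc/hosts layout might look like the following (the IPs and names here are placeholders; use your own):
192.168.1.100  master-node
192.168.1.101  worker01
192.168.1.102  worker02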
Container Runtimes: Docker vs. containerd
Kubernetes relies on a container runtime to run containers. Popular choices include Docker Engine, containerd, and CRI-O. Docker Engine is a full container platform (with the Docker CLI and daemon), while containerd is a lightweight runtime focused on running containers (Docker itself actually uses containerd under the hood). Since Kubernetes v1.24, the built-in Docker integration (dockershim) has been removed: Kubernetes talks to runtimes only through the CRI interface, so Docker Engine needs the cri-dockerd adapter.
In practice, you can install Docker (which bundles containerd) or just install containerd directly. Both are fine:
On Ubuntu: You can install containerd via apt. For example:
sudo apt update
sudo apt install -y containerd
This gives you a minimal container runtime. After installing containerd, you should configure it to use systemd for cgroups (Kubernetes requires SystemdCgroup = true in /etc/containerd/config.toml), as shown below.
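One common way to apply that setting is to generate the default config and flip the flag (a sketch; review the generated file before relying on it):
containerd config default | sudo tee /etc/containerd/config.toml
sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd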
On Fedora: Fedora provides containerd as an RPM. You can install it with dnf, for example:
sudo dnf install -y containerd docker-cli
This installs containerd and the Docker CLI tools (docs.fedoraproject.org).
Enable and start containerd:
sudo systemctl enable --now containerd
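To confirm the runtime is up, you can query it with the ctr client that ships with containerd:
sudo ctr version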
On Windows Server: Windows nodes typically run Windows containers. The Kubernetes docs provide PowerShell scripts to install containerd on Windows Server 2022+ (kubernetes.io). Alternatively, you can install Mirantis Container Runtime (formerly the Docker Enterprise engine). After installing containerd (or Docker), ensure the Containers Windows feature is enabled.
In summary, you need a CRI-compatible runtime on each node. Containerd is simple and often recommended. Docker Engine works too (it includes containerd internally), but on current Kubernetes it must be paired with the cri-dockerd adapter. Once the runtime is running on all nodes, move on to the Kubernetes components.
Installing Kubernetes Components (kubeadm, kubelet, kubectl)
Next, install the Kubernetes tools. We recommend using kubeadm to bootstrap the cluster, kubelet as the node agent, and kubectl as the CLI. On Linux (Ubuntu/Fedora), you can use the official apt/yum repositories:
Add Kubernetes repos and keys (Ubuntu example):
sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gnupg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.33/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.33/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
(Use version 1.33 or latest stable in the URL.) Then install:
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
This sets up kubeadm and friends on all Linux nodes. For Fedora, you can use dnf with a similar repo (run sudo dnf install -y kubelet kubeadm kubectl after adding the Kubernetes repo; a repo sketch follows the swap step below).
Disable swap on Linux (if not done already), since kubelet fails to run with swap enabled:
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab
These commands disable swap immediately and comment out the swap entries in /etc/fstab so swap stays off after a reboot.
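For Fedora, a repo definition along these lines mirrors the apt setup above (a sketch following the official packaging layout; keep the version path in sync with your cluster):
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.33/rpm/repodata/repomd.xml.key
EOF
sudo dnf install -y kubelet kubeadm kubectl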
Verify installation: Check versions to confirm success:
kubelet --version
kubeadm version
kubectl version --client
All should report the version (e.g. v1.33.x for kubelet/kubeadm). Start and enable kubelet if needed:
sudo systemctl enable --now kubelet
sudo systemctl status kubelet
At this point, each node has a container runtime and the Kubernetes tools installed. (If kubelet is crash-looping right now, that's expected; it settles once the node is initialized or joined.) Linux nodes are ready to join the cluster. Windows nodes will require a bit more setup (see below).
Initializing the Control Plane (Master Node)
Choose one machine as the control plane (master). On that node, ensure IP forwarding is enabled (so pods can communicate across hosts):
sudo sysctl -w net.ipv4.ip_forward=1
echo 'net.ipv4.ip_forward=1' | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
This makes sure the kernel forwards container traffic.
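Depending on your CNI plugin, you may also need the overlay and br_netfilter kernel modules and the bridge sysctls; the following mirrors the standard kubeadm prerequisites and is harmless to apply even if your CNI doesn't strictly need it:
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sudo sysctl --system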
Then run kubeadm init to bootstrap the cluster. For example:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
Here --pod-network-cidr specifies the IP range for pod networking (the example uses Flannel's default, 10.244.0.0/16). You can adjust it for other CNI plugins. After running this, kubeadm will pull images and set up the control-plane components. If it warns about a newer remote version (as in some 2025 setups), you can ignore the warning or specify --kubernetes-version explicitly.
When kubeadm init completes successfully, it will output a kubeadm join ... command (with a token and cert hash). Save this command, as you will need it to add each worker node. For example, the output includes something like:
kubeadm join 192.168.1.100:6443 --token abc.def --discovery-token-ca-cert-hash sha256:......
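If you misplace this output, you can print a fresh join command at any time from the control plane:
kubeadm token create --print-join-command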
Also follow the post-install instructions it shows: set up your admin kubeconfig with:
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Now kubectl commands will work on this machine as the cluster admin.
Installing a Pod Network Add-on
The cluster isn’t fully functional until you install a pod network (CNI) so that pods can talk across nodes. Common choices are Flannel, Calico, or Weave. For simplicity, you can use Flannel or Calico with a single command. For example, on the master:
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
This Flannel manifest installs a simple overlay network. If you prefer Calico, you might run something like kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml. Wait a minute and check kubectl get pods -n kube-system (recent Flannel manifests put their pods in their own kube-flannel namespace) to see that the CNI pods become Running. Once the network is up, the master is ready to accept worker nodes.
Joining Linux Worker Nodes
On each Linux worker node (Ubuntu or Fedora), repeat the container runtime and Kubernetes install steps (install containerd or Docker, install kubeadm/kubelet/kubectl, disable swap, etc.). Then run the kubeadm join command that was output by the master. It will look like:
sudo kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
Replace <master-ip>, <token>, and <hash> with the values from the master's output. This will connect the worker to the cluster. After a minute, check from the master:
kubectl get nodes
You should see the master and each worker listed with status Ready. If not, check logs on the node (sudo journalctl -u kubelet) for errors (often due to networking, swap, or version mismatches).
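Two useful first checks from the master when a node misbehaves (worker01 here is a placeholder node name):
kubectl get nodes -o wide
kubectl describe node worker01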
Adding a Windows Worker Node (Optional)
Kubernetes supports Windows worker nodes (not Windows control-plane nodes). To add a Windows node, you need a Windows Server 2022 (or newer) machine with the Containers feature enabled. The official docs outline the steps (kubernetes.io). In summary:
Ensure the Windows node has a static IP and name, and has network access to the Linux control plane.
On the Windows machine, open PowerShell as Administrator. Download and run the Windows scripts from sigs.k8s.io to install containerd and kubeadm/kubelet. For example:
# Install containerd (replace the version as needed)
Invoke-WebRequest -Uri https://raw.githubusercontent.com/kubernetes-sigs/sig-windows-tools/master/hostprocess/Install-Containerd.ps1 -OutFile Install-Containerd.ps1
.\Install-Containerd.ps1 -ContainerDVersion 1.7.22
# Prepare the node for Kubernetes (the Kubernetes version should match the cluster)
Invoke-WebRequest -Uri https://raw.githubusercontent.com/kubernetes-sigs/sig-windows-tools/master/hostprocess/PrepareNode.ps1 -OutFile PrepareNode.ps1
.\PrepareNode.ps1 -KubernetesVersion v1.33.2
These scripts install and configure the Windows container runtime and kubelet (kubernetes.io).
The scripts prepare the Windows node so the join command can run. Execute the kubeadm join command on the Windows node just as on Linux (using the token from the master's init output).
Once complete, the Windows node should appear as a Ready node in kubectl get nodes. You can then schedule Windows-based pods (with Windows container images) onto it, as sketched below.
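Scheduling onto the Windows node requires a nodeSelector on the standard OS label. A minimal sketch (the deployment name is made up, and the IIS image tag must match your node's Windows Server version):
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-sample
spec:
  replicas: 1
  selector:
    matchLabels:
      app: win-sample
  template:
    metadata:
      labels:
        app: win-sample
    spec:
      nodeSelector:
        kubernetes.io/os: windows
      containers:
      - name: iis
        image: mcr.microsoft.com/windows/servercore/iis:windowsservercore-ltsc2022
EOF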
Verifying the Cluster and Deploying a Sample App
With nodes joined, verify everything:
Check nodes and system pods:
kubectl get nodes
kubectl get pods -n kube-system
All nodes should be Ready, and the core system pods (kube-proxy, coredns, CNI pods) should be running.
Test deploying an application. For example, deploy a simple NGINX service:
kubectl create deployment hello-web --image=nginx
kubectl expose deployment hello-web --port=80 --type=NodePort
Then run kubectl get pods,svc to see the app running. You can also try kubectl exec into the pod or curl the NodePort from another host to test connectivity.
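For example, assuming the hello-web service above (substitute a real node IP for <node-ip>):
NODE_PORT=$(kubectl get svc hello-web -o jsonpath='{.spec.ports[0].nodePort}')
curl http://<node-ip>:$NODE_PORT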
If all works, congratulations: you have a working Kubernetes cluster on bare metal.
Tips and Next Steps
Use a Load Balancer: For high availability, you can add multiple control-plane nodes and put a load balancer (or software like MetalLB) in front of port 6443.
Set up DNS and kubeconfig: Copy the admin.conf to all admin machines so you can run kubectl remotely.
Monitoring: Install monitoring (Prometheus/Grafana) to watch cluster health.
Backups: Regularly back up etcd (the cluster state) since there are no VM snapshots on bare metal; see the snapshot sketch after this list.
Upgrades: When a new Kubernetes version is out, use kubeadm upgrade and upgrade kubelet on each node in a controlled fashion (a rough flow is sketched below).
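A hedged etcd snapshot example for a kubeadm cluster (assumes etcdctl is installed on the control-plane host; the certificate paths are kubeadm's defaults):
sudo ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key
And a rough upgrade flow (the version is a placeholder; upgrade the kubeadm package itself first and read the kubeadm upgrade plan output before applying):
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.33.x
kubectl drain worker01 --ignore-daemonsets
sudo apt-mark unhold kubelet && sudo apt-get install -y kubelet && sudo apt-mark hold kubelet
sudo systemctl restart kubelet
kubectl uncordon worker01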
Running Kubernetes on bare metal gives you performance and control, but also means you manage the entire stack (no cloud convenience). Follow best practices, keep systems updated, and regularly test your backup and recovery processes. With that, you’ve set up Kubernetes from scratch on physical servers - a powerful foundation for reliable, high-performance container workloads.
🔖Thanks for Reading! Please Like, Share and Follow. Your support matters.