Kubernetes with microk8s


Introduction
There are many ways to deploy Kubernetes (k8s) clusters locally. It's similar to starting with Linux; first, you choose a distribution, then explore the different versions, packages, and customizations.
You can begin with a desktop app like Docker Desktop or Podman Desktop, which integrate well with k8s. Most IDEs also have built-in support or plugins for working with k8s.
Once you move beyond the basics, you may face some limitations and inconveniences. If you're running the cluster on a laptop, you'll need reliable pause and resume features. It can be frustrating to resume the cluster and find the services unhealthy. You can always reset or reinstall, but it saves time to choose your distribution carefully.
For this series, I'm sticking with Canonical solutions because MicroK8s and MicroCeph have worked well for me; integrating other tools wasn't as straightforward. I'm a learner, not an expert in this area.
Prepare Infrastructure
The goal is to use cloud-native storage, which requires raw disks that Ceph can consume. So, instead of containers, we use VMs as the cluster nodes. To enable high availability, we will deploy a multi-node cluster. We start by preparing the VMs.
Create Ubuntu VMs
To deploy our cluster, we need to create multiple virtual machines. For this example, we will use Ubuntu VMs running on VMware Fusion:
Prerequisites
A desktop hypervisor like VMware Fusion or Workstation. The steps below use Fusion on a MacBook.
Ubuntu Server LTS ISO. The example uses an M1 MacBook, so the images are ARM-based (arm64).
At least 50 GB of free disk space per VM. The disks are thin-provisioned, but sufficient free space is recommended.
Basic experience with OS installation. We are using the Ubuntu Server edition, which does not have a desktop environment.
Creating Virtual Machines
Launch VMware Fusion and click "+" to create a new virtual machine
Drag and drop the Ubuntu Server ISO or click "Create a custom virtual machine"
Select "Linux" and "Ubuntu 64-bit" as the operating system
Configure VM Resources for each node:
CPUs: 4 cores minimum
Memory: 8 GB minimum
Storage:
Primary disk: 50 GB minimum
Additional 20 GB disk to be consumed by Ceph (covered in another post)
Network: Bridged or NAT networking
Complete the Ubuntu installation process:
Choose "Install Ubuntu Server"
Select language and keyboard layout
Configure network settings (preferably a static IP)
Set up username and password
Install OpenSSH server when prompted
Post-Installation Setup
If you plan to enable Ceph on the cluster, ensure you have additional disks:
manas@manas-s01:~$ lsblk | grep -v loop
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sr0                        11:0    1  2.7G  0 rom
nvme0n1                   259:0    0   80G  0 disk
├─nvme0n1p1               259:1    0    1G  0 part /boot/efi
├─nvme0n1p2               259:2    0    2G  0 part /boot
└─nvme0n1p3               259:3    0 76.9G  0 part
  └─ubuntu--vg-ubuntu--lv 252:0    0 38.5G  0 lvm  /
nvme0n2                   259:4    0   20G  0 disk
As we are using Ubuntu Server, only the remote console or SSH is available.
# Update the system
sudo apt update && sudo apt upgrade -y
# Optional: VMware guest tools (time sync, graceful shutdown, copy/paste, etc.)
sudo apt install -y open-vm-tools
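To verify the tools are running, a quick sanity check (the open-vm-tools service name comes from the Ubuntu package; this step is optional):
# Confirm the guest tools service is active
systemctl status open-vm-tools --no-pager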
Optional: Assign a static IP to the VM. You can use a netplan config like the one below:
network:
  version: 2
  renderer: networkd
  ethernets:
    ens160:
      dhcp4: no
      addresses:
        - <ip>/24
      routes:
        - to: default
          via: <gateway>
      nameservers:
        addresses: [1.1.1.1,8.8.8.8]
Refer to the Ubuntu network configuration docs for details.
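Before applying, netplan can test the change and roll back automatically if you lose connectivity; a short sketch, assuming the config is saved under /etc/netplan/ (e.g. 50-cloud-init.yaml):
# Test the config; it reverts after 120s unless you confirm
sudo netplan try
# Apply permanently once you are happy
sudo netplan apply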
Clone VMs
We need to create at least three VMs for the cluster. This step can be automated with tools like Packer or Ansible; however, VMware's Linked Clone is a cool feature that makes it easy:
A linked clone is a VMware virtual machine that shares the virtual disk of the source virtual machine.
First, shut down the master VM
Then, from the VM Library, right-click the VM and create a Linked Clone
Set a unique hostname and IP, and reboot
Repeat steps 2 and 3 for the third VM
Once the VMs are up, make sure to give each VM a unique hostname and IP address:
sudo hostnamectl set-hostname <hostname>
# Reboot if required. Ensure each VM has unique name and IP.
# Change the netplan config:
# https://documentation.ubuntu.com/server/explanation/networking/configuring-networks
sudo netplan apply
# Reboot
sudo reboot
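To confirm each clone ended up with a unique identity, a quick check to run on every VM:
# Verify the hostname and IP address on each node
hostname
hostname -I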
Now, to run commands on the nodes conveniently, it is better to set up passwordless SSH:
$ ssh-copy-id <username>@<hostname>
# Enter the passphrase and password when prompted
# Repeat for all the 3 VMs
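If the host machine doesn't have an SSH key pair yet, generate one first; the loop below is just a sketch with placeholder hostnames:
# Generate a key pair (skip if ~/.ssh/id_ed25519 already exists)
ssh-keygen -t ed25519
# Copy the public key to every node
for h in <host1> <host2> <host3>; do ssh-copy-id <username>@$h; done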
Initialize the k8s cluster
Install and start microk8s on the master node, generate a join command with add-node, and then run the join command from the other nodes:
In some cases, microk8s may fail to start with a missing file error (see microk8s/issues/4361). As a workaround, create the missing file and restart microk8s:
# Install
sudo snap install microk8s --classic --channel=1.32
# Setup User permissions
sudo usermod -a -G microk8s $USER && \
mkdir -p ~/.kube && \
chmod 0700 ~/.kube
# Re-enter the session so the new group membership takes effect
newgrp microk8s
# Check status (start if required)
microk8s status
# In case of failure to start, use inspect
microk8s inspect
# Workaround to the missing file error; 8147 is the snap revision
# on this system; check yours under /var/snap/microk8s/ (or use 'current')
sudo touch /var/snap/microk8s/8147/var/kubernetes/backend/localnode.yaml
sudo snap restart microk8s
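Optionally, add an alias so the bundled kubectl can be called directly, and wait for the node to come up before proceeding (the alias is a convenience, not required by microk8s):
# Optional: alias kubectl to the bundled client (add to ~/.bashrc to persist)
alias kubectl='microk8s kubectl'
# Block until the node reports ready
microk8s status --wait-ready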
Check the cluster status and node list:
manas@manas-s01:~$ microk8s status
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
  enabled:
    dns                  # (core) CoreDNS
    ha-cluster           # (core) Configure high availability on the current node
    helm                 # (core) Helm - the package manager for Kubernetes
    helm3                # (core) Helm 3 - the package manager for Kubernetes
  disabled:
    cert-manager         # (core) Cloud native certificate management
    cis-hardening        # (core) Apply CIS K8s hardening
    community            # (core) The community addons repository
    dashboard            # (core) The Kubernetes dashboard
    host-access          # (core) Allow Pods connecting to Host services smoothly
    hostpath-storage     # (core) Storage class; allocates storage from host directory
    ingress              # (core) Ingress controller for external access
    kube-ovn             # (core) An advanced network fabric for Kubernetes
    mayastor             # (core) OpenEBS MayaStor
    metallb              # (core) Loadbalancer for your Kubernetes cluster
    metrics-server       # (core) K8s Metrics Server for API access to service metrics
    minio                # (core) MinIO object storage
    observability        # (core) A lightweight observability stack for logs, traces and metrics
    prometheus           # (core) Prometheus operator for monitoring and logging
    rbac                 # (core) Role-Based Access Control for authorisation
    registry             # (core) Private image registry exposed on localhost:32000
    rook-ceph            # (core) Distributed Ceph storage using Rook
    storage              # (core) Alias to hostpath-storage add-on, deprecated
manas@manas-s01:~$ microk8s kubectl get no
NAME        STATUS   ROLES    AGE   VERSION
manas-s01   Ready    <none>   22m   v1.32.3
Add the other nodes to the cluster:
manas@manas-s01:~$ microk8s add-node
From the node you wish to join to this cluster, run the following:
microk8s join 192.168.148.134:25000/...
Use the '--worker' flag to join a node as a worker not running the control plane, eg:
microk8s join 192.168.148.134:25000/... --worker
If the node you are adding is not reachable through the default interface you can use one of the following:
microk8s join 192.168.148.134:25000/.../...
microk8s join 172.17.0.1:25000/.../...
From each of the other nodes, join the cluster by pasting the command from above:
manas@manas-s02:~$ microk8s join 192.168.148.134:25000/.../...
Contacting cluster at 192.168.148.134
Waiting for this node to finish joining the cluster. .. .. .. .. .. .. .. .. .. ..
Successfully joined the cluster.
From the master node, ensure k8s is up and running. With three nodes joined, HA should now be enabled:
manas@manas-s01:~$ microk8s status
microk8s is running
high-availability: yes
datastore master nodes: 192.168.148.134:19001 192.168.148.136:19001 192.168.148.135:19001
datastore standby nodes: none
addons:
  enabled:
    dns                  # (core) CoreDNS
    ha-cluster           # (core) Configure high availability on the current node
    helm                 # (core) Helm - the package manager for Kubernetes
    helm3                # (core) Helm 3 - the package manager for Kubernetes
  disabled:
    cert-manager         # (core) Cloud native certificate management
    cis-hardening        # (core) Apply CIS K8s hardening
    community            # (core) The community addons repository
    dashboard            # (core) The Kubernetes dashboard
    host-access          # (core) Allow Pods connecting to Host services smoothly
    hostpath-storage     # (core) Storage class; allocates storage from host directory
    ingress              # (core) Ingress controller for external access
    kube-ovn             # (core) An advanced network fabric for Kubernetes
    mayastor             # (core) OpenEBS MayaStor
    metallb              # (core) Loadbalancer for your Kubernetes cluster
    metrics-server       # (core) K8s Metrics Server for API access to service metrics
    minio                # (core) MinIO object storage
    observability        # (core) A lightweight observability stack for logs, traces and metrics
    prometheus           # (core) Prometheus operator for monitoring and logging
    rbac                 # (core) Role-Based Access Control for authorisation
    registry             # (core) Private image registry exposed on localhost:32000
    rook-ceph            # (core) Distributed Ceph storage using Rook
    storage              # (core) Alias to hostpath-storage add-on, deprecated
Once all nodes have joined, we can see the following output:
manas@manas-s01:~$ sudo microk8s kubectl get no
NAME        STATUS   ROLES    AGE     VERSION
manas-s01   Ready    <none>   48m     v1.32.3
manas-s02   Ready    <none>   4m39s   v1.32.3
manas-s03   Ready    <none>   4m26s   v1.32.3
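The ROLES column shows <none> because MicroK8s does not set node-role labels; every node runs the control plane. If you prefer the column populated, you can add the standard label yourself (purely cosmetic, not required):
# Optional: label a node so ROLES shows control-plane
microk8s kubectl label node manas-s01 node-role.kubernetes.io/control-plane=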
You can see all the resources that are running:
manas@manas-s01:~$ microk8s kubectl get all --all-namespaces
NAMESPACE     NAME                                           READY   STATUS    RESTARTS   AGE
kube-system   pod/calico-kube-controllers-5947598c79-z6wcb   1/1     Running   0          51m
kube-system   pod/calico-node-f2jv7                          1/1     Running   0          29m
kube-system   pod/calico-node-jdmvj                          1/1     Running   0          7m56s
kube-system   pod/calico-node-lfbqs                          1/1     Running   0          7m43s
kube-system   pod/coredns-79b94494c7-k98hm                   1/1     Running   0          51m

NAMESPACE     NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
default       service/kubernetes   ClusterIP   10.152.183.1    <none>        443/TCP                  51m
kube-system   service/kube-dns     ClusterIP   10.152.183.10   <none>        53/UDP,53/TCP,9153/TCP   51m

NAMESPACE     NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node   3         3         3       3            3           kubernetes.io/os=linux   51m

NAMESPACE     NAME                                      READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers   1/1     1            1           51m
kube-system   deployment.apps/coredns                   1/1     1            1           51m

NAMESPACE     NAME                                                 DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-5947598c79   1         1         1       51m
kube-system   replicaset.apps/coredns-79b94494c7                   1         1         1       51m
Congratulations, you have a multi-node k8s cluster running!
Please note that microk8s commands differ from other k8s CLIs (everything is invoked via the microk8s prefix). Since this is a compliant k8s cluster, other tools should also work. The installation is snap-based, so config and log files live under the snap directory. Refer to https://microk8s.io/docs/command-reference for the full command list.
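For example, other tools can reuse the cluster credentials by exporting the kubeconfig; back up any existing ~/.kube/config before overwriting it:
# Export the kubeconfig for use by a standalone kubectl or other tools
microk8s config > ~/.kube/config
chmod 0600 ~/.kube/config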