Kubernetes on Apple MacBooks (M Series)

Aditya Samant
14 min read

There are many options to provision a local Kubernetes cluster on your laptop. The most popular ones are minikube, kind, K3s and MicroK8s. These options provide a simple and fast way to get Kubernetes running on your laptop by abstracting the complexities within the Kubernetes control plane.

Kubeadm is a tool that facilitates provisioning Kubernetes clusters on virtual machines. It can provision a multi-node Kubernetes cluster for development or production purposes, whether on your local laptop, on-premises or in the public cloud. A cluster provisioned by kubeadm is a great playground for Kubernetes administrators. It is also useful for people pursuing the CKA and CKS certifications to practice tasks like cluster upgrades and troubleshooting.

VirtualBox is by far the most popular tool to spin up virtual machines (VMs) on a personal laptop. VirtualBox supports virtualization for x86 and AMD64 CPU architectures.

In 2020, Apple introduced the M series of MacBooks, which use the Apple Silicon chip based on the ARM64 CPU architecture. VirtualBox does not have good support for ARM64 machines (a developer preview exists, but it cannot be relied on). As the M series MacBooks have gained popularity, it is important to find an alternative virtualization tool that is tested and certified for ARM64. Enter Multipass by Canonical, a simple virtualization tool that is fully compatible with ARM64-based machines.

This article is a step-by-step walkthrough on how to install a Kubernetes cluster on a MacBook (M series) laptop using the kubeadm tool. It is a simplification of the steps in the official Kubernetes documentation.

Pre-requisites

  • A MacBook laptop (M series) with minimum 16 GB RAM (recommended).

  • Multipass by Canonical should be installed as per the instructions for macOS. After installation, verify that you are able to launch a sample Ubuntu instance, and clean up the instance after verification (a quick example follows this list).

  • Your account on your MacBook must have admin privileges and be able to use sudo.
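A quick sanity check could look like the following (the instance name test-vm is just a placeholder):

# Launch a throwaway Ubuntu instance and confirm that it boots
multipass launch --name test-vm
multipass exec test-vm -- lsb_release -a

# Clean up the test instance
multipass delete test-vm
multipass purge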

Provision the VMs

We will create 3 VMs for our setup as follows:

  • kubemaster: The controlplane node

  • kubeworker01: The first worker node

  • kubeworker02: The second worker node

Each VM will have the following configuration (you can adjust the values to suit your host machine's capacity):

  • Disk space: 10G

  • Memory: 3G

  • CPUs: 2

💡
In Multipass, the IP address allocated to a VM is, by default, subject to change after a reboot of the VM. If IP addresses change across reboots, the Kubernetes cluster breaks. It is therefore imperative that the VMs are provisioned with static IP addresses, as documented here.

Provisioning the controlplane instance (kubemaster)

Launch the kubemaster instance with a manual network

🗒
The values for the --network option need to be passed carefully:
name=en0: the name of the host network interface to attach to (typically the Wi-Fi interface on a MacBook). To get a list of possible values, use the command multipass networks.
mac="52:54:00:4b:ab:cd": a unique, random MAC address that will be allocated to the instance.
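If you are unsure which interface name to pass, list the host networks first:

multipass networks

You should see an output similar to the below (the exact interfaces vary by machine):

Name   Type   Description
en0    wifi   Wi-Fi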
multipass launch --disk 10G --memory 3G --cpus 2 --name kubemaster --network name=en0,mode=manual,mac="52:54:00:4b:ab:cd" jammy

You should see the following output:

Launched: kubemaster

Configure the extra interface

The macaddress field should contain the exact MAC address chosen in the multipass launch command.
The addresses field should contain the static IP address that will be allocated to this VM. The static IP address should be in the same subnet as the original IP address of the instance.
The original IP address allocated to the VM can be found by the multipass info kubemaster command as shown below:

multipass info kubemaster | grep IPv4

You should see an output similar to:
IPv4: 192.168.73.7
In this example, the original IP address of the instance is 192.168.73.7. So the static IP address can be chosen as 192.168.73.101

Execute the command shown below

multipass exec -n kubemaster -- sudo bash -c 'cat << EOF > /etc/netplan/10-custom.yaml
network:
  version: 2
  ethernets:
    extra0:
      dhcp4: no
      match:
        macaddress: "52:54:00:4b:ab:cd"
      addresses: [192.168.73.101/24]
EOF'

Apply the new configuration

multipass exec -n kubemaster -- sudo netplan apply
🗒
If you receive a warning stating that the configuration file permissions are too open, it can safely be ignored.

Confirm that it works

multipass info kubemaster | grep IPv4 -A1

You should see an output displaying both the original IP address and the static IP address:

IPv4:           192.168.73.7
                192.168.73.101

Let's test the network connectivity using the ping command:
Example:
Original IP of the instance: 192.168.73.7
Static IP of the instance: 192.168.73.101
IP of the host laptop: 192.168.0.2

All the commands below should return a successful output:

# Ping from local to the original IP address of kubemaster
ping 192.168.73.7

# Ping from local to the static IP address of kubemaster
ping 192.168.73.101

# Ping from kubemaster to local
multipass exec -n kubemaster -- ping 192.168.0.2

Provisioning the first worker node (kubeworker01)

โ—
The MAC address and static IP address chosen must be different from the ones allocated to the kubemaster instance.

Launch the kubeworker01 instance with a manual network

multipass launch --disk 10G --memory 3G --cpus 2 --name kubeworker01 --network name=en0,mode=manual,mac="52:54:00:4b:ba:dc" jammy

Configure the extra interface, similar to the steps performed for kubemaster

multipass exec -n kubeworker01 -- sudo bash -c 'cat << EOF > /etc/netplan/10-custom.yaml
network:
  version: 2
  ethernets:
    extra0:
      dhcp4: no
      match:
        macaddress: "52:54:00:4b:ba:dc"
      addresses: [192.168.73.102/24]
EOF'

Apply the new configuration

multipass exec -n kubeworker01 -- sudo netplan apply

Test using ping similar to the steps followed for kubemaster.
Additionally, test that ping from kubemaster to kubeworker01 and vice versa is working.

# Ping from local to the original IP address of kubeworker01
ping 192.168.73.8

# Ping from local to the static IP address of kubeworker01
ping 192.168.73.102

# Ping from kubeworker01 to local
multipass exec -n kubeworker01 -- ping 192.168.0.2

# Ping from kubeworker01 to kubemaster
multipass exec -n kubeworker01 -- ping 192.168.73.101

# Ping from kubemaster to kubeworker01
multipass exec -n kubemaster -- ping 192.168.73.102

Provisioning the second worker node (kubeworker02)

โ—
The MAC address and static IP address chosen must be different from the ones allocated to the kubemaster and kubeworker01 instances.

Launch the kubeworker02 instance with a manual network

multipass launch --disk 10G --memory 3G --cpus 2 --name kubeworker02 --network name=en0,mode=manual,mac="52:54:00:4b:cd:ab" jammy

Configure the extra interface, similar to the steps performed for kubemaster

multipass exec -n kubeworker02 -- sudo bash -c 'cat << EOF > /etc/netplan/10-custom.yaml
network:
  version: 2
  ethernets:
    extra0:
      dhcp4: no
      match:
        macaddress: "52:54:00:4b:cd:ab"
      addresses: [192.168.73.103/24]
EOF'

Apply the new configuration

multipass exec -n kubeworker02 -- sudo netplan apply

Test using ping similar to the steps followed for kubemaster.

Additionally, test that all 3 VMs are able to ping each other successfully through their static IPs.

# Ping from local to the original IP address of kubeworker02
ping 192.168.73.9

# Ping from local to the static IP address of kubeworker02
ping 192.168.73.103

# Ping from kubeworker02 to local
multipass exec -n kubeworker02 -- ping 192.168.0.2

# Ping from kubeworker02 to kubemaster
multipass exec -n kubeworker02 -- ping 192.168.73.101

# Ping from kubeworker02 to kubeworker01
multipass exec -n kubeworker02 -- ping 192.168.73.102

# Ping from kubemaster to kubeworker02
multipass exec -n kubemaster -- ping 192.168.73.103

# Ping from kubeworker01 to kubeworker02
multipass exec -n kubeworker01 -- ping 192.168.73.103

Configure the local DNS

Connect to the three machines in separate terminal tabs by using the multipass shell command

multipass shell kubemaster
multipass shell kubeworker01
multipass shell kubeworker02

Edit the /etc/hosts file on all 3 VMs

Enter the following configuration in the /etc/hosts file of each VM:

🗒
Use the static IP addresses chosen for each VM instance.
sudo vi /etc/hosts
#<static IP> <hostname>
192.168.73.101 kubemaster
192.168.73.102 kubeworker01
192.168.73.103 kubeworker02
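As an optional check, hostname resolution should now work between the nodes. For example, from the kubemaster shell:

# The hostname should resolve to the static IP of kubeworker01
ping -c 2 kubeworker01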

Install Kubernetes

Now that we have a perfect set of VMs up and running, it is time to proceed toward the Kubernetes installation.

Versions

The below versions are used in this lab.

Software / Package   Version   Location
containerd           1.7.14    releases
runc                 1.1.12    releases
CNI plugins          1.4.1     releases
kubeadm              1.29.3    apt-get
kubelet              1.29.3    apt-get
kubectl              1.29.3    apt-get
🗒
All commands mentioned below need to be executed from within the terminal of the VMs.

Install and configure prerequisites

Forwarding IPv4 and letting iptables see bridged traffic

Execute the below set of commands on kubemaster, kubeworker01 and kubeworker02

cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF

sudo modprobe overlay
sudo modprobe br_netfilter

# sysctl params required by setup, params persist across reboots
cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF

# Apply sysctl params without reboot
sudo sysctl --system

# Verify that the br_netfilter, overlay modules are loaded by running the following commands:
lsmod | grep br_netfilter
lsmod | grep overlay

# Verify that the net.bridge.bridge-nf-call-iptables, net.bridge.bridge-nf-call-ip6tables, and net.ipv4.ip_forward system variables are set to 1 in your sysctl config by running the following command:
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward

The output should confirm that all three variables are set to 1:

net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
🗒
For all packages installed in this tutorial, ensure that you use the arm64 variant only.
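If in doubt, the architecture can be confirmed from inside any of the VMs; on an M series MacBook this should print aarch64:

# Print the machine architecture of the VM
uname -m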

Install a Container Runtime

You need to install a container runtime into each node in the cluster so that Pods can run there.

Step 1: Install containerd

Execute the below commands on all 3 nodes

curl -LO https://github.com/containerd/containerd/releases/download/v1.7.14/containerd-1.7.14-linux-arm64.tar.gz

sudo tar Cxzvf /usr/local containerd-1.7.14-linux-arm64.tar.gz

curl -LO https://raw.githubusercontent.com/containerd/containerd/main/containerd.service

sudo mkdir -p /usr/local/lib/systemd/system/
sudo mv containerd.service /usr/local/lib/systemd/system/

sudo mkdir -p /etc/containerd/
sudo containerd config default | sudo tee /etc/containerd/config.toml > /dev/null

sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/g' /etc/containerd/config.toml

sudo systemctl daemon-reload
sudo systemctl enable --now containerd

# Check that the containerd service is up and running
systemctl status containerd

Verify that the output shows the containerd service up and running:

โ— containerd.service - containerd container runtime
     Loaded: loaded (/usr/local/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
     Active: active (running) since Tue 2024-03-26 11:15:20 IST; 5ms ago

Step 2: Install runc

Execute the below commands on all 3 nodes

curl -LO https://github.com/opencontainers/runc/releases/download/v1.1.12/runc.arm64

sudo install -m 755 runc.arm64 /usr/local/sbin/runc
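Optionally, confirm that runc is installed and available on the path:

# Print the installed runc version
runc --version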

Step 3: Install CNI plugins

Execute the below commands on all 3 nodes

curl -LO https://github.com/containernetworking/plugins/releases/download/v1.4.1/cni-plugins-linux-arm64-v1.4.1.tgz
sudo mkdir -p /opt/cni/bin
sudo tar Cxzvf /opt/cni/bin cni-plugins-linux-arm64-v1.4.1.tgz
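Optionally, confirm that the plugin binaries were extracted:

# The directory should contain plugin binaries such as bridge, host-local and loopback
ls /opt/cni/bin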

Install kubeadm, kubelet and kubectl

Execute the below commands on all 3 nodes

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates curl gpg
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Verify the installation using the below commands:

kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"29", GitVersion:"v1.29.3", GitCommit:"6813625b7cd706db5bc7388921be03071e1a492d", GitTreeState:"clean", BuildDate:"2024-03-15T00:06:16Z", GoVersion:"go1.21.8", Compiler:"gc", Platform:"linux/arm64"}
kubelet --version
Kubernetes v1.29.3
kubectl version --client
Client Version: v1.29.3
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3

Configure crictl to work with containerd

Execute the below commands on all 3 nodes

sudo crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
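This writes the endpoint configuration to /etc/crictl.yaml. As an optional check, confirm that crictl can reach containerd:

# Should report the crictl client version and the containerd runtime version
sudo crictl version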

Initializing the controlplane node

โ—
Commands for initializing the controlplane node should be executed on kubemaster only.

Execute the below command on kubemaster

โ—
apiserver-advertise-address must be the exact value of the static IP allocated to kubemaster.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.73.101

If the command runs successfully, you should see the message 'Your Kubernetes control-plane has initialized successfully!'

💡
Save the entire kubeadm join command that is printed in the output. It will be used when the worker nodes are ready to be joined to the cluster.

To make kubectl work for your non-root user, execute the below command on kubemaster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Verify that you are able to reach the cluster through kubectl:

Execute the below command on kubemaster

kubectl -n kube-system get pods
🗒
The coredns pods will not be Ready at this stage. This is expected, as we have not deployed the Pod network add-on yet.
NAME                                 READY   STATUS    RESTARTS      AGE
coredns-76f75df574-269qf             0/1     Pending   0             1m1s
coredns-76f75df574-6mcvd             0/1     Pending   0             1m1s
etcd-kubemaster                      1/1     Running   0             1m1s
kube-apiserver-kubemaster            1/1     Running   0             1m1s
kube-controller-manager-kubemaster   1/1     Running   0             1m1s
kube-proxy-7qfgq                     1/1     Running   0             1m1s
kube-scheduler-kubemaster            1/1     Running   0             1m1s

Install a Pod network add-on

You must deploy a Container Network Interface (CNI) based Pod network add-on so that your Pods can communicate with each other. Cluster DNS (CoreDNS) will not start up before a network is installed.

A list of all compatible Pod network add-ons can be found here.

In this lab, we will use Weave Net

Execute the below command on kubemaster

kubectl apply -f https://reweave.azurewebsites.net/k8s/v1.28/net.yaml

It can take up to a minute for the Weave Net pod to become ready.
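To keep an eye on the rollout, you can watch the pods in the kube-system namespace until the weave-net pod reports 2/2 Running:

# Watch pod status; press Ctrl+C to stop once weave-net shows 2/2 Running
kubectl -n kube-system get pods --watch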

โ—
At this point, the controlplane node should be ready with all pods in the kube-system namespace up and running. Please validate this to confirm the sanity of the controlplane.
kubectl -n kube-system get pods
NAME                                 READY   STATUS    RESTARTS      AGE
coredns-76f75df574-269qf             1/1     Running   0             3m16s
coredns-76f75df574-6mcvd             1/1     Running   0             3m16s
etcd-kubemaster                      1/1     Running   0             3m32s
kube-apiserver-kubemaster            1/1     Running   0             3m32s
kube-controller-manager-kubemaster   1/1     Running   0             3m32s
kube-proxy-7qfgq                     1/1     Running   0             3m16s
kube-scheduler-kubemaster            1/1     Running   0             3m33s
weave-net-mvld4                      2/2     Running   1 (23s ago)   40s

Join the worker nodes to the cluster

Connect to each worker node and run the entire kubeadm join command that was copied earlier from the output of the kubeadm init command.

Sample command to be executed on kubeworker01 and kubeworker02

sudo kubeadm join 192.168.73.101:6443 --token tn082a..... \
--discovery-token-ca-cert-hash sha256:c1b0143a.....
💡
If you missed making a note of the kubeadm join command earlier, you can generate a new one by running the below command on the controlplane and use it instead.
kubeadm token create --print-join-command

After a few seconds, check that all nodes have joined the cluster and are in a Ready state.

Execute the below command on kubemaster

kubectl get nodes
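You should see an output similar to the below (ages will differ):

NAME           STATUS   ROLES           AGE   VERSION
kubemaster     Ready    control-plane   10m   v1.29.3
kubeworker01   Ready    <none>          2m    v1.29.3
kubeworker02   Ready    <none>          90s   v1.29.3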

Validation

Validate that the Kubernetes setup is working correctly by deploying an nginx pod on the cluster.

Execute the below command on kubemaster

kubectl run test-nginx --image=nginx
kubectl get pod test-nginx
NAME         READY   STATUS    RESTARTS   AGE
test-nginx   1/1     Running   0          47s

Once the pod is in a Ready state, it's time to say congratulations! You have just built a fully functioning 3-node Kubernetes cluster on an M series MacBook.
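Once you are done, the test pod can be removed:

# Remove the test pod
kubectl delete pod test-nginx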

Backup and Restore

Multipass offers an easy and effective way to take a backup of the controlplane and worker nodes. Using this backup, a corrupt Kubernetes cluster can be restored to a previous working state.

Backup

In order to perform a backup, use the snapshot feature offered by multipass.

Execute the below commands on a local terminal

Stop the VMs

multipass stop kubeworker02
multipass stop kubeworker01
multipass stop kubemaster

Verify that the VMs are stopped

multipass list
Name                    State             IPv4             Image
kubemaster              Stopped           --               Ubuntu 22.04 LTS
kubeworker01            Stopped           --               Ubuntu 22.04 LTS
kubeworker02            Stopped           --               Ubuntu 22.04 LTS

Capture a snapshot

multipass snapshot kubemaster
multipass snapshot kubeworker01
multipass snapshot kubeworker02

Verify that the snapshots are present

multipass list --snapshots
Instance       Snapshot    Parent   Comment
kubemaster     snapshot1   --       --
kubeworker01   snapshot1   --       --
kubeworker02   snapshot1   --       --

Restore

In order to restore from a backup, use the restore command

💡
Substitute x with the number of the snapshot.
multipass restore kubemaster.snapshotx
multipass restore kubeworker01.snapshotx
multipass restore kubeworker02.snapshotx
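After the snapshots have been restored, start the VMs again so that the cluster comes back up:

# Start the restored VMs
multipass start kubemaster
multipass start kubeworker01
multipass start kubeworker02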

Cleanup

In order to clean up the cluster, delete the multipass VMs using the below commands:

The delete command performs a soft deletion of the VMs. In other words, it moves the VMs to the recycle bin.

multipass delete kubeworker02
multipass delete kubeworker01
multipass delete kubemaster

Verify the deletion using the following command:

multipass list
Name                    State             IPv4             Image
kubemaster              Deleted           --               Ubuntu 22.04 LTS
kubeworker01            Deleted           --               Ubuntu 22.04 LTS
kubeworker02            Deleted           --               Ubuntu 22.04 LTS

In order to recover the deleted VMs, use the recover command:

multipass recover kubemaster
multipass recover kubeworker01
multipass recover kubeworker02

In order to permanently delete the VMs, the delete command should be followed by the purge command:

multipass delete kubeworker02
multipass delete kubeworker01
multipass delete kubemaster
multipass purge
⚠
Purging an instance also deletes all the snapshots associated with this instance. In other words, the VMs cannot be recovered after being purged.

Written by

Aditya Samant

With a background in computer science and nearly two decades of experience in the industry, Aditya is enthusiastic about solving complex problems and staying up-to-date with the latest technologies. He has achieved the CKAD, CKA, CKS certifications in Kubernetes along with the AWS CLF-C02 and SAA-C03 certifications.โ€‹ He loves to share his knowledge through blogs, articles, videos and courses. He thrives on challenges and enjoys exploring new opportunities in the world of microservices and cloud-native technologies, with a particular emphasis on Kubernetes. Aditya is a member of the Kubernetes GitHub organization and actively contributes to the documentation for Kubernetes.