Kubernetes and Containerlab: Part 1 – Building a Cluster

Jeffrey Lyon

Intro

Hello again to all my longtime readers 😉, and welcome to my new series, where we’ll dive into the world of Kubernetes and Containerlab (its creators call the combination Clabernetes) to help you build containerized virtual network environments from the ground up. As a former network engineer, my goal with this series is to help those with little to no Kubernetes experience get containerized labs up and running quickly, using minimal resources. Advanced platform engineers seeking a streamlined approach to deploying Kubernetes may also find these posts useful.

We’ll kick off this series by laying the foundation: building a small Kubernetes cluster. We'll use tools like Kubespray (Ansible) along with some of my custom playbooks to streamline the process. By the end of this post, we’ll transform three freshly deployed servers into a fully operational Kubernetes cluster.

In the next post, I’ll configure our existing cluster to deploy a shared MicroCeph storage pool, explore its use cases, and transform the cluster into a hyper-converged infrastructure.

From there, we'll explore Clabernetes, a Kubernetes-based solution created by the makers of Containerlab. Clabernetes deploys Containerlab topologies into a Kubernetes cluster, allowing them to scale beyond a single node. It’s designed to allow for some pretty robust networking labs, and I’ll show you how to create some example topologies. Each post in this series will build on the previous, providing practical, hands-on examples to guide you through every step.

By the end of this series, you’ll have a fully functional Kubernetes setup with advanced storage, networking, and simulation capabilities—perfect for learning, experimenting, or even scaling into production.

Let’s get started!

Scenario

Let’s imagine you’re a network engineer (maybe you are!) exploring alternatives to traditional virtual lab solutions. While tools like EVE-NG, GNS3, and Cisco’s CML are popular, they often struggle to scale efficiently. You want to build larger, more complex topologies to enhance both your day-to-day work and your networking knowledge. You've heard a lot about Containerlab recently and are eager to experiment, but you're unsure where to start and your resources are limited. Although you have extensive experience with physical and virtual network setups, container-based environments—especially Kubernetes—are new territory. You also know about the scalability and flexibility that Kubernetes and Containerlab offer for creating virtual labs and testing advanced topologies, and you want to integrate that power into your workflow.

Let’s begin by discussing the devices used in this post to build the cluster. All of them are lower-spec virtual machines running Rocky Linux 9.4. My goal is to demonstrate the power of Kubernetes and Containerlab, even when deployed with minimal, cost-effective resources. There are four devices in total: one server dedicated to Ansible and three Kubernetes nodes. The server specifications are as follows:

  • Ansible Host - 1 CPU core, 4 GB RAM, 50 GB OS disk

  • 3x Kubernetes Hosts - 2 CPU cores, 16 GB RAM, 50 GB OS disk, plus a 250 GB disk for the MicroCeph storage pool (covered in Part 2 of this series)

These systems communicate over the same 172.16.99.x management (MGMT) network.

NOTE: One host is dedicated as the control-plane node and also serves as the etcd server. All three hosts, including the control-plane node, will be set as worker nodes and can take on workloads.

NOTE: The Kubespray automation in this setup uses all other defaults, and no additional plugins will be installed. I may add sections to this post, or separate posts in the future, covering the use of additional plugins.

Requirements

I will include required packages, configuration, and setup for the systems involved in this automation.

NOTE: Unless specified otherwise, I am working as a non-root user (“jeff”) in my home directory.

Ansible host

You will need the following:

  • Update OS

      sudo dnf update -y
    
  • Python (3.9 or greater suggested)

    The default on Rocky 9 is Python 3.9; this setup uses 3.10.9. Installing a newer Python on Rocky is a little more involved, so I’ve included the steps below:

    Install Dependencies

      sudo dnf install tar curl gcc openssl-devel bzip2-devel libffi-devel zlib-devel wget make -y
    

    Install Python

      # Download and unzip
      wget https://www.python.org/ftp/python/3.10.9/Python-3.10.9.tar.xz
      tar -xf Python-3.10.9.tar.xz
      # Change directory and configure Python
      cd Python-3.10.9
      ./configure --enable-optimizations
      # Start and complete the build process (one job per CPU core)
      make -j "$(nproc)"
      # Install Python (altinstall avoids overwriting the system python3)
      sudo make altinstall
      # Verify install using
      python3.10 --version
    
  • Python Virtual Environment

    I suggest using a virtual environment for this setup. It makes it easier to keep Ansible, its modules, and Kubespray separate from anything else the host is being used for.

      # Create the virtual environment
      python3.10 -m venv kubespray_env
      # Activate the virtual environment
      source kubespray_env/bin/activate
    
      # To deactivate
      deactivate
    
  • Download Kubespray

      # Change into virtual environment directory
      cd kubespray_env
      # Pull down Kubespray from Github
      git clone https://github.com/kubernetes-sigs/kubespray.git
    
  • Install Ansible and Kubespray packages within the virtual environment

      # From within the virtual environment main folder
      cd kubespray
      # Install packages (Ansible mainly)
      pip3.10 install -r requirements.txt
    
  • Tweak Ansible configuration

    Modify your ansible.cfg file to disable SSH host key checking. It is usually located in /etc/ansible/. Create a new file if none exists.

      [defaults]
      host_key_checking = False
    

    NOTE: If you’re unsure where to find your ansible.cfg, just run ansible --version as shown below:

      ansible --version
    
      ansible [core 2.16.3]
        config file = /etc/ansible/ansible.cfg
    
  • Download my custom kubespray-addons repository

      # Change directory to root virtual environment folder
      cd ~/kubespray_env
      # Pull down kubespray-addons from Github
      git clone https://github.com/leothelyon17/kubespray-addons.git
    

Kubernetes Nodes (freshly created VMs)

  • Upgrade OS (same as Ansible host)

      sudo dnf update -y
    
      # Optional
      sudo dnf install nano -y
    

    NOTE: I also include the Nano text editor on these for quick file editing if needed. The default Python 3.9 included in the OS install works just fine.

That’s it! Automation takes care of everything else.

Getting into the Weeds

Automation Overview and Breakdown

We’ll start with a quick overview of Kubespray, then cover my custom add-on automation and what it strives to accomplish. Finally, we’ll break down both Addons playbooks, Pre and Post.

Kubespray

Kubespray is an open-source tool that automates the deployment of highly available Kubernetes clusters. It uses Ansible playbooks to install and configure Kubernetes across various environments, including bare-metal servers, virtual machines, or cloud infrastructures. Kubespray simplifies the deployment process, providing a robust, flexible, and scalable solution for setting up production-grade Kubernetes clusters.

Kubespray is undoubtedly a powerful tool. However, as I worked through various tutorials to get started, I noticed the number of steps required: setting up the inventory, configuring server settings, and addressing issues like setting up kubectl on the control-plane nodes once the cluster is up and running. This felt like something that needed to be addressed, which brings us to the next section…

Kubespray-Addons (Custom Automation)

I wanted to make using Kubespray and getting a K8s cluster up and running even easier than it already is. This is especially true for my fellow network engineers who might be new to all things Kubernetes, or anyone who doesn’t want to spend extra time on the additional setup required for running Kubespray.

The initial setup for Kubespray requires users to define environment variables, which are then passed into a Python script to generate the necessary inventory.yml file. This approach, outlined in the official Kubespray documentation and many online tutorials, produces an inventory file with numerous predefined defaults. However, users often still need to manually modify the Kubespray inventory file afterward. My goal was to create a more intuitive and streamlined solution, one that not only generates the required Kubespray inventory file but also drives the Addons playbooks.
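For comparison, the documented upstream flow looks roughly like this (a sketch paraphrased from the Kubespray docs; the inventory_builder script and its paths may differ between releases):

# Official approach: copy the sample inventory, then generate hosts.yaml
# from a list of node IPs using the bundled inventory builder script
cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(172.16.99.25 172.16.99.26 172.16.99.27)
CONFIG_FILE=inventory/mycluster/hosts.yaml python3 contrib/inventory_builder/inventory.py ${IPS[@]}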

Inventory - inventory.yml

Let’s break down the inventory file with an example:

---
all:
  hosts:
    rocky9-lab-node1:
      ansible_host: 172.16.99.25
      domain_name: jjland.local
      master_node: True
      worker_node: True
      etcd_node: True
    rocky9-lab-node2:
      ansible_host: 172.16.99.26
      domain_name: jjland.local
      master_node: False
      worker_node: True
      etcd_node: False
    rocky9-lab-node3:
      ansible_host: 172.16.99.27
      domain_name: jjland.local
      master_node: False
      worker_node: True
      etcd_node: False
    rocky9-lab-mgmt:
      ansible_host: 172.16.99.20
      domain_name: jjland.local

  children:
    k8s_nodes:
      hosts:
        rocky9-lab-node1:
        rocky9-lab-node2:
        rocky9-lab-node3:
    ansible_nodes:
      hosts:
        rocky9-lab-mgmt:

  vars:
    ansible_user: jeff

The file, which users need to customize, is based on the official Kubespray inventory file but with some key improvements. My version allows users to predefine the roles of each node, something the official method doesn’t provide. It also specifies individual host names, which are used not only in the Addons playbooks but also to properly name the Kubernetes nodes instead of the default ‘node’ naming from the Kubespray file. Additionally, it defines the domain name, used for updating the /etc/hosts file on all hosts during a task in the Pre-Kubespray playbook, sets the ansible_host variable for device connections, and configures the ansible_user for all Addons playbook tasks.
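After customizing the file, a quick sanity check is Ansible’s built-in inventory graph, which confirms the file parses and the group membership matches your intent (the output shape below is approximate, based on the example inventory above) -

ansible-inventory -i inventory.yml --graph

# Roughly expected graph:
# @all:
#   |--@ungrouped:
#   |--@k8s_nodes:
#   |  |--rocky9-lab-node1
#   |  |--rocky9-lab-node2
#   |  |--rocky9-lab-node3
#   |--@ansible_nodes:
#   |  |--rocky9-lab-mgmt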

Pre-Kubespray Playbook - pre-kubespray-setup-pb.yml

This playbook consists of two plays and is designed to fully prepare a set of hosts for Kubernetes deployment using Kubespray. It installs required Ansible collections, sets up SSH key-based authentication, modifies system configurations (disables swap, configures sysctl settings), ensures required kernel modules are loaded, and configures firewall rules.

Play 1 - Pre Kubespray Setup

---
- name: Pre Kubespray Setup
  hosts: all
  gather_facts: false

  tasks:

    - name: Install collections from requirements.yml
      ansible.builtin.command:
        cmd: ansible-galaxy collection install -r requirements.yml
      delegate_to: localhost
      run_once: true

    - name: Generate SSH key pair
      openssh_keypair:
        path: "/home/{{ ansible_user }}/.ssh/kubespray_ansible"
        type: rsa
        size: 2048
        state: present
        mode: '0600'
      register: ssh_keypair_result
      delegate_to: localhost
      run_once: true

    - name: Ensure the SSH public key is present on the remote host
      authorized_key:
        user: "{{ ansible_user }}"
        state: present
        key: "{{ lookup('file', '/home/{{ ansible_user }}/.ssh/kubespray_ansible.pub') }}"
      when: inventory_hostname not in groups['ansible_nodes']

    - name: Add entries to /etc/hosts
      become: true
      lineinfile:
        path: /etc/hosts
        state: present
        line: "{{ hostvars[item].ansible_host }} {{ hostvars[item].inventory_hostname }}.{{ hostvars[item].domain_name }} {{ hostvars[item].inventory_hostname }}"
        backup: yes
      loop: "{{ groups['all'] }}"
      loop_control:
        loop_var: item

Purpose:
This play ensures that all hosts are prepared for Kubespray by installing required Ansible collections, generating SSH keys, and configuring the environment.

Hosts:
Targets all hosts; individual tasks are delegated to localhost or limited with run_once where noted.

Tasks:

  1. Install collections from requirements.yml

    Installs required Ansible collections from requirements.yml. This is only run once on the localhost. Right now, the only requirements are community.crypto and ansible.posix.

  2. Generate SSH key pair
    Generates an RSA SSH key pair for Ansible on the localhost for later access to remote hosts. The private key is stored in the .ssh directory under kubespray_ansible.

  3. Ensure the SSH public key is present on the remote host
    Adds the generated SSH public key to the remote hosts to allow passwordless access. It applies this only to hosts not in the ansible_nodes group.

    NOTE: You will still need the ‘sudo’ password. This leaves flexibility to add an additional task for passwordless sudo (see the sketch after this list); I may add that feature later.

  4. Add entries to /etc/hosts
    Adds entries to the /etc/hosts file on each host to ensure proper DNS resolution between them. It loops through all hosts in the inventory and updates their hosts file with IP addresses and hostnames.
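For anyone who wants passwordless sudo now, here is a minimal manual sketch, assuming this setup’s lab user “jeff”; run it on each node, or translate it into an Ansible task:

# Hypothetical sudoers fragment granting passwordless sudo to the lab user
echo 'jeff ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/jeff
sudo chmod 0440 /etc/sudoers.d/jeff
# Validate the fragment before relying on it
sudo visudo -cf /etc/sudoers.d/jeff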

Play 2 - Build Kubespray inventory and additional k8s node setup

- name: Build Kubespray inventory and additional k8s node setup
  hosts: k8s_nodes
  gather_facts: false
  tasks:
    - name: Create inventory directory if it does not exist
      ansible.builtin.file:
        path: ../kubespray/inventory/
        state: directory
        mode: '0755'
      delegate_to: localhost
      run_once: true

    - name: Generate inventory.yml for kubespray using Jinja2
      template:
        src: ./templates/kubespray-inventory-yaml.j2
        dest: ./k8s-hosts.yml
      delegate_to: localhost

    - name: Copy completed template to kubespray inventory folder
      ansible.builtin.copy:
        src: ./k8s-hosts.yml
        dest: ../kubespray/inventory
        mode: '0755'
      delegate_to: localhost

    - name: Disable swap
      become: true
      ansible.builtin.command: swapoff -a

    - name: Remove swap entry from /etc/fstab
      become: true
      ansible.builtin.replace:
        path: /etc/fstab
        regexp: '(^.*swap.*$)'
        replace: '# \1'

    - name: Load necessary kernel modules
      become: true
      ansible.builtin.modprobe:
        name: "{{ item }}"
      loop:
        - br_netfilter
        - overlay

    - name: Ensure kernel modules are loaded on boot
      become: true
      ansible.builtin.copy:
        dest: /etc/modules-load.d/kubernetes.conf
        content: |
          br_netfilter
          overlay

    - name: Configure sysctl for Kubernetes networking
      become: true
      ansible.builtin.copy:
        dest: /etc/sysctl.d/kubernetes.conf
        content: |
          net.bridge.bridge-nf-call-ip6tables = 1
          net.bridge.bridge-nf-call-iptables = 1
          net.ipv4.ip_forward = 1
      notify:
        - Reload sysctl

    - name: Apply sysctl settings
      become: true
      ansible.builtin.command: sysctl --system

    - name: Configure firewall rules for Kubernetes
      become: true
      ansible.builtin.firewalld:
        service: "{{ item }}"
        permanent: yes
        state: enabled
        immediate: yes
      loop:
        - ssh
        - http
        - https
        - kube-api 
        - kube-apiserver
        - kube-control-plane
        - kube-control-plane-secure 
        - kube-controller-manager
        - kube-controller-manager-secure
        - kube-nodeport-services
        - kube-scheduler 
        - kube-scheduler-secure
        - kube-worker 
        - kubelet
        - kubelet-readonly 
        - kubelet-worker
        - etcd-server
      notify:
        - Reload firewalld

Purpose:
This play sets up the environment for Kubernetes and configures the nodes for a Kubespray deployment.

Hosts:
Targets the 3 Kubernetes nodes only.

Tasks:

  1. Create inventory directory if it does not exist
    Creates the inventory directory required by Kubespray to store the inventory.yml file.

  2. Generate inventory.yml for Kubespray using Jinja2
    Uses a Jinja2 template to create the inventory.yml file that Kubespray needs, based on the defined hosts.

  3. Copy completed template to Kubespray inventory folder
    Copies the generated inventory.yml file into the Kubespray directory for further use.

  4. Disable swap
    Disables swap on the hosts as required by Kubernetes.

  5. Remove swap entry from /etc/fstab
    Removes any entries related to swap in /etc/fstab to prevent it from re-enabling at boot.

  6. Load necessary kernel modules
    Loads required kernel modules (br_netfilter and overlay) for Kubernetes networking.

  7. Ensure kernel modules are loaded on boot
    Adds kernel modules to /etc/modules-load.d/kubernetes.conf to ensure they are loaded on boot.

  8. Configure sysctl for Kubernetes networking
    Configures sysctl settings to enable IP forwarding and ensure proper Kubernetes networking (net.bridge.bridge-nf-call-iptables, net.ipv4.ip_forward).

  9. Apply sysctl settings

    Applies the sysctl settings to ensure they are active immediately.

  10. Configure firewall rules for Kubernetes
    Configures firewalld to allow traffic on essential Kubernetes services like ssh, kube-api, and more.
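Once this play has run, you can spot-check the results on any of the nodes -

# Quick post-play spot checks on a Kubernetes node
lsmod | grep -E 'br_netfilter|overlay'   # both kernel modules should be listed
sysctl net.ipv4.ip_forward               # should print: net.ipv4.ip_forward = 1
swapon --show                            # no output means swap is disabled
sudo firewall-cmd --list-services        # should include the kube-* services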

Handlers

  handlers:
    - name: Reload firewalld
      become: true
      ansible.builtin.command: systemctl reload firewalld

    - name: Reload sysctl
      become: true
      ansible.builtin.command: sysctl --system

Purpose:
The handlers will reload the firewall and apply sysctl settings when triggered.

  1. Reload firewalld

    Reloads the firewalld service to apply the newly configured rules.

  2. Reload sysctl

    Reloads the sysctl configurations to apply networking changes.

Post-Kubespray Playbook - post-kubespray-setup-pb.yml

The Post playbook currently performs a single function, though it involves several tasks. It downloads the latest version of kubectl, sets up the correct configuration, and ensures proper file ownership for the user. The playbook primarily uses Ansible's file and shell modules, essentially turning a series of steps from the documentation into an automated process. Notably, these kubectl tasks only run on hosts designated as Kubernetes control-plane nodes. I plan to expand this playbook in the future to include additional tasks, such as tests and more.
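For context, the manual steps this play automates look roughly like the following on a control-plane node (a sketch paraphrased from the upstream documentation; the playbook below is the authoritative version):

# Download the latest stable kubectl binary
curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
# Copy the admin kubeconfig into the user's home and take ownership of it
mkdir -p ~/.kube
sudo cp /etc/kubernetes/admin.conf ~/.kube/config
sudo chown "$(id -u):$(id -g)" ~/.kube/config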

---
- name: Setup kubectl on control plane nodes
  hosts: k8s_nodes
  gather_facts: false
  tasks:

    - name: Kubectl block
      block:
        - name: Download kubectl files (latest)
          ansible.builtin.shell:
            cmd: curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
            chdir: /home/{{ ansible_user }}

        - name: Copy kubernetes admin configuration
          become: true
          ansible.builtin.shell:
            cmd: cp /etc/kubernetes/admin.conf /home/{{ ansible_user }}/config
            chdir: /home/{{ ansible_user }}

        - name: Remove existing .kube directory
          ansible.builtin.file:
            path: /home/{{ ansible_user }}/.kube
            state: absent

        - name: Create fresh .kube directory
          ansible.builtin.file:
            path: /home/{{ ansible_user }}/.kube
            state: directory
            mode: '0755'

        - name: Move kubernetes admin configuration
          ansible.builtin.shell:
            cmd: mv config .kube/
            chdir: /home/{{ ansible_user }}

        - name: Correct ownership of .kube config
          become: true
          ansible.builtin.file:
            path: /home/{{ ansible_user }}/.kube/config
            owner: "{{ ansible_user }}"
            group: 1000

      when: hostvars[inventory_hostname]['master_node']

Building the K8s Cluster (Running the Playbooks)

pre-kubespray-setup-pb.yml

To start, the inventory file needs to be created or modified. If it was pulled down from the GitHub repository, you will only need to modify it according to your needs; if not, the example inventory.yml file shown earlier in this post can be used. In the example file there is just a single control-plane node and etcd server (rocky9-lab-node1). All 3 nodes are set as worker nodes.

To execute the playbook run the following command -

ansible-playbook pre-kubespray-setup-pb.yml -i inventory.yml --ask-become-pass --ask-pass

NOTE: This assumes all previous setup was completed, the Python virtual environment is active, and the kubespray-addons folder sits alongside the main kubespray folder. Otherwise this playbook will fail.

The SSH/sudo passwords for the K8s nodes will need to be entered.
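Before launching the playbook, an optional sanity check is to confirm Ansible can reach every host in the inventory -

# Should return "ping": "pong" for all four hosts
ansible all -i inventory.yml -m ping --ask-pass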

Below is an example output of a successful Pre-Kubespray playbook run -

(kubespray_env) [jeff@rocky9-lab-mgmt kubespray-addons]$ ansible-playbook pre-kubespray-setup-pb.yml -i inventory.yml --ask-become-pass --ask-pass
SSH password: 
BECOME password[defaults to SSH password]: 

PLAY [Pre Kubespray Setup] ***************************************************************************************************************************

TASK [Install collections from requirements.yml] *****************************************************************************************************
changed: [rocky9-lab-node1 -> localhost]

TASK [Generate SSH key pair] *************************************************************************************************************************
ok: [rocky9-lab-node1 -> localhost]

TASK [Ensure the SSH public key is present on the remote host] ***************************************************************************************
skipping: [rocky9-lab-mgmt]
changed: [rocky9-lab-node1]
changed: [rocky9-lab-node2]
changed: [rocky9-lab-node3]

TASK [Add entries to /etc/hosts] *********************************************************************************************************************
changed: [rocky9-lab-node2] => (item=rocky9-lab-node1)
changed: [rocky9-lab-node1] => (item=rocky9-lab-node1)
changed: [rocky9-lab-node3] => (item=rocky9-lab-node1)
changed: [rocky9-lab-node1] => (item=rocky9-lab-node2)
changed: [rocky9-lab-node2] => (item=rocky9-lab-node2)
changed: [rocky9-lab-node3] => (item=rocky9-lab-node2)
ok: [rocky9-lab-mgmt] => (item=rocky9-lab-node1)
changed: [rocky9-lab-node2] => (item=rocky9-lab-node3)
changed: [rocky9-lab-node1] => (item=rocky9-lab-node3)
changed: [rocky9-lab-node3] => (item=rocky9-lab-node3)
changed: [rocky9-lab-node2] => (item=rocky9-lab-mgmt)
changed: [rocky9-lab-node1] => (item=rocky9-lab-mgmt)
ok: [rocky9-lab-mgmt] => (item=rocky9-lab-node2)
changed: [rocky9-lab-node3] => (item=rocky9-lab-mgmt)
ok: [rocky9-lab-mgmt] => (item=rocky9-lab-node3)
ok: [rocky9-lab-mgmt] => (item=rocky9-lab-mgmt)

PLAY [Build Kubespray inventory and additional k8s node setup] ***************************************************************************************

TASK [Create inventory directory if it does not exist] ***********************************************************************************************
ok: [rocky9-lab-node1 -> localhost]

TASK [Generate inventory.yml for kubespray using Jinja2] *********************************************************************************************
ok: [rocky9-lab-node2 -> localhost]
ok: [rocky9-lab-node3 -> localhost]
ok: [rocky9-lab-node1 -> localhost]

TASK [Copy completed template to kubespray inventory folder] *****************************************************************************************
changed: [rocky9-lab-node1 -> localhost]
changed: [rocky9-lab-node2 -> localhost]
changed: [rocky9-lab-node3 -> localhost]

TASK [Disable swap] **********************************************************************************************************************************
changed: [rocky9-lab-node1]
changed: [rocky9-lab-node3]
changed: [rocky9-lab-node2]

TASK [Remove swap entry from /etc/fstab] *************************************************************************************************************
changed: [rocky9-lab-node2]
changed: [rocky9-lab-node1]
changed: [rocky9-lab-node3]

TASK [Load necessary kernel modules] *****************************************************************************************************************
changed: [rocky9-lab-node1] => (item=br_netfilter)
changed: [rocky9-lab-node3] => (item=br_netfilter)
changed: [rocky9-lab-node2] => (item=br_netfilter)
changed: [rocky9-lab-node1] => (item=overlay)
changed: [rocky9-lab-node2] => (item=overlay)
changed: [rocky9-lab-node3] => (item=overlay)

TASK [Ensure kernel modules are loaded on boot] ******************************************************************************************************
changed: [rocky9-lab-node2]
changed: [rocky9-lab-node3]
changed: [rocky9-lab-node1]

TASK [Configure sysctl for Kubernetes networking] ****************************************************************************************************
changed: [rocky9-lab-node1]
changed: [rocky9-lab-node2]
changed: [rocky9-lab-node3]

TASK [Apply sysctl settings] *************************************************************************************************************************
changed: [rocky9-lab-node2]
changed: [rocky9-lab-node1]
changed: [rocky9-lab-node3]

TASK [Configure firewall rules for Kubernetes] *******************************************************************************************************
ok: [rocky9-lab-node1] => (item=ssh)
ok: [rocky9-lab-node2] => (item=ssh)
ok: [rocky9-lab-node3] => (item=ssh)
changed: [rocky9-lab-node3] => (item=http)
changed: [rocky9-lab-node1] => (item=http)
changed: [rocky9-lab-node2] => (item=http)
...output omitted for brevity...
changed: [rocky9-lab-node2] => (item=etcd-server)
changed: [rocky9-lab-node3] => (item=kubelet-worker)
changed: [rocky9-lab-node3] => (item=etcd-server)

RUNNING HANDLER [Reload firewalld] *******************************************************************************************************************
changed: [rocky9-lab-node1]
changed: [rocky9-lab-node2]
changed: [rocky9-lab-node3]

RUNNING HANDLER [Reload sysctl] **********************************************************************************************************************
changed: [rocky9-lab-node2]
changed: [rocky9-lab-node1]
changed: [rocky9-lab-node3]

PLAY RECAP *******************************************************************************************************************************************
rocky9-lab-mgmt            : ok=1    changed=0    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0   
rocky9-lab-node1           : ok=16   changed=13   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
rocky9-lab-node2           : ok=13   changed=12   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
rocky9-lab-node3           : ok=13   changed=12   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

There should be many changes for the K8s nodes and no failures.

The Kubernetes nodes should now be ready for Kubespray to set up and run Kubernetes.

Kubespray

The next step is executing the Kubespray cluster-build playbook, which should be very easy now. We will use the k8s-hosts.yml file created by the Pre-Kubespray playbook as the inventory Kubespray requires. It is located in the inventory folder within the main Kubespray directory. You can see the contents of this file below -

all:
  hosts:
    rocky9-lab-node1:
      ansible_host: 172.16.99.25
      ip: 172.16.99.25
      access_ip: 172.16.99.25
    rocky9-lab-node2:
      ansible_host: 172.16.99.26
      ip: 172.16.99.26
      access_ip: 172.16.99.26
    rocky9-lab-node3:
      ansible_host: 172.16.99.27
      ip: 172.16.99.27
      access_ip: 172.16.99.27

  children:
    kube_control_plane:
      hosts:
        rocky9-lab-node1:

    kube_node:
      hosts:
        rocky9-lab-node1:
        rocky9-lab-node2:
        rocky9-lab-node3:

    etcd:
      hosts:
        rocky9-lab-node1:

    k8s_cluster:
      children:
        kube_control_plane:
        kube_node:
    calico_rr:
      hosts: {}

Change into the main Kubespray directory and execute the playbook as shown below -

ansible-playbook -i inventory/k8s-hosts.yml --ask-pass --become --ask-become-pass cluster.yml

NOTE: Kubespray/Kubernetes requires ‘root’ access to run successfully, hence the --become flag. SSH/sudo passwords are again required.

Kubespray can take 15-20 minutes to finish execution. The output is vast, so I won’t paste a full example here. A successful run should end with output like the below -

PLAY RECAP ***********************************************************************************************************************************************
rocky9-lab-node1           : ok=649  changed=88   unreachable=0    failed=0    skipped=1090 rescued=0    ignored=6   
rocky9-lab-node2           : ok=415  changed=36   unreachable=0    failed=0    skipped=625  rescued=0    ignored=1   
rocky9-lab-node3           : ok=416  changed=37   unreachable=0    failed=0    skipped=624  rescued=0    ignored=1   

Saturday 05 October 2024  13:39:17 -0400 (0:00:00.115)       0:07:36.442 ****** 
=============================================================================== 
kubernetes/kubeadm : Join to cluster ------------------------------------------------------------------------------------------------------------- 21.11s
kubernetes/control-plane : Kubeadm | Initialize first control plane node ------------------------------------------------------------------------- 20.15s
download : Download_container | Download image if required --------------------------------------------------------------------------------------- 11.65s
download : Download_container | Download image if required --------------------------------------------------------------------------------------- 10.34s
container-engine/runc : Download_file | Download item --------------------------------------------------------------------------------------------- 8.51s
container-engine/containerd : Download_file | Download item --------------------------------------------------------------------------------------- 8.25s
container-engine/crictl : Download_file | Download item ------------------------------------------------------------------------------------------- 8.19s
container-engine/nerdctl : Download_file | Download item ------------------------------------------------------------------------------------------ 8.16s
download : Download_container | Download image if required ---------------------------------------------------------------------------------------- 7.65s
etcd : Reload etcd -------------------------------------------------------------------------------------------------------------------------------- 6.14s
container-engine/crictl : Extract_file | Unpacking archive ---------------------------------------------------------------------------------------- 6.08s
container-engine/nerdctl : Extract_file | Unpacking archive --------------------------------------------------------------------------------------- 5.62s
download : Download_container | Download image if required ---------------------------------------------------------------------------------------- 5.23s
etcd : Configure | Check if etcd cluster is healthy ----------------------------------------------------------------------------------------------- 5.23s
kubernetes-apps/ansible : Kubernetes Apps | Lay Down CoreDNS templates ---------------------------------------------------------------------------- 4.75s
kubernetes-apps/ansible : Kubernetes Apps | Start Resources --------------------------------------------------------------------------------------- 4.05s
download : Download_container | Download image if required ---------------------------------------------------------------------------------------- 4.02s
download : Download_container | Download image if required ---------------------------------------------------------------------------------------- 3.58s
network_plugin/cni : CNI | Copy cni plugins ------------------------------------------------------------------------------------------------------- 3.25s
download : Download_file | Download item ---------------------------------------------------------------------------------------------------------- 3.00s

Kubespray execution can sometimes fail due to connectivity issues or similar problems, especially when pulling down multiple container images, which might time out. If this happens, simply re-run the playbook as described earlier. It will pick up where it left off, skipping the tasks that have already been successfully completed.

If you want to wipe out the Kubespray/Kubernetes cluster, Kubespray provides a playbook for that as well. It can be executed as shown in the example below -

ansible-playbook -i inventory/k8s-hosts.yml --ask-pass --become --ask-become-pass reset.yml

post-kubespray-setup-pb.yml

After successfully creating a K8s cluster using Kubespray, the last piece required is configuring kubectl on the control-plane nodes. To do this, change back into the kubespray-addons directory. The Post-Kubespray playbook can then be executed as seen below -

ansible-playbook post-kubespray-setup-pb.yml -i inventory.yml --ask-pass --ask-become-pass

A successful execution should produce output like the following -

(kubespray_env) [jeff@rocky9-lab-mgmt kubespray-addons]$ ansible-playbook post-kubespray-setup-pb.yml -i inventory.yml --ask-pass --ask-become-pass 
SSH password: 
BECOME password[defaults to SSH password]: 

PLAY [Setup kubectl on control plane nodes] **********************************************************************************************************

TASK [Download kubectl files (latest)] ***************************************************************************************************************
skipping: [rocky9-lab-node2]
skipping: [rocky9-lab-node3]
changed: [rocky9-lab-node1]

TASK [Copy kubernetes admin configuration] ***********************************************************************************************************
skipping: [rocky9-lab-node2]
skipping: [rocky9-lab-node3]
changed: [rocky9-lab-node1]

TASK [Remove existing .kube directory] ***************************************************************************************************************
skipping: [rocky9-lab-node2]
skipping: [rocky9-lab-node3]
ok: [rocky9-lab-node1]

TASK [Create fresh .kube directory] ******************************************************************************************************************
skipping: [rocky9-lab-node2]
skipping: [rocky9-lab-node3]
changed: [rocky9-lab-node1]

TASK [Move kubernetes admin configuration] ***********************************************************************************************************
skipping: [rocky9-lab-node2]
skipping: [rocky9-lab-node3]
changed: [rocky9-lab-node1]

TASK [Correct ownership of .kube config] *************************************************************************************************************
skipping: [rocky9-lab-node2]
skipping: [rocky9-lab-node3]
changed: [rocky9-lab-node1]

PLAY RECAP *******************************************************************************************************************************************
rocky9-lab-node1           : ok=6    changed=5    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
rocky9-lab-node2           : ok=0    changed=0    unreachable=0    failed=0    skipped=6    rescued=0    ignored=0   
rocky9-lab-node3           : ok=0    changed=0    unreachable=0    failed=0    skipped=6    rescued=0    ignored=0

NOTE: You should see ‘changed’ only for nodes designated as control-plane nodes.

Closing Thoughts

If all 3 playbooks ran successfully, CONGRATULATIONS, you should have a fully working Kubernetes cluster. To confirm, log into any of the cluster’s control-plane nodes and run kubectl get nodes. You should see output like the following -

[jeff@rocky9-lab-node1 ~]$ kubectl get nodes
NAME               STATUS   ROLES           AGE   VERSION
rocky9-lab-node1   Ready    control-plane   39m   v1.30.4
rocky9-lab-node2   Ready    <none>          39m   v1.30.4
rocky9-lab-node3   Ready    <none>          39m   v1.30.4
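For a couple of extra health checks beyond node status, you can also verify the system pods and control-plane endpoints -

# All kube-system pods should be Running (or Completed)
kubectl get pods -n kube-system
# Prints the API server and CoreDNS endpoints
kubectl cluster-info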

The Kubernetes cluster is fully set up, providing a solid foundation for what’s coming in the next post, and eventually Containerlab/Clabernetes. You can also use this cluster to dive deeper into the world of Kubernetes beyond what we’re covering here. Experiment, expand the cluster, tear it down, and rebuild it—become an expert if you wish. Hopefully, this post makes the entry into Kubernetes a bit easier for those starting out.

What’s next?

This series will include at least three parts -

  1. Building a Cluster (Part 1 - this post)

  2. Adding a Built-in Storage Cluster using MicroCeph (Part 2)

  3. Setting up and exploring Containerlab/Clabernetes (Part 3)

I also plan to add posts covering specific topology examples, integration with other tools, and network automation testing. These topics may either extend this series or become their own separate posts. There’s always a wealth of topics to explore and write about.

You can find the code that goes along with this post here (GitHub).

Thoughts, questions, and comments are appreciated. Please follow me here on Hashnode or connect with me on LinkedIn.

Thank you for reading, fellow techies!
