Set Up a 1-Master, 1-Worker Kubernetes Cluster on CentOS 7

Nikita Shinde

Let's first go over the fundamental definition before moving on to the setup.

What is a Kubernetes Cluster?

A Kubernetes cluster is the backbone of container orchestration, comprising at least two nodes: one master and one worker. The master oversees administrative tasks, while worker nodes host pods running user applications.

Master Node Functions:

  • Administration Hub: Manages scheduling, cluster state, pod replacement, and workload distribution.

Worker Node Components:

  • kube-proxy: A network proxy ensuring efficient communication among all cluster pods.

  • kubelet: The pod manager, initiating, maintaining, and reporting pod states to the master.

  • Container Runtime: Responsible for spinning up containers and facilitating OS interaction.

Project Dependency Alert:

As of Kubernetes 1.24, the built-in dockershim has been removed, so Docker Engine is no longer supported directly as a container runtime; containerd or CRI-O are the recommended runtimes. For Docker enthusiasts, cri-dockerd provides an adapter that bridges Docker Engine with the Kubernetes Container Runtime Interface (CRI).
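
Once the cluster from this guide is up, you can confirm which runtime each node is actually using; kubectl get nodes -o wide includes a CONTAINER-RUNTIME column:

    # Shows e.g. containerd://1.6.x under CONTAINER-RUNTIME for each node
    kubectl get nodes -o wide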

Before setting up a Kubernetes cluster, ensure that you have the following prerequisites:

  • System Requirements:

    • Master Node:

      • A dedicated machine to act as the Kubernetes master node.

      • Adequate CPU and memory resources.

      • CentOS 7 operating system.

    • Worker Node(s):

      • One or more dedicated machines to act as Kubernetes worker nodes.

      • Adequate CPU and memory resources.

      • CentOS 7 operating system.

  • Software Requirements:

    • containerd:

• Container runtime required for Kubernetes (built-in Docker support via dockershim was removed in Kubernetes 1.24).

      • Installed and configured on both master and worker nodes.

  • Kubernetes Components:

    • kubelet: Manages pods on the node.

    • kubeadm: Used to initialize the cluster.

    • kubectl: The command-line tool for interacting with the cluster.

    • Installed on both master and worker nodes.

  • Network Configuration:

    • Module Loading:

      • overlay and br_netfilter kernel modules loaded on all nodes.
    • Kernel Parameters:

      • Network-related sysctl settings configured on all nodes.
    • Firewall Configuration:

      • Firewall rules are configured to allow communication between nodes on specific ports.
    • Iptables Settings:

      • Iptables settings adjusted for proper packet processing.
    • SELinux and SWAP:

      • SELinux disabled or set to permissive mode.

      • SWAP disabled on all nodes.
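
Before diving in, a quick pre-flight check on each node can catch missing prerequisites early. The commands below are a minimal sketch; kubeadm's preflight checks expect at least 2 CPUs and roughly 1700 MB of RAM on the control-plane node:

    cat /etc/centos-release   # confirm CentOS 7
    nproc                     # 2+ CPUs required on the control plane
    free -m                   # ~1700 MB of RAM or more recommended
    swapon -s                 # no output means swap is already off
    getenforce                # SELinux mode (adjusted later in this guide)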

Chapter 1: Installing containerd for Kubernetes

1.1 Configure Prerequisites

  • Step 1: Load Required Modules

    Before installing Kubernetes components, it's crucial to load the necessary modules and configure them to load at boot time.

      sudo modprobe overlay
      sudo modprobe br_netfilter
    

    Explanation:

    • modprobe overlay: Loads the overlay kernel module, essential for containerd.

    • modprobe br_netfilter: Loads the bridge netfilter module, required so that bridged pod traffic can be filtered by iptables.

  • Step 2: Configure Module Loading

    Create a configuration file to load modules at boot time.

      cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
      overlay
      br_netfilter
      EOF
    

    Explanation:

    • tee: Writes the here-document content to the file with root privileges.

    • modules-load.d/containerd.conf: Ensures overlay and br_netfilter are loaded automatically at boot.
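
    As a quick optional sanity check, confirm both modules are loaded:

      # Both modules should appear in the output
      lsmod | grep -e overlay -e br_netfilter
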
  • Step 3: Set Up Other Prerequisites

    Configure the required kernel networking parameters so they take effect immediately and persist across reboots.

      cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
      net.bridge.bridge-nf-call-iptables = 1
      net.ipv4.ip_forward = 1
      net.bridge.bridge-nf-call-ip6tables = 1
      EOF
      sudo sysctl --system
    

    Explanation:

    • sysctl.d/99-kubernetes-cri.conf: Persists the networking parameters Kubernetes needs (bridged traffic visible to iptables, IP forwarding enabled).

    • sysctl --system: Applies settings from all sysctl configuration files immediately, without a reboot.
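
    To verify the parameters took effect, you can read them back:

      # All three should report 1
      sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward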

1.2 Install containerd

  • Step 4: Add Docker Repository

    Add the official Docker repository, which provides the containerd package. On CentOS 7 this is done with yum-config-manager (from the yum-utils package) rather than dnf.

      sudo yum install -y yum-utils
      sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
    

    Explanation:

    • yum-config-manager --add-repo: Adds the Docker repository to the system's package sources.
  • Step 5: Update and Install

    Update the package metadata and install containerd (published as containerd.io in the Docker repository).

      sudo yum update -y
      sudo yum install -y containerd.io
    

    Explanation:

    • yum update: Refreshes repository metadata and updates installed packages.

    • yum install containerd.io: Installs the containerd runtime.

  • Step 6: Create Configuration File

    Create a configuration file for containerd and set it as the default.

      sudo mkdir -p /etc/containerd
      sudo containerd config default | sudo tee /etc/containerd/config.toml
    

    Explanation:

    • mkdir: Creates a directory.

    • containerd config default: Generates default configuration.

  • Step 7: Set cgroupdriver to systemd

    Edit the containerd configuration file to set the cgroupdriver to systemd.

      sudo vi /etc/containerd/config.toml
    

    Explanation:

    • vi: Opens the configuration file in a text editor.

    • SystemdCgroup = true: Tells the runc runtime to use the systemd cgroup driver, matching the driver used by the kubelet. (If you prefer not to edit the file by hand, a non-interactive alternative is sketched after the reference snippet below.)

Find the following section:

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]

Change the value of SystemdCgroup to true.

    SystemdCgroup = true

Once you are done, match the section in your file to the following:

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
                BinaryName = ""
                CriuImagePath = ""
                CriuPath = ""
                CriuWorkPath = ""
                IoGid = 0
                IoUid = 0
                NoNewKeyring = false
                NoPivotRoot = false
                Root = ""
                ShimCgroup = ""
                SystemdCgroup = true
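
If you'd rather script this change than edit the file interactively, a simple sed one-liner (assuming the generated default config contains the single line "SystemdCgroup = false") achieves the same result:

    # Flip SystemdCgroup from false to true in the generated default config
    sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
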
  • Step 8: Restart containerd

    Apply the changes made in the previous step by restarting containerd.

      sudo systemctl restart containerd
    
  • Step 9: Verify containerd Installation

    Ensure that containerd is running with the following command.

      ps -ef | grep containerd
    

    Explanation:

    • ps -ef: Displays information about processes.

    • grep containerd: Filters the output for containerd.

If containerd is running successfully, the output should resemble:

    root       63087       1  0 13:16 ?        00:00:00 /usr/bin/containerd
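
You can also manage containerd through systemd; checking the service status and enabling it at boot is optional but convenient:

    # Look for "active (running)" in the output
    sudo systemctl status containerd
    # Start containerd automatically on boot
    sudo systemctl enable containerd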

Chapter 2: Installing Kubernetes on CentOS 7

Section 1: Installing Dependencies

  • Step 1: Install curl

    Before diving into Kubernetes installation, make sure to install the necessary dependencies, starting with curl.

      sudo yum install -y curl
    

    Explanation:

    • sudo yum install: Installs the curl package (it may already be present on CentOS 7).

    • curl: A command-line tool for transferring data.

  • Step 2: Configure Kubernetes Repository

    As Kubernetes packages are not available from official CentOS 7 repositories, configure the Kubernetes repository. This step is crucial for both the Master Node and each Worker Node.

      cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
      [kubernetes]
      name=Kubernetes
      baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
      enabled=1
      gpgcheck=1
      repo_gpgcheck=1
      gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
      EOF
    

    Explanation:

    • cat <<EOF | sudo tee: Creates and writes the repository file in one command, with root privileges.

    • yum.repos.d/kubernetes.repo: The repository configuration file for Kubernetes.

Section 2: Installing Kubernetes Components

  • Step 1: Install kubelet, kubeadm, and kubectl

    The core Kubernetes components, kubelet, kubeadm, and kubectl, are essential for managing your cluster. Install these packages on each node.

      sudo yum install -y kubelet kubeadm kubectl
      sudo systemctl enable kubelet
      sudo systemctl start kubelet
    

    Explanation:

    • sudo yum install: Installs the Kubernetes packages.

    • systemctl enable/start kubelet: Ensures kubelet starts on boot and is immediately started.
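
    To confirm the tools installed correctly, check their versions on each node:

      kubeadm version
      kubectl version --client
      kubelet --version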

  • Step 2: Set Hostname on Nodes

    Assign a unique hostname to each node: run the first command on the master and the second on the worker.

      sudo hostnamectl set-hostname master-node
      sudo hostnamectl set-hostname worker-node1
    

    Explanation:

    • sudo hostnamectl set-hostname: Sets the hostname for the node.
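
    If the nodes cannot resolve each other's hostnames through DNS, add entries to /etc/hosts on every node. The IP addresses below are placeholders; substitute your own:

      cat <<EOF | sudo tee -a /etc/hosts
      192.168.1.10 master-node
      192.168.1.11 worker-node1
      EOF
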
  • Step 3: Configure Firewall

    Enable communication across the cluster by configuring firewalld and adding the necessary ports.

      sudo systemctl start firewalld  # Start firewalld if not running
      sudo firewall-cmd --permanent --add-port=6443/tcp
      sudo firewall-cmd --permanent --add-port=2379-2380/tcp
      sudo firewall-cmd --permanent --add-port=10250/tcp
      sudo firewall-cmd --permanent --add-port=10251/tcp
      sudo firewall-cmd --permanent --add-port=10252/tcp
      sudo firewall-cmd --permanent --add-port=10255/tcp
      sudo firewall-cmd --reload
    

    Explanation:

    • firewall-cmd --permanent --add-port: Opens the control-plane ports (6443 API server, 2379-2380 etcd, 10250 kubelet, 10251 scheduler, 10252 controller manager, 10255 read-only kubelet); --reload applies the permanent rules.
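
    The ports above are the ones the Kubernetes control plane listens on. Worker nodes need a smaller set, chiefly the kubelet port and the NodePort service range; a sketch for the worker firewall:

      # Run on worker nodes
      sudo firewall-cmd --permanent --add-port=10250/tcp
      sudo firewall-cmd --permanent --add-port=30000-32767/tcp
      sudo firewall-cmd --reload
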
  • Step 4: Update Iptables Settings

    Adjust iptables-related kernel settings so bridged traffic is processed correctly (these overlap with the parameters set in Chapter 1; re-applying them is harmless).

      cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
      net.bridge.bridge-nf-call-ip6tables = 1
      net.bridge.bridge-nf-call-iptables = 1
      EOF
      sudo sysctl --system
    

    Explanation:

    • cat <<EOF | sudo tee: Writes the settings to the configuration file.

    • sysctl --system: Applies the sysctl settings.

  • Step 5: Disable SELinux

    Set SELinux to permissive mode (effectively disabling enforcement) so that containers can access the host filesystem as Kubernetes requires.

      sudo setenforce 0
      sudo sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
    

    Explanation:

    • sudo setenforce: Sets SELinux mode.

    • sudo sed -i: Edits the SELinux configuration file so the change persists after a reboot.

  • Step 6: Disable SWAP

    Disable swap, since the kubelet will not run properly (and by default refuses to start) with swap enabled.

      sudo sed -i '/swap/d' /etc/fstab
      sudo swapoff -a
    

    Explanation:

    • sudo sed -i: Removes swap entries from /etc/fstab so swap stays disabled after reboot.

    • sudo swapoff -a: Disables SWAP.
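
    You can confirm swap is off by checking that the Swap line reports zero:

      free -h | grep -i swap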

Chapter 3: Deploying and Initializing the Kubernetes Cluster

Section 1: Initializing the Cluster

  • Step 1: Deploying the Cluster

    Execute the following command on the master node to initialize the Kubernetes cluster. Save the kubeadm join command printed at the end of the output; you will need it when adding worker nodes.

      sudo kubeadm init
    

    Explanation:

    • sudo kubeadm init: Initializes the Kubernetes control-plane on the master node.
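
    Flannel, the pod network used later in this guide, defaults to the 10.244.0.0/16 pod CIDR, so it is common (though optional) to pass that CIDR explicitly at init time to avoid mismatches:

      sudo kubeadm init --pod-network-cidr=10.244.0.0/16
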
  • Step 2: Post-Initialization Steps

    After successful initialization, follow the provided instructions to set up your user configuration.

      mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    

    Alternatively, for root users:

      export KUBECONFIG=/etc/kubernetes/admin.conf
    

    Explanation:

    • mkdir -p: Creates the directory for Kubernetes configuration.

    • sudo cp -i: Copies the admin configuration to the user's home directory.

    • sudo chown: Sets ownership of the configuration file.
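
    A quick way to confirm kubectl can reach the new cluster:

      kubectl cluster-info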

  • Step 3: Deploying a Pod Network

    Deploy a pod network to enable communication within the cluster.

      kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
    

    Explanation:

    • kubectl apply -f: Applies the Flannel pod network configuration.
  • Step 4: Verifying Master Node

    Check if the master node is ready.

      kubectl get nodes
    

    Expect an output similar to:

      NAME          STATUS   ROLES           AGE     VERSION
      master-node   Ready    control-plane   2m50s   v1.24.1
    

    Also, ensure all pods are running correctly.

      kubectl get pods --all-namespaces

  • Step 5: Add Worker Node

    Move to the worker node and run the kubeadm join command that was printed at the end of kubeadm init (yours will have a different address, token, and hash).

      sudo kubeadm join 102.130.122.165:6443 --token uh9zuw.gy0m40a90sd4o3kl \
              --discovery-token-ca-cert-hash sha256:24490dd585768bc80eb9943432d6beadb3df40c9865e9cff03659943b57585b2
    

    Ensure the output indicates successful joining.
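
    If the join command was lost or the token has expired (tokens are valid for 24 hours by default), generate a fresh one on the master:

      sudo kubeadm token create --print-join-command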

  • Step 6: Verify Worker Node Joining

    Switch back to the master node and confirm the worker node's successful join.

      kubectl get nodes
    

    Set the role label for the worker node (use the node name reported by kubectl get nodes, worker-node1 in this guide).

      kubectl label node worker-node1 node-role.kubernetes.io/worker=worker
    

    Verify the role was set.

      kubectl get nodes
    
    If the Flannel pod ends up in a CrashLoopBackOff state (often caused by a pod CIDR mismatch), the following checks help narrow it down:

    1. Verify the Flannel pods: after correcting the pod CIDR and confirming the worker node is Ready, check their status and logs (no new errors should appear):

         kubectl get pods -n <namespace>

         kubectl logs <flannel-pod-name> -n <namespace>

    2. Additional considerations:

      • Ensure network policies, if used, are configured correctly.

      • Confirm that the Kubernetes API server is accessible from both the control plane and the worker nodes.
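
    As a final, optional smoke test (the deployment name nginx here is just an example), deploy a small workload and confirm it is scheduled on the worker node:

      kubectl create deployment nginx --image=nginx
      kubectl expose deployment nginx --port=80 --type=NodePort
      kubectl get pods -o wide   # should show the pod running on worker-node1
      kubectl get svc nginx      # note the assigned NodePort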

Frequently Asked Questions (FAQ)

  1. Q: Why do we need to load specific kernel modules?

    • A: These modules are crucial for containerd to function properly, providing necessary functionalities for container operations.
  2. Q: What is the significance of setting cgroupdriver to systemd?

    • A: The kubelet and the container runtime must use the same cgroup driver; on systemd-based systems like CentOS 7, systemd is the recommended driver, so containerd is configured to match what the kubelet expects.
  3. Q: Why is disabling SELinux necessary?

    • A: SELinux, in enforcing mode, may restrict container access to the host filesystem. Disabling it or setting it to permissive mode resolves this issue.
  4. Q: What is the purpose of disabling SWAP?

    • A: The kubelet's memory accounting assumes swap is off; by default the kubelet refuses to start when swap is enabled, so disabling it avoids startup failures and unpredictable behavior.
  5. Q: Why is Docker deprecated, and what alternative is recommended?

    • A: In Kubernetes 1.24 the built-in dockershim was removed, so Docker Engine is no longer supported directly as a runtime. containerd and CRI-O are the recommended alternatives; for Docker enthusiasts, cri-dockerd bridges Docker Engine to the Kubernetes Container Runtime Interface (CRI).
  6. Q: What is the purpose of configuring firewall and iptables settings?

    • A: Configuring firewall and iptables settings ensures that nodes within the cluster can communicate with each other over specific ports, essential for Kubernetes operation.
  7. Q: Why is the Flannel pod in a CrashLoopBackOff state after cluster initialization, and how can I resolve it?

    • A: The Flannel pod might be facing issues with pod CIDR assignment. You can troubleshoot by checking the pod logs, updating the worker node CIDR, and verifying the Flannel pod's recovery. Refer to the troubleshooting section in the guide for detailed steps.
  8. Q: Can I use a different pod network provider instead of Flannel, and how do I configure it?

    • A: Yes, you can choose a different CNI provider such as Calico (CNI itself is the interface these plugins implement, not a provider). To use another provider, apply that provider's YAML manifest instead of the Flannel one after initializing the cluster.
  9. Q: What is the purpose of the kubeadm join command, and how do I add additional worker nodes to the cluster?

    • A: The kubeadm join command is used to add worker nodes to the cluster. After initializing the master node, run the kubeadm join command on each worker node, as provided during the master node initialization process.
  10. Q: How can I scale my Kubernetes cluster by adding more worker nodes?

    • A: Scaling the cluster involves adding more worker nodes. Follow the steps to initialize each new worker node using kubeadm join and ensure they join the existing cluster. Verify their status using kubectl get nodes.
  11. Q: What considerations should be taken into account when choosing a container runtime like containerd or CRI-O?

    • A: Consider factors such as community support, compatibility with Kubernetes, performance, and specific feature requirements. Both containerd and CRI-O are popular choices, and the decision may depend on your specific use case.
  12. Q: How do I troubleshoot issues related to containerd on my Kubernetes nodes?

    • A: If you encounter problems with containerd, check its status using sudo systemctl status containerd. Review containerd logs using journalctl -u containerd for error messages. Restart containerd if needed or consider reinstalling it using your system's package manager.
  13. Q: What should I do if I face issues with the kubeadm init command, and it fails to initialize the master node?

    • A: Check for error messages during the kubeadm init process. Common issues include port conflicts or network problems. Refer to the troubleshooting section for solutions, such as identifying processes using specific ports and resolving conflicts.
  14. Q: Can I use a different operating system for my Kubernetes nodes, or is CentOS 7 mandatory?

    • A: While CentOS 7 is used in the provided guide, Kubernetes supports various operating systems. Ensure compatibility and follow the appropriate installation steps for the chosen operating system.
  15. Q: Is it necessary to set up a pod network like Flannel, or can I use Kubernetes without it?

    • A: A pod network is essential for enabling communication between pods in a Kubernetes cluster. While Flannel is one option, you can choose other CNI plugins, such as Calico, based on your requirements.
  16. Q: How can I upgrade my Kubernetes cluster to a newer version?

    • A: Upgrading a Kubernetes cluster involves updating components like kubelet, kubeadm, and kubectl on all nodes. Follow the Kubernetes documentation for the specific version you are upgrading to, and carefully execute the upgrade steps provided.

Project Dependencies

  • curl:

    • Required for installing dependencies. Ensure it is installed on all nodes.
  • Kubernetes Repository:

    • Configured on all nodes to access Kubernetes packages.
  • Flannel Pod Network:

    • Deployed to enable communication within the cluster.

Conclusion

Finally, we did it...!

Setting up one master and one worker node Kubernetes cluster is a simple and effective way to get started with Kubernetes. It's a great setup for learning and experimenting, allowing you to run and manage applications easily. Now that your cluster is up and running, you're all set to dive deeper into the world of Kubernetes.
