Kubernetes adoption in the cloud: AWS edition

Ms. B
9 min read

The imperative approach, where you issue explicit commands for every change, contrasts with the "declarative" approach, where you:

  1. Define the desired state in configuration files (typically YAML)

  2. Apply those files with kubectl apply

  3. Let Kubernetes figure out what actions are needed to reach that state

Most production Kubernetes environments move toward declarative management (using YAML manifests with tools like kubectl apply) as clusters grow more complex.

In Kubernetes, being "declarative" means you describe the desired state of your system rather than the steps to achieve it. This is a fundamental concept in Kubernetes architecture and operation.

When a Kubernetes cluster operates declaratively:

  1. You define what you want (the "desired state") in configuration files (typically YAML)

  2. Kubernetes continuously works to ensure the actual state matches your desired state

  3. If the actual state drifts from the desired state, Kubernetes automatically takes actions to reconcile them

This differs from imperative approaches where you would specify exact commands to create, modify, or delete resources.
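
As a rough illustration of the difference (using the nginx example that appears later in this post), the two styles look like this:

#!/bin/bash

# Imperative: you tell Kubernetes exactly what to do, step by step
kubectl create deployment nginx --image=nginx:1.14.2
kubectl scale deployment nginx --replicas=3

# Declarative: you describe the desired state in a file and let Kubernetes converge to it
kubectl apply -f deployment.yaml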

The declarative approach offers several advantages:

  • Self-healing: Kubernetes automatically restores desired state if something changes

  • Versioning: Configuration files can be version-controlled like code

  • Reproducibility: The same configuration produces the same result across environments

  • Auditability: Changes to the system are documented in configuration updates

A simple example is declaring "I want 3 replicas of my application running" rather than issuing commands to start 3 individual containers. If one crashes, Kubernetes automatically starts a new one to maintain the desired state.
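
As a quick illustration of that self-healing behaviour (assuming a Deployment whose pods are labelled app=nginx, like the one shown later in this post), you can delete one pod and watch a replacement appear:

#!/bin/bash

# List the current pods, then delete one of them by name
kubectl get pods -l app=nginx
kubectl delete pod <ONE_OF_THE_POD_NAMES>

# The ReplicaSet notices the drift and creates a replacement to restore the replica count
kubectl get pods -l app=nginx --watch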

The Declarative Approach in Kubernetes

Let me explain how the declarative approach works with specific Kubernetes resources and controllers:

Kubernetes Controllers: The Declarative Engine

Kubernetes uses controller components that continuously monitor the cluster and ensure the actual state matches your declared desired state. Here are key examples:

  1. Deployment Controller

    • You declare: "I want 3 replicas of application v2.0 running"

    • The controller continuously monitors and:

      • Creates new pods if fewer than 3 exist

      • Terminates pods if more than 3 exist

      • Replaces pods if they fail or become unhealthy

  2. ReplicaSet Controller

    • Manages the actual pod lifecycle based on the deployment's instructions

    • Ensures the right number of identical pods are running

  3. Node Controller

    • Monitors node health and availability

    • Marks nodes as unavailable when they don't respond

    • Evicts pods from unhealthy nodes

  4. StatefulSet Controller

    • Maintains state and identity for stateful applications

    • Ensures orderly deployment, scaling, and deletion

    • Maintains persistent storage associations

The Control Loop Pattern

All controllers follow a "reconciliation loop" pattern (a command-line sketch of the idea follows the list below):

  1. Observe: Monitor the current state of the system

  2. Analyze: Compare actual state with desired state

  3. Act: Take actions to bring the system closer to desired state

  4. Repeat: Continue this cycle constantly
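
The sketch below is not how controllers are actually implemented (they run inside the cluster and watch the API server), but it mimics the same observe/analyze/act cycle from the command line, assuming a Deployment named nginx-deployment with pods labelled app=nginx:

#!/bin/bash

# Desired state: 3 running nginx pods
DESIRED=3

while true; do
  # Observe: count the pods that are currently Running
  ACTUAL=$(kubectl get pods -l app=nginx --field-selector=status.phase=Running --no-headers | wc -l)

  # Analyze and act: if the actual state drifted from the desired state, reconcile it
  if [ "$ACTUAL" -ne "$DESIRED" ]; then
    kubectl scale deployment nginx-deployment --replicas="$DESIRED"
  fi

  # Repeat the cycle
  sleep 10
done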

Practical Example: Deployment Lifecycle

Here's how a deployment works declaratively:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80

With this declaration:

  1. You apply it: kubectl apply -f deployment.yaml

  2. The Deployment controller sees the request for 3 replicas

  3. It creates a ReplicaSet with a target of 3 replicas

  4. The ReplicaSet controller creates 3 pods

  5. If a pod crashes, the ReplicaSet automatically creates a replacement

  6. If you update to replicas: 5, the controllers automatically scale up (see the command sketch below)
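
A command-line version of that lifecycle (assuming the manifest above is saved as deployment.yaml) might look like this:

#!/bin/bash

# Declare the desired state: 3 replicas
kubectl apply -f deployment.yaml

# The Deployment controller created a ReplicaSet, which created 3 pods
kubectl get replicaset -l app=nginx
kubectl get pods -l app=nginx

# Edit the manifest to "replicas: 5", re-apply, and watch the controllers scale up
kubectl apply -f deployment.yaml
kubectl get pods -l app=nginx --watch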

Benefits in Practice

  1. Infrastructure as Code: Your entire infrastructure is defined in version-controlled files

  2. Idempotency: Running the same declaration multiple times produces the same result

  3. Rollbacks: Easy version control of your configurations enables simple rollbacks

  4. GitOps: Enables automated deployment pipelines based on Git repositories

  5. Drift Detection: Automated tools can compare actual state vs. declared state (see the example commands below)
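
For example, rollbacks and drift detection map directly onto kubectl commands (shown here against the nginx-deployment example; GitOps tools provide richer versions of the same idea):

#!/bin/bash

# Drift detection: show how the live cluster state differs from the declared manifest
kubectl diff -f deployment.yaml

# Rollbacks: inspect the revision history and roll back to the previous revision
kubectl rollout history deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment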

Let’s get started with creating an EKS cluster using the declarative approach.

Prerequisites

  1. AWS Account: Ensure you have an AWS account with appropriate permissions

  2. Install required tools:

    • AWS CLI (v2 recommended)

    • kubectl

    • eksctl

Steps to Create an EKS Cluster

  1. Grant IAM User Admin Permissions

  • Navigate to IAM > Users.

  • Click Create user and give the user a name.

  • On the permissions step, select Attach policies directly.

  • Select AdministratorAccess.

  • Click Next.

  • Review the details and click Create user.

  2. Create access key and secret key

  • Select the IAM User.

  • Select the Security credentials tab.

  • Scroll down to Access keys and select Create access key.

  • Select Command Line Interface (CLI) and check the acknowledgment box at the bottom of the page.

  • Click Next.

  • Click Create access key.

  • Either copy both the access key and the secret access key and paste them into a local text file, or click Download .csv file. We will use the credentials when setting up the AWS CLI.

  • Click Done.

  3. Create an EC2 Instance and Connect

You can check my previous blog on how to create and connect to an EC2 instance.

  4. Download the AWS Command Line Interface (AWS CLI) version 2 installer for Linux x86_64 architecture

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

Next, unzip the file using the command below:

unzip awscliv2.zip

Run the following command to check whether the AWS CLI is already installed and on your PATH:

which aws

If AWS CLI is installed on your system, this command will output the full path where the aws executable is located, such as:

/usr/local/bin/aws

Next, go ahead and run this command:

sudo ./aws/install --bin-dir /usr/bin --install-dir /usr/bin/aws-cli --update

This command installs or updates the AWS Command Line Interface (CLI) with specific installation parameters. Here's what each part does:

  • sudo: Runs the command with elevated permissions (as administrator/root user)

  • ./aws/install: Executes the install script located in the aws directory in your current location

  • --bin-dir /usr/bin: Specifies that the main AWS CLI program should be installed in the /usr/bin directory, making it accessible system-wide

  • --install-dir /usr/bin/aws-cli: Sets the location where all AWS CLI files will be installed to /usr/bin/aws-cli

  • --update: Tells the installer to update an existing AWS CLI installation if one exists

It installs the AWS CLI to system directories, making it available to all users on the system, and updates any existing installation. The specified paths ensure the AWS CLI is in a standard location in the system path.

Now, run aws configure and input all the needed information.
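
A typical aws configure session looks roughly like the following (the values below are placeholders); you can then verify the credentials with aws sts get-caller-identity:

#!/bin/bash

aws configure
# AWS Access Key ID [None]: <YOUR_ACCESS_KEY_ID>
# AWS Secret Access Key [None]: <YOUR_SECRET_ACCESS_KEY>
# Default region name [None]: ca-central-1
# Default output format [None]: json

# Confirm the CLI is talking to AWS with the right identity
aws sts get-caller-identity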

  5. Download kubectl

Then download kubectl with this command:

curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.16.8/2020-04-16/bin/linux/amd64/kubectl

Next, make the kubectl file executable; by default, a downloaded binary like kubectl (the Kubernetes command-line tool) may not have execute permissions.

chmod +x ./kubectl

Copy the binary to a directory in your path by running:

mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin

The above command does three (3) things: 1. creates a bin directory in your home folder if it doesn't already exist (a common place for user-specific executables), 2. copies the kubectl executable into that personal bin directory, and 3. updates your shell's PATH so the system can find and run programs in $HOME/bin without needing the full path.

You can confirm what version of kubectl you have installed by running:

kubectl version --short --client

  6. Download eksctl and move the extracted binary to /usr/bin

#!/bin/bash

# Download the latest eksctl release tarball for your OS and extract it to /tmp

curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

# Move the extracted eksctl binary from /tmp to /usr/bin (requires sudo)

sudo mv /tmp/eksctl /usr/bin
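
You can confirm eksctl was installed correctly by printing its version:

eksctl version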

  7. Provision an EKS Cluster

Here is the eksctl command to provision an EKS cluster with two worker nodes in ca-central-1:

#!/bin/bash

eksctl create cluster \
  --name dev \
  --region ca-central-1 \
  --nodegroup-name standard-workers \
  --node-type t3.medium \
  --nodes 2 \
  --nodes-min 1 \
  --nodes-max 3 \
  --managed

The process of creating the EKS cluster takes 10-15 minutes, as eksctl provisions the control plane and worker nodes, creates the VPC, security groups, and Auto Scaling group, and attaches the worker nodes to the control plane.
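
Since this post is all about the declarative approach, it is worth noting that eksctl can also read the cluster definition from a config file instead of flags. The sketch below mirrors the flags above using the eksctl.io/v1alpha5 ClusterConfig schema (field names assumed from the eksctl documentation):

#!/bin/bash

# Write the cluster definition to a file, then create the cluster from it
cat > cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: dev
  region: ca-central-1
managedNodeGroups:
  - name: standard-workers
    instanceType: t3.medium
    desiredCapacity: 2
    minSize: 1
    maxSize: 3
EOF

eksctl create cluster -f cluster.yaml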

Once the cluster is ready, update your kubeconfig so kubectl can connect to the cluster you just created:

aws eks update-kubeconfig --name dev --region ca-central-1
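
To confirm kubectl can now reach the new cluster, list the worker nodes:

kubectl get nodes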

  8. Create a deployment on your EKS cluster

Let’s install git and clone a GitHub repository

sudo yum install -y git

Cloning a GitHub Repository:

git clone https://github.com/ACloudGuru-Resources/Course_EKS-Basics

Run the commands below to view the contents of the configuration files:

#!/bin/bash

# Change to the Course_EKS-Basics directory
cd Course_EKS-Basics

# Take a look at the deployment file
cat nginx-deployment.yaml

# Take a look at the service file
cat nginx-svc.yaml

Next, create the service and check its status:

#!/bin/bash

# Create the service
kubectl apply -f ./nginx-svc.yaml

# Check the service status
kubectl get service

Then run the following commands to create the deployment and inspect the resources it manages:

#!/bin/bash

# Apply the deployment
kubectl apply -f ./nginx-deployment.yaml

# Check deployment status
kubectl get deployment

# View pods
kubectl get pod

# View ReplicaSets
kubectl get rs

# View nodes
kubectl get node

Let’s access the application through the load balancer. To do this, replace <LOAD_BALANCER_DNS_HOSTNAME> with your load balancer’s DNS hostname.

curl "<LOAD_BALANCER_DNS_HOSTNAME>"

Example:

If your DNS hostname is my-app-123456.elb.amazonaws.com, then the command becomes:

curl "http://my-app-123456.elb.amazonaws.com"

The output of the above command should be the HTML for a default Nginx web page.

In a new browser tab, navigate to the same hostname, where you should see the same Nginx web page:

http://<LOAD_BALANCER_DNS_HOSTNAME>

  9. Test the high availability feature of the EKS Cluster

To do this, go to the AWS Console, open the EC2 Instances page, select a worker node, and choose Stop instance. After a few minutes, EKS launches a new instance to keep your service running.

Let us check the status of the nodes and pods while this happens.

Allow time for changes to propagate.

#!/bin/bash

echo "Step 1: Checking initial node status..."
kubectl get node
echo "Expecting the stopped node to be in NotReady status."

echo "Step 2: Checking pod statuses..."
kubectl get pod
echo "Expecting a mix of Terminating, Running, and Pending pods."

echo "Step 3: Rechecking node status (new node may be initializing)..."
kubectl get node

echo "Step 4: Waiting for 2 minutes for the new node to be ready..."
sleep 120

echo "Step 5: Checking node status again..."
kubectl get node
echo "Expecting at least one node to be in Ready state."

echo "Step 6: Checking pod statuses again..."
kubectl get pod
echo "Expecting some pods to be Running again."

echo "Step 7: Checking service status..."
kubectl get svc

After running the kubectl get service command, copy the external DNS hostname listed in the output and use it to access the application through the load balancer again.

Example output for kubectl get service might look like:

EXTERNAL-IP                            PORT(S)          AGE
a1b2c3d4e5f6g7h8.elb.amazonaws.com     80:32000/TCP     5m

Copy the value under EXTERNAL-IP (for an AWS load balancer, this is a DNS hostname rather than an IP address).

Access the application using curl

Replace the placeholder in the command:

curl "<LOAD_BALANCER_EXTERNAL_IP>"

with the actual DNS hostname you copied, like so:

curl "a1b2c3d4e5f6g7h8.elb.amazonaws.com"

You should see HTML content similar to the default Nginx web page.

Paste the same DNS hostname (or IP address, if given) into your browser's address bar:

http://a1b2c3d4e5f6g7h8.elb.amazonaws.com

You should see the default Nginx web page again. If it doesn't load, wait a few minutes and try again—it can take time for the Load Balancer to become fully operational.

  10. Delete all that has been created

eksctl delete cluster dev --region ca-central-1
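
Deletion also takes several minutes. Once it finishes, you can confirm the cluster is gone by listing the clusters in the region (the command should report that no clusters were found):

eksctl get cluster --region ca-central-1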

Well done. You have successfully created an EKS cluster, deployed an application, and tested the high availability feature of EKS.

Till my next post drops, I hope I have been able to guide you through the process of creating an EKS cluster using the declarative approach.

Thank you.
