Launching an EKS Cluster and Deploying a High-Availability Nginx Application

Joel Thompson

Prerequisites

- An "AWS account": To provision cloud resources.

- Basic "CLI familiarity": Required for interacting with AWS and Kubernetes.

Step 1: Create an IAM User with Admin Permissions

Actions:

1. Create a new IAM user named "eks-user" with the "AdministratorAccess" policy.

- Select "Attach policies directly" and check the "AdministratorAccess" policy.

2. Generate an "Access Key" under Security credentials.

- Select "Command Line Interface (CLI)" as the use case.

- Download the access key ID and secret access key, or save them in a notepad; you'll need them shortly.
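If you prefer the command line, the console steps above can be sketched with the AWS CLI. This is an illustrative equivalent, not part of the guided activity, and it assumes you already have a CLI session with permissions to manage IAM:

```shell
# Create the IAM user used throughout this guide
aws iam create-user --user-name eks-user

# Attach the AdministratorAccess managed policy
aws iam attach-user-policy \
  --user-name eks-user \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess

# Generate an access key; store the returned key ID and secret securely
aws iam create-access-key --user-name eks-user
```

The policy ARN above is the standard AWS-managed AdministratorAccess policy; in production you would attach a narrower policy instead.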

Why This Matters:

- Security: Avoid using your AWS root account for daily tasks; IAM users minimize risk.

- Least Privilege: While "AdministratorAccess" is used here for simplicity, restrict permissions further in production.

- Auditability: Dedicated users make it easier to track actions in your AWS environment.

Step 2: Launch an EC2 Instance & Configure CLI Tools

Actions:

1. Launch a "t2.micro" instance with "Amazon Linux 2" and a key pair.

- In the Key pair (login) box, select Create new key pair and give it a name.

- Click Create key pair. The key pair will download for later use.

- Expand Network settings and click Edit. In the Network settings box:

- Network: leave as default. Subnet: leave as default. Auto-assign Public IP: select Enable.

  • Once the instance is fully created, check the checkbox next to it and click Connect at the top of the window.

  • In the Connect to your instance dialog, select EC2 Instance Connect (browser-based SSH connection), then click Connect.

2. Install/update "AWS CLI v2", "kubectl", and "eksctl".

(a) AWS CLI v2

- Run aws --version. Amazon Linux 2 ships with an older version.

- Download v2: curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

- Unzip the file: unzip awscliv2.zip

- See where the current AWS CLI is installed: run which aws; it should be /usr/bin/aws.

- Update it: sudo ./aws/install --bin-dir /usr/bin --install-dir /usr/bin/aws-cli --update

- Check the version again: aws --version. It should now be v2.

- Configure the CLI: run aws configure.

- For AWS Access Key ID, paste in the access key ID you copied earlier.

- For AWS Secret Access Key, paste in the secret access key you copied earlier.

- For Default region name, enter your region, e.g. us-east-1.

- For Default output format, enter json.
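The same configuration can also be applied non-interactively with aws configure set, which is handy for scripting. The key values below are placeholders; substitute your real credentials:

```shell
# Non-interactive equivalent of the `aws configure` prompts.
# Replace the placeholder values with your actual credentials.
aws configure set aws_access_key_id     AKIAEXAMPLEKEYID
aws configure set aws_secret_access_key exampleSecretAccessKey
aws configure set region                us-east-1
aws configure set output                json

# Verify what was stored
aws configure list
```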

(b) kubectl

- Download kubectl: curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.16.8/2020-04-16/bin/linux/amd64/kubectl

- Apply execute permissions to the binary: chmod +x ./kubectl

- Copy the binary to a directory in your PATH: mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin

- Verify kubectl is installed: kubectl version --short --client

(c) eksctl

- Download eksctl: curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

- Move the extracted binary to /usr/bin: sudo mv /tmp/eksctl /usr/bin

- Check the version: eksctl version

Why This Matters (kubectl setup):

| Command | Purpose | Why It Matters |
| --- | --- | --- |
| curl -o kubectl https://amazon-eks.s3... | Downloads the kubectl binary for a specific Kubernetes version | Ensures you have the CLI tool needed to interact with your Kubernetes cluster |
| chmod +x ./kubectl | Grants execute permissions to the binary | Required to run the command from the terminal |
| mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin | Moves kubectl to a directory in your system's PATH and updates your environment | Lets you run kubectl from anywhere in your terminal |
| kubectl version --short --client | Verifies the installed kubectl version | Confirms successful installation and helps troubleshoot client-server mismatches |

Why This Matters (eksctl setup):

| Command | Purpose | Why It Matters |
| --- | --- | --- |
| curl --silent --location "https://github.com/weaveworks/eksctl... | Downloads the latest eksctl tarball based on OS | Acquires the tool for creating and managing EKS clusters easily |
| tar xz -C /tmp | Extracts the tarball to /tmp | Prepares the binary for relocation to a permanent location |
| sudo mv /tmp/eksctl /usr/bin | Moves the eksctl binary to /usr/bin | Makes the command globally available in your shell |
| eksctl version | Displays the version of eksctl | Confirms installation and helps ensure compatibility with your EKS setup |

Step 3: Provision an EKS Cluster

Actions:

1. Provision an EKS cluster with three t3.medium worker nodes in your region, e.g. eu-north-1:

eksctl create cluster --name dev --region eu-north-1 --nodegroup-name standard-workers --node-type t3.medium --nodes 3 --nodes-min 1 --nodes-max 4 --managed

Note

If your EKS resources can't be deployed due to AWS capacity issues, delete your eksctl-dev-cluster CloudFormation stack and retry the command using the --zones parameter with the suggested availability zones from the CREATE_FAILED message.
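As an illustration, such a retry might look like this. The zones below are placeholders; substitute the ones suggested in your CREATE_FAILED event:

```shell
# Retry cluster creation pinned to specific availability zones.
# Replace the --zones values with those from your CREATE_FAILED message.
eksctl create cluster --name dev --region eu-north-1 \
  --zones eu-north-1a,eu-north-1b,eu-north-1c \
  --nodegroup-name standard-workers --node-type t3.medium \
  --nodes 3 --nodes-min 1 --nodes-max 4 --managed
```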

Note

It will take 10–15 minutes since it's provisioning the control plane and worker nodes, attaching the worker nodes to the control plane, and creating the VPC, security group, and Auto Scaling group.

● In the AWS Management Console, navigate to CloudFormation and take a look at what’s going on there. Select the eksctl-dev-cluster stack (this is our control plane). Click Events, so you can see all the resources that are being created.

● We should then see another new stack being created — this one is our node group.

Once both stacks are complete, navigate to Elastic Kubernetes Service > Clusters. Click the listed cluster.

● If you see a "Your current user or role does not have access to Kubernetes objects on this EKS cluster" message, just ignore it, as it won't impact the next steps of the activity.

● Click the Compute tab (under Configuration), and then click the listed node group. There, we'll see the Kubernetes version, instance type, status, etc. Click dev in the breadcrumb navigation link at the top of the screen.

● Click the Networking tab (under Configuration), where we'll see the VPC, subnets, etc. Click the Logging tab (under Configuration), where we'll see the control plane logging info.

Note

The control plane is abstracted — we can only interact with it using the command line utilities or the console. It’s not an EC2 instance we can log into and start running Linux commands on.

Navigate to EC2 > Instances, where you should see the instances have been launched. Close out of the existing CLI window, if you still have it open.

Select the original t2.micro instance, and click Connect at the top of the window.

- In the Connect to your instance dialog, select EC2 Instance Connect (browser-based SSH connection), then click Connect.

- In the CLI, check the cluster: eksctl get cluster

- Enable kubectl to connect to the cluster: aws eks update-kubeconfig --name dev --region eu-north-1

Why It Matters

Without running this command, kubectl has no idea where your EKS cluster is or how to authenticate with it. This is the bridge between AWS and Kubernetes CLI.

aws eks update-kubeconfig updates your local kubeconfig file with the credentials and endpoint info for the EKS cluster named dev in the eu-north-1 region.

Once it's updated, your kubectl commands will automatically know:

  • Which cluster to target

  • Where the control plane lives (endpoint)

  • What authentication tokens to use
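A quick way to confirm the kubeconfig update worked is to check the active context and list the nodes. This assumes the cluster name dev from earlier:

```shell
# Show which cluster kubectl is currently pointed at;
# the context should reference the dev cluster's ARN.
kubectl config current-context

# List the worker nodes registered with the control plane;
# expect three nodes in the Ready state.
kubectl get nodes
```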

Step 4: Deploy an Nginx Application from a Git Repository

Actions:

1. Install Git:

sudo yum install -y git

Download the course files: git clone https://github.com/ACloudGuru-Resources/Course_EKS-Basics

Change directory: cd Course_EKS-Basics

2. Apply the "nginx-svc.yaml" (LoadBalancer) and "nginx-deployment.yaml" manifests.

Take a look at the deployment file: cat nginx-deployment.yaml

Take a look at the service file: cat nginx-svc.yaml
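For orientation, here is a minimal sketch of what a LoadBalancer Service and an Nginx Deployment typically look like. The actual files in the repository may differ in names, labels, replica count, and image version:

```yaml
# Illustrative sketch only; see the repository files for the real manifests.
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: LoadBalancer        # tells EKS to provision an AWS load balancer
  selector:
    app: nginx              # routes traffic to pods with this label
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3               # spread pods across the worker nodes
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
```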

- Create the service: kubectl apply -f ./nginx-svc.yaml

- Check its status: kubectl get service

Copy the external DNS hostname of the load balancer, and paste it into a text file, as we'll need it in a minute.

- Create the deployment:

kubectl apply -f ./nginx-deployment.yaml

- Check its status: kubectl get deployment

- View the pods: kubectl get pod

- View the ReplicaSets: kubectl get rs

Access the application through the load balancer, substituting the DNS hostname you copied earlier (it might take a couple of minutes for the load balancer to come up): curl "<DNS name>"

The output should be the HTML for a default Nginx web page.

- In a new browser tab, navigate to the same DNS hostname, where we should then see the same Nginx web page.

Why This Matters:

This step tests communication between your local environment (via kubectl) and your remote EKS cluster running on AWS.

Applying nginx-svc.yaml and nginx-deployment.yaml doesn't just deploy sample resources; it tests whether:

  • Your local tools (kubectl) can successfully interact with the cluster's API

  • Resources like Deployments and Services are created and reflected in the cluster's state

  • External services (like a LoadBalancer) are provisioned and exposing the app correctly

  • Deployment: Replicates pods across nodes; Kubernetes self-heals if pods/nodes fail.

  • Validation: Use "curl" to confirm your app is reachable.

Step 5: Test High Availability

Actions:

1. Stop a worker node via the EC2 console.

2. Monitor Kubernetes rescheduling pods to healthy nodes.
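The two actions above can be observed from the CLI. A sketch, assuming the deployment from Step 4 is running:

```shell
# Watch the pods while you stop a worker node in the EC2 console.
# As the node goes NotReady, Kubernetes reschedules its pods
# onto the remaining healthy nodes.
kubectl get pods -o wide --watch

# In a second terminal, watch the node status change:
kubectl get nodes
```

The NODE column in the watch output shows replacement pods landing on different worker nodes than the one you stopped.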

Why This Matters:

- Resilience: EKS automatically replaces failed nodes/pods, minimizing downtime.

- Real-World Proof: Simulating failures validates your cluster’s outage-handling capability.

Conclusion

You've successfully launched an Amazon EKS cluster and deployed a highly available NGINX application using Kubernetes manifests. From setting up tooling like kubectl and eksctl, to defining services and deployments declaratively, you’ve created a resilient infrastructure that’s ready to scale. This marks a strong foundation in cloud-native operations and container orchestration—and it’s just the beginning.
