A Comprehensive Guide to Deploying a Secure and Scalable EKS Cluster on AWS

Hello, and thanks for stopping by my blog today. Today we are working on Elastic Kubernetes Service (EKS) clusters, managed by AWS.

As modern applications increasingly rely on containerized microservices, Kubernetes has become the standard for orchestrating them at scale. However, managing Kubernetes infrastructure manually can be complex, time-consuming, and error-prone.

This guide provides a comprehensive, step-by-step walk-through for deploying a secure and scalable Amazon EKS (Elastic Kubernetes Service) cluster on AWS. Whether you're aiming to build a reliable production environment or enhance your DevOps skill set, this project covers the core concepts, best practices, and deployment strategies you need to get started with confidence.

Let's dive in.

Step 1.

Select a region. My region for this hands-on project will be US East (N. Virginia), which is us-east-1. Next, we create an IAM user with admin permissions. How do we create an IAM user? In the search bar, search for IAM > Users and click Add users. In the User name field, give it a name; for the course of this project we will use "k8-admin". Click Next, select Attach policies directly, select AdministratorAccess, click Next, then click Create user.
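As a side note, if you already have a machine with the AWS CLI configured, the same user can be created from the command line. This is just a sketch of the equivalent calls; the console flow above is what we follow in this project:

    # Create the IAM user (CLI alternative to the console steps above)
    aws iam create-user --user-name k8-admin
    # Attach the AdministratorAccess managed policy directly to the user
    aws iam attach-user-policy \
        --user-name k8-admin \
        --policy-arn arn:aws:iam::aws:policy/AdministratorAccess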

Step 2.

Select the newly created user "k8-admin" and head to the Security credentials tab; we have to create an access key. Scroll down to Access keys and select Create access key. Select the Command Line Interface (CLI) option, check the acknowledgment box at the bottom of the page, click Next, then click Create access key. Copy out the access key and the secret access key, or download the .csv file. We will use these credentials when setting up the AWS CLI.
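The access key can also be generated from the CLI on an already-configured machine; again, just an illustration of the same step:

    # Returns the AccessKeyId and SecretAccessKey in the JSON response;
    # store them safely, as the secret is only shown once
    aws iam create-access-key --user-name k8-admin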

Step 3.

The next step is to spin up an EC2 instance, with the following steps:

In the search box, navigate to EC2 > Instances and click Launch instance. Give the instance a name; I will use "Eksinstance". In the Amazon Machine Image (AMI) dropdown, select the Amazon Linux 2 AMI, and leave t2.micro selected under Instance type. In the Key pair (login) box, select Create new key pair and give it a name; for the course of this project I will use "ekskeypair". Click Create key pair, which downloads the key pair for later use. Expand Network settings and click Edit. Leave Network and Subnet as default, and for Auto-assign public IP select Enable. With the Create security group option selected, give the security group a name; I will give mine "EKSS_G". Then click Launch instance to launch a fresh EC2 instance.
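For reference, a hypothetical CLI equivalent of that console launch looks like this. The AMI ID below is a placeholder, so look up the current Amazon Linux 2 AMI for us-east-1 before running it:

    # Sketch of the same launch from the CLI; replace the placeholder AMI ID
    aws ec2 run-instances \
        --image-id ami-xxxxxxxxxxxxxxxxx \
        --instance-type t2.micro \
        --key-name ekskeypair \
        --associate-public-ip-address \
        --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=Eksinstance}]'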

Step 4.

Click on the instance ID, then wait a few minutes for the EC2 instance to enter its running state. After that, check the box next to the instance name and click Connect. Under the EC2 Instance Connect tab, our connection option will be Connect using EC2 Instance Connect; click Connect and wait for the terminal to open in a new browser tab.

Step 5.

The next step is to check the AWS CLI version attached to the EC2 instance. The command to check the version is "aws --version"; it should be an older version (v1). We have to update it to the latest version, v2. First download the installer with the command "curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"", then unzip the file with "unzip awscliv2.zip". To verify the path where the current AWS CLI is installed, we run "which aws"; it shows us it lives at "/usr/bin/aws", so we update the CLI in place with "sudo ./aws/install --bin-dir /usr/bin --install-dir /usr/bin/aws-cli --update". We will get a prompt that we can now run "/usr/bin/aws --version", which shows us AWS CLI version 2 is installed successfully.
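For convenience, here is the whole update sequence from this step in one block:

    # Check the preinstalled CLI version (Amazon Linux 2 ships with v1)
    aws --version
    # Download and unzip the AWS CLI v2 installer
    curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
    unzip awscliv2.zip
    # Confirm where the current CLI lives so the update installs to the same path
    which aws
    # Install v2 over the existing v1 location
    sudo ./aws/install --bin-dir /usr/bin --install-dir /usr/bin/aws-cli --update
    # Verify the upgrade
    /usr/bin/aws --version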

Step 6.

The next step is to configure the AWS CLI with the command "aws configure". It will ask for the AWS Access Key ID; paste in the access key ID we copied earlier and press Enter. It will then ask for the AWS Secret Access Key; paste in the secret access key we copied earlier and press Enter. For the default region name, enter your region; mine is "us-east-1", yours could be whichever region you selected. Press Enter, then for the default output format, enter json.
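The interactive session looks roughly like this (the values after the prompts are yours):

    aws configure
    # AWS Access Key ID [None]:     <paste the access key ID from Step 2>
    # AWS Secret Access Key [None]: <paste the secret access key from Step 2>
    # Default region name [None]:   us-east-1
    # Default output format [None]: json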

After that we will download kubectl. The command to download kubectl is "curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.16.8/2020-04-16/bin/linux/amd64/kubectl". Next we apply execute permissions to the binary with "chmod +x ./kubectl", meaning we are making the kubectl binary we just downloaded executable. The next thing we have to do is copy the binary to a directory in our path; the command is "mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin". To ensure that kubectl is truly downloaded and installed on our server, we run "kubectl version --short --client"; we will notice we have kubectl installed at version v1.16.8-eks-e16311.
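Put together, the kubectl installation from this step is:

    # Download the kubectl binary pinned to the EKS 1.16.8 release used in this course
    curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.16.8/2020-04-16/bin/linux/amd64/kubectl
    # Make the binary executable
    chmod +x ./kubectl
    # Copy it to a directory on our PATH
    mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
    # Verify the install
    kubectl version --short --client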

Step 7.

The next thing we have to do is download eksctl. The command is "curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp". Then we move the extracted binary to /usr/bin with "sudo mv /tmp/eksctl /usr/bin", and check the version of the eksctl we just installed with "eksctl version".
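The eksctl installation in one block:

    # Download the latest eksctl release and extract it to /tmp
    curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
    # Move the binary onto the PATH
    sudo mv /tmp/eksctl /usr/bin
    # Verify the install
    eksctl version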

Step 8.

This next step is very important: here we want to provision an EKS cluster with 3 worker nodes in our region, us-east-1. The command to run is "eksctl create cluster --name dev --region us-east-1 --nodegroup-name standard-workers --node-type t3.medium --nodes 3 --nodes-min 1 --nodes-max 4 --managed". This means we will create a cluster named "dev" in our region us-east-1. Our node group name is "standard-workers", which helps identify and manage the nodes later. The node type t3.medium sets the EC2 instance type for each worker node, and the cluster starts with 3 nodes when it is created. A minimum of 1 node and a maximum of 4 nodes set the bounds for autoscaling, and --managed tells AWS to create a managed node group, meaning AWS will handle updates and health monitoring for us.
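Here is the same command broken across lines so each flag from the explanation above is easy to read:

    # Provision the cluster; this typically takes a while, so be patient
    eksctl create cluster \
      --name dev \
      --region us-east-1 \
      --nodegroup-name standard-workers \
      --node-type t3.medium \
      --nodes 3 \
      --nodes-min 1 \
      --nodes-max 4 \
      --managed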

Step 9.

Verifying Your EKS Cluster Setup via the AWS Console

Once you've deployed your EKS cluster using eksctl, it's important to verify that everything has been provisioned correctly. Let’s walk through the AWS Console to confirm the control plane, worker nodes, and networking setup. EKS uses AWS CloudFormation under the hood to spin up the required infrastructure.

Go to the AWS Management Console and navigate to CloudFormation. Locate the stack named eksctl-dev-cluster — this represents your control plane. Click on the stack name, then go to the Events tab to view the creation progress of resources (VPCs, IAM roles, etc.). You should also see a second stack being created — this one is for your worker node group. Wait for both stacks to reach the CREATE_COMPLETE status.
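If you prefer the terminal, you can poll the same stack status from the CLI. This is optional and assumes the stack name eksctl generated above:

    # Check the control plane stack; repeat with the node group stack name
    # shown in the CloudFormation console
    aws cloudformation describe-stacks \
        --stack-name eksctl-dev-cluster \
        --query "Stacks[0].StackStatus"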

Step 10.

Review Your EKS Cluster in the Console

Now let’s explore the actual EKS cluster you just created: Navigate to Elastic Kubernetes Service > Clusters. Click on your cluster name (dev in this example).

If you see a message like "Your current user or role does not have access to Kubernetes objects on this EKS cluster", don’t panic — this only means you haven't configured kubectl access yet. It won't block the next steps.

Explore Cluster Configuration

There are several useful tabs within the cluster’s dashboard:

Compute tab: click on the listed node group (e.g., standard-workers). You'll see details like:

- Kubernetes version
- EC2 instance type (e.g., t3.medium)
- Node group status and scaling configuration

Networking tab: shows the VPC, subnets, and other networking resources created for the cluster.

Logging tab: displays control plane logging settings. These can be configured to push logs to CloudWatch for easier monitoring.

Note: The EKS control plane is fully managed and abstracted. You can't SSH into it like a regular EC2 instance. You interact with it using tools like kubectl, eksctl, or the AWS Console.
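Since there is no SSH access to the control plane, the CLI is the closest you get to inspecting it directly; for example:

    # Ask EKS for the managed control plane's status (expect "ACTIVE")
    aws eks describe-cluster --name dev --region us-east-1 --query "cluster.status"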

Step 11.

Check Your Worker Nodes in EC2

EKS worker nodes are just EC2 instances running inside your AWS account. Close out of the existing CLI window if you still have it open, then navigate to EC2 > Instances. You should see the EC2 instances launched for your EKS node group (e.g., three t3.medium instances if you followed the earlier setup).

To confirm they're reachable: select one of the worker node instances (the EKS control plane is managed and won't appear here). Click Connect at the top of the page. In the Connect to instance dialog, select EC2 Instance Connect (browser-based SSH), then click Connect to open a terminal session in your browser. This confirms that your worker nodes are live and reachable, and you're all set for deploying workloads onto your EKS cluster!
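You can also list the worker node instances from the CLI. This sketch assumes the kubernetes.io/cluster/dev tag that eksctl applies to cluster instances by default:

    # List running instances tagged as belonging to the "dev" cluster
    aws ec2 describe-instances \
        --filters "Name=tag-key,Values=kubernetes.io/cluster/dev" \
                  "Name=instance-state-name,Values=running" \
        --query "Reservations[].Instances[].InstanceId"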

Step 12.

In our browser-based terminal, check the cluster with the command "eksctl get cluster". Then, to enable kubectl to connect to our cluster, we run "aws eks update-kubeconfig --name dev --region us-east-1". Next, to create a deployment on our EKS cluster, we have to install Git with "sudo yum install -y git", and then download the course files with "git clone https://github.com/ACloudGuru-Resources/Course_EKS-Basics". Then we change directory with "cd Course_EKS-Basics". We'll take a look at the deployment file with "cat nginx-deployment.yaml" and at the service file with "cat nginx-svc.yaml".
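The full sequence for this step:

    # Confirm the cluster exists and point kubectl at it
    eksctl get cluster
    aws eks update-kubeconfig --name dev --region us-east-1
    # Install Git and grab the course files
    sudo yum install -y git
    git clone https://github.com/ACloudGuru-Resources/Course_EKS-Basics
    cd Course_EKS-Basics
    # Review the deployment and service manifests
    cat nginx-deployment.yaml
    cat nginx-svc.yaml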

Then we create the service with "kubectl apply -f ./nginx-svc.yaml". The next thing to do is check whether it was created successfully with "kubectl get service"; note the load balancer's external DNS name in the output, as we'll need it shortly. Then create the deployment with "kubectl apply -f ./nginx-deployment.yaml" and check its status with "kubectl get deployment"; we'll discover we have 3/3 ready nginx deployments. To view the pods, run "kubectl get pod"; to view the ReplicaSets, run "kubectl get rs", which will show our deployment replicated in 3; and to view our nodes, run "kubectl get node". Lastly, access the application using the load balancer's external DNS name from the service output; in my case it was http://a28aa2a796d4f43e09bf9bca0f6ba134-85784136.us-east-1.elb.amazonaws.com. Paste yours into a fresh web browser and you will get the Welcome to nginx! page.
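And the deployment commands in one place:

    # Create the service and note the load balancer's EXTERNAL-IP column
    kubectl apply -f ./nginx-svc.yaml
    kubectl get service
    # Create the deployment and confirm 3/3 replicas are ready
    kubectl apply -f ./nginx-deployment.yaml
    kubectl get deployment
    # Inspect the pods, ReplicaSets, and nodes backing it
    kubectl get pod
    kubectl get rs
    kubectl get node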

Step 13.

Testing High Availability in Your EKS Cluster

One of the powerful features of Amazon EKS is high availability: the ability to recover automatically when a node fails. Let's simulate a failure scenario and watch Kubernetes handle it gracefully.

Steps to Simulate Node Failure:

Go to the AWS Management Console and navigate to the EC2 > Instances page. Select the worker node instances associated with your EKS cluster, click Instance state, choose Stop instance, and confirm the action by clicking Stop in the dialog box. Wait a few minutes; EKS (via the managed node group) will automatically detect the failure and begin launching replacement nodes to maintain the desired capacity. Open your terminal and verify the cluster status with "kubectl get nodes". You'll observe that the stopped nodes are marked NotReady, and new nodes will soon appear in a Ready state, confirming that the self-healing nature of Kubernetes on EKS is working as expected. Then run "kubectl get pod" to check the pods; we will see a Running state for the 3 active ones and a Terminating state for the inactive pods.
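From the terminal, the verification loop is simply:

    # Stopped nodes show NotReady; replacements join as Ready
    kubectl get nodes
    # Pods on the lost nodes terminate and are rescheduled
    kubectl get pod
    # Optionally keep watching until everything settles
    kubectl get nodes --watch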

In conclusion, successfully deploying a secure and scalable EKS cluster on AWS is a key milestone in building resilient, cloud-native infrastructure. Throughout this guide, we've explored the foundational components of EKS, implemented best practices for managing worker nodes, and tested the high availability features that ensure workload continuity. By following a structured, hands-on approach, you can confidently apply these concepts in real-world environments and strengthen your understanding of Kubernetes in the AWS ecosystem.

See you on the next one.

Ciao.
