Deploying a Scalable App on AWS EKS

Elastic Kubernetes Service (EKS) is a managed AWS service that runs the master nodes (control plane) of your Kubernetes setup for you. It scales and self-heals the control plane automatically and does much of the heavy lifting behind the scenes. EKS is widely used by big tech companies to manage their applications via clusters, replicas, and other Kubernetes resources.

However, deploying applications on EKS involves understanding Ingress controllers, service ports, node groups, and much more.

In this article, I will help you deploy a static application on AWS EKS. By the end, you'll have a practical understanding of how Kubernetes operates in a cloud environment.

This mini project may incur a small AWS cost, so don’t forget to delete the EKS cluster once you're done to avoid unnecessary charges.

Prerequisites 📝

  1. Basic knowledge of Kubernetes - You can read my article on Understanding Kubernetes

  2. kubectl – A command line tool for working with Kubernetes clusters.

  3. eksctl – A command line tool for working with EKS clusters that automates many individual tasks.

  4. AWS CLI - A command line tool for working with AWS services, including Amazon EKS.

Quick Refresher ⏪

Before diving into the practical part, let’s quickly recall these important points:

  • Kubernetes (K8s) uses a master-slave (control plane/worker) architecture, typically with an odd number of master nodes (often 3) so etcd can maintain quorum.

  • Master nodes handle user interactions (API requests) and schedule workloads onto the data plane, i.e., the data (worker) nodes.

  • A master node runs etcd, a scheduler, the API server, and a controller manager.

  • A data node runs kube-proxy, a container runtime, and a kubelet.

  • We could build this project using kubeadm (which is tedious) or kops (which isn’t ideal for debugging).

  • Therefore, EKS is the best choice here, as it is mostly managed by AWS and provides an easier way to connect worker nodes.

There are 2 ways to connect your worker nodes:

  1. EC2 Instances - You manage the worker nodes yourself.

  2. AWS Fargate - AWS manages the worker nodes (serverless).

There are 3 types of K8s Services:

  1. ClusterIP - Exposes the service internally within the cluster (default type).

  2. NodePort - Exposes the service outside the cluster using a node’s IP and port.

  3. LoadBalancer - Provisions an external load balancer, making the service accessible from the internet via a public IP.
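To make this concrete, here is a minimal Service manifest (the names `my-app-svc` and `app: my-app` are hypothetical, not from this project); changing `type` switches between the three behaviours above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc          # hypothetical name
spec:
  type: ClusterIP           # change to NodePort or LoadBalancer to expose externally
  selector:
    app: my-app             # routes to Pods labelled app: my-app
  ports:
    - port: 80              # port the Service listens on
      targetPort: 8080      # port the container listens on
```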

⭐Ingress - A rule-set that controls how HTTP/HTTPS traffic gets routed to your service.

⭐Ingress Controller - K8s controller that watches for Ingress resources, reads the rules, and configures itself (like a reverse proxy or load balancer) to expose the services outside the cluster over HTTP/HTTPS. Nginx Ingress Controller and AWS ALB Controller are some popular Ingress controllers.
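As an illustration, a minimal Ingress that the AWS Load Balancer Controller can act on might look like this (the resource and service names are hypothetical; the annotations shown are the standard ALB controller ones):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress      # hypothetical name
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing   # public-facing ALB
    alb.ingress.kubernetes.io/target-type: ip           # required for Fargate Pods
spec:
  ingressClassName: alb     # handled by the AWS Load Balancer Controller
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app-svc   # hypothetical Service name
                port:
                  number: 80
```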

Why use eksctl instead of the AWS Console?

  1. Speed & Efficiency - Creating a cluster with the Console is time-consuming. With eksctl, a full production-ready EKS cluster can be spun up in one command.

  2. Scriptable for CI/CD - You can integrate eksctl commands into CI/CD pipelines (e.g., GitHub Actions, Jenkins), which isn't possible with the point-and-click AWS Console.

  3. Automation & Infrastructure as Code - eksctl allows you to define your EKS cluster configuration in a YAML file. In contrast, the AWS Console is manual — you'd have to click through forms every time.
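As a sketch of that YAML-driven approach, an eksctl config file matching this article's setup could look like the following (run with `eksctl create cluster -f cluster.yaml`); the profile name and namespace mirror the ones used later in this article:

```yaml
# cluster.yaml - declarative equivalent of the eksctl commands used below
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-east-1
fargateProfiles:
  - name: qreator-app
    selectors:
      - namespace: qreator-fargate   # Pods in this namespace run on Fargate
```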

Now, Let’s Deploy the App -

Step 1 - Creating the Cluster 🏗️

Make sure you have the AWS CLI installed and set up with your AWS Access Key ID and Secret Access Key. For more information, visit this link.

To create a cluster, use this command -

eksctl create cluster --name my-cluster --region us-east-1 --fargate

We create the cluster with the specified name in the us-east-1 region. The --fargate flag tells eksctl to run our data (worker) nodes on AWS Fargate.

Cluster creation can take 15-20 minutes, as eksctl provisions the control plane, VPC, and related resources through CloudFormation.

We could create the cluster through the AWS Console, but that is tedious; eksctl takes care of the hassle.

Below are the screenshots from AWS EKS Console:

Step 2 - Configure kubectl 🖋️

To interact with your EKS cluster using kubectl, you need to configure your local Kubernetes context. Run the following command to update your kubeconfig file:

aws eks update-kubeconfig --name my-cluster --region us-east-1

Step 3 - Create a Fargate Profile 📦

eksctl create fargateprofile \
    --cluster my-cluster \
    --region us-east-1 \
    --name qreator-app \
    --namespace qreator-fargate

This command sets up a Fargate profile so that any Pod created in the specified namespace will automatically run on AWS Fargate instead of EC2 (on Fargate, each Pod gets its own serverless compute).

You can confirm the creation of your Fargate profile by clicking on the Compute tab in your cluster.

Step 4 – Deploy the YAML file ⬆️

To deploy all Kubernetes resources (Namespace, Deployment, Service, and Ingress) described in the manifest, use the following command. You can preview the contents by opening the link below in your browser; each component in the YAML file is separated by --- for clarity.

kubectl apply -f https://raw.githubusercontent.com/Shah1011/QReator/refs/heads/main/qreator-K8.yaml

Although you can define each component in its own YAML file, for smaller applications it is common to keep them all in a single file.
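To give a sense of that single-file structure, here is a simplified sketch (not the exact contents of the linked manifest; the replica count and image are placeholders) showing how the ---separated components fit together:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: qreator-fargate
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: qreator-app
  namespace: qreator-fargate
spec:
  replicas: 2                      # illustrative replica count
  selector:
    matchLabels:
      app: qreator-app
  template:
    metadata:
      labels:
        app: qreator-app
    spec:
      containers:
        - name: qreator
          image: <your-image>      # placeholder image reference
          ports:
            - containerPort: 80
---
# ...followed by the Service and Ingress definitions, also separated by ---
```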

After applying the manifest, you can inspect the resources with the following kubectl commands -

  • kubectl get pods -n qreator-fargate

  • kubectl get svc -n qreator-fargate

  • kubectl get ingress -n qreator-fargate

In the image above, the Address field is empty because we haven't created the Ingress controller or the Application Load Balancer yet. Once we finish deploying our application, we will check it again.

Note: The Ingress controller is used to configure the Application Load Balancer based on the rules set in the Ingress component we defined in the cluster.

Step 5 - Configure IAM OIDC Provider 🔐

This step is necessary before creating the Application Load Balancer. The AWS Load Balancer Controller runs as Pods inside the cluster, and those Pods need permission to call AWS APIs in order to create and configure the ALB. Associating an IAM OIDC (OpenID Connect) provider with the cluster lets Kubernetes service accounts assume IAM roles.

In EKS, whenever Pods need to talk to other AWS services, AWS recommends granting access through IAM roles in this way.

eksctl utils associate-iam-oidc-provider --cluster my-cluster --approve

Step 6 - Create Policy and Roles 🧍🏻

To use an ALB (Application Load Balancer) with Amazon EKS, the AWS Load Balancer Controller requires:

  • IAM permissions to manage ALBs, security groups, and EC2 resources. This is typically done via IAM Role for Service Account (IRSA) with an OIDC provider.

  • Security groups:

    • The ALB gets its own Security Groups.

    • The EKS worker node SG must allow incoming traffic from the ALB SG on the application's listening ports (e.g., 80, 443).

Download the standard IAM policy from the ALB Controller documentation using one of the following commands:

For Linux or macOS:

curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.11.0/docs/install/iam_policy.json

For Windows:

Invoke-WebRequest -Uri "https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.11.0/docs/install/iam_policy.json" -OutFile "iam_policy.json"

After downloading the policy, create it in your AWS account using the command below -

aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://iam_policy.json

Then, create an IAM service account for the controller -

eksctl create iamserviceaccount \
  --cluster=my-cluster \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --role-name AmazonEKSLoadBalancerControllerRole \
  --attach-policy-arn=arn:aws:iam::<your-aws-account-id>:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve

You can find your AWS account ID in the console shown in the image. Replace <your-aws-account-id> with your actual ID.

This creates a service account with IAM permissions inside the cluster in the kube-system namespace, where the AWS Load Balancer Controller usually runs, and attaches the policy you created earlier.
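Under the hood, the resulting ServiceAccount looks roughly like this; the role is linked via the standard eks.amazonaws.com/role-arn annotation, and the controller's Pods automatically receive temporary credentials for that role:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-load-balancer-controller
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<your-aws-account-id>:role/AmazonEKSLoadBalancerControllerRole
```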

Step 7 - Deploy ALB Controller 🎛️

Using the Helm package manager, add the official AWS EKS chart index:

helm repo add eks https://aws.github.io/eks-charts

Then update it with:

helm repo update eks

After running these commands you’ll be able to install AWS Load Balancer Controller.

Now, install the Load Balancer Controller:

helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system \
  --set clusterName=my-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=us-east-1 \
  --set vpcId=<your-vpc-id>

You can copy your VPC ID from the Networking tab in your AWS EKS Console, as shown in the image below.

Verify that the deployments are running:

kubectl get deployment -n kube-system aws-load-balancer-controller

Finally, check the deployed application address through the command:

kubectl get ingress -n qreator-fargate

This lists the Ingress resources in the qreator-fargate namespace; the -n qreator-fargate flag restricts the search to that namespace, which is where the application and its Ingress resource are defined.

Note: You only need to install the Ingress controller once per cluster; it then serves all Ingress resources in that cluster.

Now, go to your AWS Console, search for Load Balancers, and select the one we just created. From there, copy the DNS name (URL) displayed; it will start with http:// since we haven't set up TLS (HTTPS) yet. Paste the URL into your browser to access the application.

Congratulations on making it through your first AWS EKS deployment! 🎊

Now, don't forget to delete your cluster and its CloudFormation stack before you get charged for this project!

Use this command - eksctl delete cluster --name my-cluster --region us-east-1

Or, manually delete it from the console:

Make sure to delete not only the clusters but also the Load Balancer, policies, roles, and CloudFormation stacks to avoid getting billed.

Conclusion

Running Kubernetes in the cloud isn't just powerful—it's actually pretty approachable when you break it down into steps. By letting AWS manage the hard stuff (like the control plane and networking with Fargate), you get to focus on what matters: deploying cool apps and seeing them live with just a few commands.

Keep experimenting and building!😁


Written by

Shah Abul Kalam A K

Hi, my name is Shah and I'm learning my way through various fascinating technologies.