Streamline Your Kubernetes Workflow: EKS Clusters and ALB Made Easy


When it comes to managing Kubernetes on AWS, Elastic Kubernetes Service (EKS) is the go-to choice. But let’s be honest — creating an EKS cluster using the AWS Console is a painfully slow and click-heavy experience. Tons of screens, configuration wizards, and waiting.
That’s why most engineers and organizations turn to a much simpler CLI tool: eksctl.
In this post, I’ll walk you through the steps I took to:
- Create an EKS cluster using eksctl
- Deploy a sample app
- Set up the AWS Application Load Balancer (ALB) Controller to make the app accessible
Let’s dive in:
Step 1: Create Your EKS Cluster (the Easy Way)
First things first: install eksctl. It’s a CLI tool built specifically for EKS by Weaveworks and AWS.
Once you have it installed, run this command to create a cluster using Fargate (no EC2 node groups needed!):
eksctl create cluster --name demo-cluster --region us-east-1 --fargate
Heads up: This command can take several minutes as AWS spins up the cluster and associated resources.
Once it’s ready, you can verify your new cluster in the EKS dashboard under the AWS Console.
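You can also confirm it from the CLI (assuming the same demo-cluster name and us-east-1 region used above):

```shell
# List EKS clusters in the region to confirm demo-cluster exists
eksctl get cluster --region us-east-1
```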
Step 2: Update Your kubeconfig to Use kubectl
To interact with your new EKS cluster using kubectl, you’ll need to update your kubeconfig:
aws eks update-kubeconfig --name demo-cluster --region us-east-1
Now you're ready to run Kubernetes commands locally against your EKS cluster.
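A quick sanity check confirms kubectl is now pointed at the EKS cluster:

```shell
# Show the context kubectl will use (should reference demo-cluster)
kubectl config current-context

# Should print the cluster's control-plane endpoint
kubectl cluster-info
```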
Step 3: Create a Fargate Profile for Your App
By default, the Fargate profiles eksctl creates only match pods in the default and kube-system namespaces, so pods elsewhere won’t be scheduled. Besides, it’s a good idea to keep apps isolated in their own namespaces.
Let’s create a Fargate profile so our sample app can run in a game-2048 namespace (named after the app):
eksctl create fargateprofile \
--cluster demo-cluster \
--region us-east-1 \
--name alb-sample-app \
--namespace game-2048
You can verify this Fargate profile under the Compute tab in the EKS Console.
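If you prefer the CLI, the same check works from your terminal:

```shell
# List Fargate profiles attached to the cluster
eksctl get fargateprofile --cluster demo-cluster --region us-east-1
```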
Step 4: Deploy a Sample App
Let’s deploy the 2048 game using the example Kubernetes manifests from the AWS Load Balancer Controller project.
Run:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/examples/2048/2048_full.yaml
This deploys:
- A Deployment
- A Service
- An Ingress resource
But hold on! Although the app is running, you won’t be able to access it just yet. Why?
Why You Can’t Access the App (Yet)
Your ingress resource is in place, but it needs an Ingress Controller to actually create and manage an ALB (Application Load Balancer) in AWS.
So next, we’ll set up the AWS Load Balancer Controller in our cluster.
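You can see the gap for yourself: the Ingress resource exists, but its ADDRESS stays empty until a controller reconciles it and provisions an ALB:

```shell
# ADDRESS will be blank until the AWS Load Balancer Controller is installed
kubectl get ingress -n game-2048
```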
Step 5: Set Up IAM for the ALB Controller
The ALB Controller needs permission to talk to AWS APIs. We'll give it access using IAM roles and policies.
Associate the OIDC Provider
First, associate an IAM OIDC provider with your cluster (if not already done):
eksctl utils associate-iam-oidc-provider --cluster demo-cluster --approve
Download the IAM Policy JSON
Here, I am using the IAM policy JSON that the project publishes in its documentation:
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.11.0/docs/install/iam_policy.json
Create an IAM Policy
aws iam create-policy \
--policy-name AWSLoadBalancerControllerIAMPolicy \
--policy-document file://iam_policy.json
Create an IAM Role for the Controller
eksctl create iamserviceaccount \
--cluster=demo-cluster \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--role-name AmazonEKSLoadBalancerControllerRole \
--attach-policy-arn=arn:aws:iam::<your-aws-account-id>:policy/AWSLoadBalancerControllerIAMPolicy \
--approve
Replace <your-aws-account-id> with your actual AWS account ID.
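This command creates a Kubernetes service account in kube-system annotated with the new role’s ARN. You can confirm the wiring is in place with:

```shell
# Look for the eks.amazonaws.com/role-arn annotation linking the
# service account to the IAM role created above
kubectl describe serviceaccount aws-load-balancer-controller -n kube-system
```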
Step 6: Install the ALB Controller via Helm
Make sure you have Helm installed. Then:
Add the EKS Helm repo
helm repo add eks https://aws.github.io/eks-charts
Update your Helm repos
helm repo update eks
Install the ALB Controller
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
-n kube-system \
--set clusterName=demo-cluster \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller \
--set region=us-east-1 \
--set vpcId=<your-vpc-id>
Tip: You can get the VPC ID from your EKS cluster’s Networking tab in the AWS Console.
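Alternatively, you can pull the VPC ID straight from the CLI:

```shell
# Query the cluster's VPC ID without opening the Console
aws eks describe-cluster \
  --name demo-cluster \
  --region us-east-1 \
  --query "cluster.resourcesVpcConfig.vpcId" \
  --output text
```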
Step 7: Verify It’s Working
Run:
kubectl get deployment -n kube-system aws-load-balancer-controller
You should see the controller running successfully in the kube-system namespace.
Now head to your Ingress resource in the EKS Console. You’ll notice that the controller created an AWS ALB and associated it with your app.
Try opening the URL for the ALB — your app should now be publicly accessible!
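The ALB’s DNS name also shows up on the Ingress itself once provisioning finishes, which can take a minute or two (the example manifest names the Ingress ingress-2048):

```shell
# Grab the ALB hostname from the Ingress status
kubectl get ingress ingress-2048 -n game-2048 \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```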
Wrapping Up
We just:
- Created an EKS cluster using eksctl
- Deployed a sample app to its own namespace on Fargate
- Set up the AWS Load Balancer Controller to expose it to the world
This is the kind of setup that modern cloud-native apps require. And now that you’ve done it once, you can automate even further, integrate it into CI/CD pipelines, and scale up your apps securely and efficiently.