How to Easily Launch a Three-Tier E-Commerce Application on AWS EKS 🚀

Pravesh Sudha

💡 Introduction

Welcome to the world of clusters and automation!

In this blog, we're diving into an exciting real-world project: deploying a Three-Tier E-Commerce Robot Shop application on AWS EKS. This application is composed of eight microservices, each built with a different programming language and framework, and is backed by several data stores (MongoDB, MySQL, and Redis)—mirroring the kind of architecture you'd typically find in a production-level environment.

Whether you're a DevOps enthusiast or someone preparing for real company-level projects, this hands-on walkthrough will give you a solid understanding of deploying complex applications on Kubernetes using AWS EKS.

So without further ado, let’s roll up our sleeves and get started.


💡 Pre-Requisites

Before diving into the deployment process, make sure you have the following tools installed and configured on your system. These will help us interact with AWS, manage our Kubernetes cluster, and provision resources efficiently:

  • Docker: For building and testing the microservices locally before pushing them to the cloud.

  • AWS CLI: To authenticate and interact with your AWS account from the command line.

  • eksctl: A simple CLI tool for creating, managing, and deleting EKS clusters.

  • Helm: The package manager for Kubernetes, which simplifies the deployment of applications and services on the cluster.

Once these tools are set up, we’ll be ready to provision infrastructure and deploy our application seamlessly on AWS EKS.
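Before moving on, it's worth confirming that each binary is actually on your PATH (kubectl is included below too, since the later steps use it; this loop is just a quick sanity check):

```shell
# Check that each required CLI is available; prints "found" or "MISSING" per tool
for tool in docker aws eksctl helm kubectl; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```

If any tool reports MISSING, install it before continuing.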


💡 Understanding the Application

Before we deploy to AWS EKS, let’s take a closer look at the application we’ll be working with.

The complete codebase is available on GitHub:
👉 three-tier-architecture-demo

This project, Stan's Robot Shop, is a sample microservices-based e-commerce application designed for testing container orchestration, observability, and cloud-native deployment strategies. It mimics a typical three-tier architecture (Frontend, Logic, and Backend) using a wide variety of technologies and languages, making it a great learning resource for real-world DevOps practices.

🔧 Tech Stack

This application includes services built using:

  • Frontend: AngularJS (1.x)

  • Backend / Logic Layer:

    • Node.js (Express)

    • Java (Spring Boot)

    • Python (Flask)

    • Go (Golang)

    • PHP (Apache)

  • Databases / State:

    • MongoDB

    • MySQL (with MaxMind data)

    • Redis (in-memory store using StatefulSets in EKS, backed by EBS volumes)

  • Others:

    • RabbitMQ (for messaging)

    • Nginx (reverse proxy)

🛒 How the App Works

Stan’s Robot Shop simulates a real e-commerce workflow:

  1. Homepage → Navigate to the landing page.

  2. Register/Login → Create an account and log in.

  3. Catalog → Browse available robots.

  4. Cart → Add items to your shopping cart.

  5. Checkout → Complete the purchase.

  6. Payment → Finalize the order and receive an Order ID.

Instead of bundling everything into a monolithic codebase, the app is broken down into independent microservices such as login, register, catalog, cart, payment, etc. This modular design allows teams to update or scale features individually without risking the entire application.
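To make that modular design concrete, here is an illustrative compose-style sketch of how a few of these services appear as independent entries (service names follow the Robot Shop convention, but the repo's actual docker-compose.yaml is the source of truth):

```yaml
# Illustrative only -- see the repo's real docker-compose.yaml for the full list
services:
  web:              # AngularJS front end served behind Nginx
    image: robotshop/rs-web
    ports:
      - "8080:8080"
  catalogue:        # Node.js product catalogue, reads from MongoDB
    image: robotshop/rs-catalogue
  cart:             # Node.js cart service, keeps state in Redis
    image: robotshop/rs-cart
  mongodb:
    image: robotshop/rs-mongodb
  redis:
    image: redis
```

Because each service is its own entry with its own image, any one of them can be rebuilt, redeployed, or scaled without touching the others.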


🧪 Local Testing with Docker Compose

Let’s quickly spin it up locally using Docker to understand the app’s workflow before we deploy it to AWS.

Step 1: Clone the Repository

git clone https://github.com/iam-veeramalla/three-tier-architecture-demo
cd three-tier-architecture-demo/

Step 2: Pull the Docker Images

Make sure your Docker daemon is running, then pull the necessary images from Docker Hub:

docker-compose pull

Step 3: Start the Application

docker-compose up

Step 4: Open in Browser

Visit http://localhost:8080 — you should see the Robot Shop homepage live!

Now that the application is up and running locally, go ahead and register a user, add some robots to your cart, and complete a sample order. This quick local test gives you confidence that the app is working as expected before we take it to the cloud.


💡 Setting Up the Cluster

Now that we've tested the application locally, it’s time to deploy it on a fully managed Kubernetes cluster using Amazon EKS.

🔧 Step 1: Create the EKS Cluster

We'll start by creating an EKS cluster using eksctl. This tool simplifies EKS cluster creation and management.

eksctl create cluster --name demo-cluster-three-tier-1 --region us-east-1

This command provisions an EKS cluster named demo-cluster-three-tier-1 in the us-east-1 (N. Virginia) region. The process may take around 10–15 minutes—grab a coffee while it spins up!

🔐 Step 2: Configure IAM OIDC Provider

Amazon EKS uses an OIDC provider to allow your Kubernetes service accounts to interact securely with other AWS services. Let’s set that up:

export cluster_name=demo-cluster-three-tier-1
oidc_id=$(aws eks describe-cluster --name $cluster_name --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f 5)

Now verify if the OIDC provider is already associated:

aws iam list-open-id-connect-providers | grep $oidc_id | cut -d "/" -f4

If the above command doesn't return any output, associate the OIDC provider manually:

eksctl utils associate-iam-oidc-provider --cluster $cluster_name --approve
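The `cut -d '/' -f 5` in the snippet above simply grabs the last path segment of the issuer URL. Here is the same extraction run against a sample issuer URL in the format EKS returns (the ID below is a made-up placeholder, not a real provider):

```shell
# Sample issuer URL in the shape EKS returns (placeholder ID, not a real cluster)
issuer="https://oidc.eks.us-east-1.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E"

# Splitting on "/" makes field 5 the provider ID at the end of the URL
oidc_id=$(echo "$issuer" | cut -d '/' -f 5)
echo "$oidc_id"   # prints EXAMPLED539D4633E53DE1B716D3041E
```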

🌐 Step 3: Set Up the AWS Load Balancer Controller

To expose our microservices to the internet, we'll deploy an Application Load Balancer (ALB) using the AWS Load Balancer Controller.

✅ Download IAM Policy for ALB Controller

curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.11.0/docs/install/iam_policy.json

✅ Create the IAM Policy

aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam_policy.json

✅ Create IAM Role (replace <your-aws-account-id> with your actual AWS Account ID)

eksctl create iamserviceaccount \
  --cluster=demo-cluster-three-tier-1 \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --role-name AmazonEKSLoadBalancerControllerRole \
  --attach-policy-arn=arn:aws:iam::<your-aws-account-id>:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve

✅ Add Helm Repository & Deploy ALB Controller

helm repo add eks https://aws.github.io/eks-charts
helm repo update eks

Now install the controller using Helm (replace <your-vpc-id> and <region> accordingly):

helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system \
  --set clusterName=demo-cluster-three-tier-1 \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=us-east-1 \
  --set vpcId=<your-vpc-id>

Verify the deployment:

kubectl get deployment -n kube-system aws-load-balancer-controller

Once the ALB controller is running, we're ready to route ingress traffic to our microservices.

💾 Step 4: Configure EBS CSI Driver (Persistent Storage)

To handle persistent storage—especially for services like MongoDB, MySQL, and Redis—we'll enable the EBS CSI driver.

✅ Create IAM Role for the CSI Plugin

eksctl create iamserviceaccount \
    --name ebs-csi-controller-sa \
    --namespace kube-system \
    --cluster demo-cluster-three-tier-1 \
    --role-name AmazonEKS_EBS_CSI_DriverRole \
    --role-only \
    --attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
    --approve

✅ Install the EBS CSI Addon (replace <AWS-ACCOUNT-ID>)

eksctl create addon \
  --name aws-ebs-csi-driver \
  --cluster demo-cluster-three-tier-1 \
  --service-account-role-arn arn:aws:iam::<AWS-ACCOUNT-ID>:role/AmazonEKS_EBS_CSI_DriverRole \
  --force

Once this step is complete, your cluster will be ready to support stateful applications requiring persistent volumes backed by AWS EBS.
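With the driver in place, stateful services can claim EBS-backed volumes through a StorageClass. The Helm chart in this repo ships its own manifests, but a minimal sketch of what such a claim looks like (the names `ebs-gp3` and `redis-data` here are illustrative, not taken from the chart):

```yaml
# Illustrative sketch: an EBS-backed StorageClass and a claim against it
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ebs-gp3
provisioner: ebs.csi.aws.com          # the CSI driver installed above
volumeBindingMode: WaitForFirstConsumer
parameters:
  type: gp3
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data
  namespace: robot-shop
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ebs-gp3
  resources:
    requests:
      storage: 1Gi
```

When a pod consumes such a claim, the driver dynamically provisions an EBS volume in the node's availability zone and attaches it.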

With the cluster, OIDC provider, ALB Controller, and storage plugin all set up, we're now fully equipped to deploy our microservices into the Kubernetes cluster.


🚀 Deploying the Robot Shop Application on EKS

With the EKS cluster ready and all essential components in place, it's time to deploy our microservice application — Stan's Robot Shop — using Helm.

🛠 Step 1: Navigate to the Helm Directory

First, move into the Helm chart directory within the cloned GitHub repository. This directory contains all the Kubernetes resource definitions required to deploy the app.

cd three-tier-architecture-demo/EKS/helm

📦 Step 2: Install the Application with Helm

Now, create a dedicated namespace for the application and install the Helm chart:

kubectl create ns robot-shop
helm install robot-shop --namespace robot-shop .

This will provision all the services, deployments, and configurations necessary for the Robot Shop application to run inside your EKS cluster.

🌐 Step 3: Expose the Application via Ingress

Once all the pods are up and running, we need to expose the application to the outside world. This is done via the Ingress resource defined in the provided ingress.yaml file:

kubectl apply -f ingress.yaml

This command creates an Ingress resource that the AWS Load Balancer Controller will detect and use to provision an Application Load Balancer in the cluster's VPC.
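For reference, an Ingress that the AWS Load Balancer Controller picks up typically looks like the sketch below. The repo's ingress.yaml is authoritative; the service name and port here are illustrative:

```yaml
# Illustrative ALB Ingress sketch -- defer to the repo's ingress.yaml for real values
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: robot-shop
  namespace: robot-shop
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb               # handled by the AWS Load Balancer Controller
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web             # front-end service of the chart
                port:
                  number: 8080
```

The `ingressClassName: alb` (or the older `kubernetes.io/ingress.class: alb` annotation) is what tells the controller to act on this resource.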

⚠️ Note: It may take 5–10 minutes for the Load Balancer to be fully provisioned. You can monitor its status by navigating to the EC2 dashboard → Load Balancers in the AWS console.

Once the Load Balancer enters the "active" state, grab the DNS name and open it in your browser — and voilà! 🎉 You should see the Robot Shop application live and running on AWS.

🧹 Step 4: Clean Up Resources

After you've explored and tested the application, don't forget to clean up the AWS resources to avoid unexpected charges:

eksctl delete cluster --name demo-cluster-three-tier-1 --region us-east-1

This will tear down the entire cluster along with all associated resources.
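One practical note: the load balancer was created by the in-cluster controller, so it is cleaner to delete the ingress (and the Helm release) before the cluster, giving the controller a chance to remove the ALB itself. A sketch of the full teardown order:

```shell
# Delete the ingress first so the ALB Controller removes the load balancer
kubectl delete -f ingress.yaml

# Remove the application release and its namespace
helm uninstall robot-shop -n robot-shop
kubectl delete ns robot-shop

# Finally, tear down the cluster and its associated resources
eksctl delete cluster --name demo-cluster-three-tier-1 --region us-east-1
```

Afterwards, double-check the EC2 → Load Balancers page to confirm nothing was left behind.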


🧾 Conclusion

In this blog, we successfully deployed a Three-Tier E-Commerce Robot Shop Application on an AWS EKS Cluster, taking you through the complete process — from local testing with Docker to cloud deployment using Helm and Kubernetes. Along the way, we configured essential AWS services like the ALB Ingress Controller and EBS CSI Driver, giving you hands-on experience with real-world production-grade infrastructure.

This project wasn’t just a simple deployment — it was an opportunity to understand how microservices, container orchestration, and cloud-native architecture come together in modern DevOps workflows. Whether you're preparing for your next interview or sharpening your cloud skills, this exercise is a powerful addition to your DevOps toolkit.

If you found this blog helpful, feel free to connect with me on my socials for more hands-on, beginner-friendly content around DevOps, Cloud, and Automation.

Till then, Happy Learning!

– Pravesh Sudha 🚀


Written by
Pravesh Sudha

Bridging critical thinking and innovation, from philosophy to DevOps.