Deploying a Three-Tier Microservices Architecture on AWS EKS with Helm


Introduction
In this guide, we will deploy Stan’s Robot Shop, a sample e-commerce application that demonstrates a three-tier microservices architecture, onto an Amazon EKS (Elastic Kubernetes Service) cluster using Helm charts. This application consists of multiple microservices (about eight services) and a couple of databases, mimicking a real production-style environment. We’ll cover the project’s architecture, the prerequisites and AWS infrastructure setup (IAM roles, VPC, load balancer controller, storage driver), how to install the Helm chart on EKS, and finally how to access and optionally load-test and monitor the running application. The goal is to provide a comprehensive, step-by-step walkthrough that you can follow to get the entire system up and running on AWS.
Project Overview and Architecture
Stan’s Robot Shop is a cloud-native application built as a three-tier architecture: a presentation layer (web UI), an application/logic layer (multiple microservices for various backend functions), and a data layer (databases and stateful storage). Instead of a monolith, the app is broken into independent microservices such as user login/registration, product catalogue, shopping cart, payment, shipping, and others. Each microservice is implemented in different languages/frameworks, showcasing a polyglot tech stack:
Frontend: An AngularJS-based web UI (served by an Nginx web server).
Backend Services (Logic Layer): Multiple services written in Node.js (Express), Java (Spring Boot), Python (Flask), Go, and PHP, each handling a specific domain (e.g. cart, user, catalogue, orders, etc.). These services communicate internally (for example, via REST or messaging).
Datastores (Data Layer): MongoDB and MySQL act as the primary databases for persisting data (e.g. product info, user data), and Redis is used as an in-memory datastore (for caching or session/cart data). There’s also a RabbitMQ service for messaging between parts of the system, enabling asynchronous processing.
This multi-service setup with diverse technologies reflects a realistic e-commerce application. The microservices architecture allows each component to be scaled or updated independently without affecting the whole system. In our deployment, all these services will run in a Kubernetes cluster (EKS), each as a Deployment or StatefulSet with corresponding Services. The separation of the web tier, application logic tier, and data tier – all deployed on EKS – exemplifies the resilient three-tier design.
Prerequisites
Before we begin, ensure you have the following prerequisites in place:
AWS Account: You will be deploying resources to AWS, so you need an AWS account with sufficient permissions (Administrator or a role that can create EKS clusters, IAM roles, VPCs, etc.).
AWS CLI: Installed and configured with your AWS credentials. This lets you interact with AWS from the command line (e.g., to fetch cluster details or configure credentials).
kubectl: The Kubernetes CLI for interacting with the cluster. Ensure you have kubectl installed (matching your cluster’s Kubernetes version) and on your PATH. You can install it via Amazon’s instructions (for example, by downloading the binary from an S3 URL and making it executable).
eksctl: A CLI tool to create and manage EKS clusters easily. We'll use eksctl to create the Kubernetes cluster and to set up some add-ons. Install eksctl v0.148.0 or later on your system.
Helm: The package manager for Kubernetes, used to deploy the application’s Helm chart. Install Helm v3 on your system.
Docker (optional): If you want to test locally or build images, Docker is useful. (For this deployment, you won’t need to build images manually because pre-built images are available on Docker Hub.)
Using AWS CloudShell: Instead of setting up a local CLI environment, you can use AWS CloudShell (a browser-based terminal in the AWS Console). CloudShell comes with the AWS CLI pre-configured and runs in your AWS account. You may need to install eksctl and helm inside CloudShell (via curl scripts), since they might not be pre-installed. Using CloudShell is convenient because it already has credentials and network access to AWS. Whether you use CloudShell or your local machine, make sure the above tools are installed and configured before proceeding.
EKS Cluster Setup
Now we will set up an EKS Kubernetes cluster to host the application. This includes creating the cluster itself and then configuring necessary add-ons such as IAM OIDC provider, the AWS Load Balancer Controller (for ingress), and the EBS CSI driver (for persistent storage). The following steps assume you’re operating in the us-east-1 region (North Virginia); you can change the region and names as needed.
1. Create an EKS Cluster with eksctl
First, create the EKS cluster using eksctl. This command will provision the control plane, worker nodes (using a default node group), networking (VPC and subnets), and other required components in AWS automatically:
eksctl create cluster --name demo-cluster-three-tier-1 --region us-east-1
This will spin up an EKS cluster named demo-cluster-three-tier-1 in the us-east-1 region. By default, eksctl will create a new VPC with subnets across availability zones for your cluster and attach the necessary IAM roles to the nodes. The creation process can take 10-15 minutes, so wait until it finishes. Once done, eksctl will update your kubeconfig (usually ~/.kube/config) so that kubectl points at the new cluster. You can verify the cluster by running a simple command, for example:
kubectl get nodes
to see if the nodes are up (this requires kubectl to be configured in the same environment where eksctl ran).
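If you are working from a different shell or machine than the one where eksctl ran, you can regenerate the kubeconfig entry yourself with the standard AWS CLI command:
aws eks update-kubeconfig --region us-east-1 --name demo-cluster-three-tier-1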
2. Enable IAM OIDC Provider for the Cluster
Amazon EKS uses an OpenID Connect (OIDC) provider to allow Kubernetes service accounts to assume IAM roles. This is needed for certain cluster add-ons (like the ALB controller and CSI driver) to interact with AWS resources securely using IAM. We will ensure the IAM OIDC provider is associated with our cluster.
Check and Associate OIDC: Use the AWS CLI and eksctl to set up the provider if not already present:
export cluster_name="demo-cluster-three-tier-1"
# Get the OIDC issuer ID for the cluster
oidc_id=$(aws eks describe-cluster --name $cluster_name --query "cluster.identity.oidc.issuer" --output text | cut -d '/' -f5)
# Check if OIDC provider is already associated
aws iam list-open-id-connect-providers | grep "$oidc_id" || echo "OIDC provider not found"
If the above check returns nothing (no OIDC provider found), associate one with the cluster by running:
eksctl utils associate-iam-oidc-provider --cluster "$cluster_name" --approve
This command configures the IAM OIDC provider for your EKS cluster. With OIDC in place, Kubernetes service accounts can later be linked to IAM roles via IAM service account annotations – a requirement for the AWS Load Balancer Controller and EBS CSI driver.
3. Install the AWS Load Balancer Controller (ALB Ingress Controller)
To expose the microservices to the internet, we’ll use an Application Load Balancer (ALB) Ingress Controller for AWS EKS. This controller will watch Ingress resources in the cluster and create/manage an AWS ALB accordingly, allowing external traffic to reach our services. The ALB controller needs specific IAM permissions and will run as a deployment in our cluster. We’ll set it up step by step:
a. Create IAM Policy for ALB Controller: AWS provides a pre-defined IAM policy JSON for the ALB controller. Download this policy document and create an IAM policy from it:
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.11.0/docs/install/iam_policy.json
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json
The first command fetches the IAM permissions policy for the AWS Load Balancer Controller, and the second creates a new IAM policy called AWSLoadBalancerControllerIAMPolicy in your AWS account using that document.
b. Create an IAM Role for the ALB Controller: Next, we create an IAM role and Kubernetes service account for the ALB controller to use. We can do this in one step with eksctl, which will create an IAM role and associate it with a K8s service account named aws-load-balancer-controller in the kube-system namespace:
eksctl create iamserviceaccount --cluster=$cluster_name --namespace=kube-system --name=aws-load-balancer-controller \
--role-name "AmazonEKSLoadBalancerControllerRole" \
--attach-policy-arn=arn:aws:iam::<YOUR_AWS_ACCOUNT_ID>:policy/AWSLoadBalancerControllerIAMPolicy \
--approve
Replace <YOUR_AWS_ACCOUNT_ID> with your AWS account number. This command creates an IAM role AmazonEKSLoadBalancerControllerRole with the policy we made and links it to the aws-load-balancer-controller service account in the cluster. The --approve flag automatically applies the changes. Essentially, this grants the ALB Ingress Controller pod the necessary AWS permissions (via IAM) to create and manage load balancers, security groups, and related resources on our behalf.
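If you don't want to hard-code your account number, you can look it up with a standard STS call and splice it in; a minimal sketch of the same command:
account_id=$(aws sts get-caller-identity --query Account --output text)
eksctl create iamserviceaccount --cluster=$cluster_name --namespace=kube-system --name=aws-load-balancer-controller \
--role-name "AmazonEKSLoadBalancerControllerRole" \
--attach-policy-arn=arn:aws:iam::${account_id}:policy/AWSLoadBalancerControllerIAMPolicy \
--approve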
c. Deploy the ALB Controller via Helm: Now we deploy the controller into the cluster using Helm. First, add the AWS EKS Helm charts repository and update it, so we have access to the ALB controller chart:
helm repo add eks https://aws.github.io/eks-charts
helm repo update
Next, install the AWS Load Balancer Controller chart from that repo:
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system \
--set clusterName=$cluster_name \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller \
--set region=us-east-1 \
--set vpcId=<YOUR_VPC_ID>
In the above command, make sure to replace <YOUR_VPC_ID> with the ID of the VPC where your EKS cluster is running (if you used eksctl without special configuration, you can find the VPC ID via aws eks describe-cluster --name $cluster_name --query "cluster.resourcesVpcConfig.vpcId"). We also specify the cluster's name and region. We set serviceAccount.create=false and serviceAccount.name=aws-load-balancer-controller because we already created the service account with an IAM role in the previous step; the Helm chart will use that existing service account.
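As with the account ID, you can capture the VPC ID into a shell variable instead of pasting it (adding --output text strips the quotes from the CLI's JSON output), then pass --set vpcId=$vpc_id to the helm install command above:
vpc_id=$(aws eks describe-cluster --name "$cluster_name" --query "cluster.resourcesVpcConfig.vpcId" --output text)
echo "$vpc_id"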
This Helm installation will create a deployment called aws-load-balancer-controller in the kube-system namespace. After a minute, verify that the controller pod is running:
kubectl get deployment -n kube-system aws-load-balancer-controller
You should see the deployment with at least 1/1 pod available, indicating the ALB Ingress Controller is up. At this point, the cluster is ready to create AWS Load Balancers whenever we define an Ingress resource.
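If the deployment does not become ready, the controller's logs usually explain why (for example, a missing IAM permission or a wrong VPC ID); inspecting them is a standard kubectl operation:
kubectl logs -n kube-system deployment/aws-load-balancer-controller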
4. Enable the EBS CSI Driver (Persistent Storage)
Our application includes stateful services (MongoDB, MySQL, Redis), which require persistent storage in Kubernetes. AWS EKS supports the EBS CSI Driver to provision Amazon EBS volumes for Kubernetes PersistentVolumeClaims. We need to install this driver (as an EKS add-on) so that our database pods can have durable storage. Before installing, we must create an IAM role for the CSI driver.
a. Create IAM Role for EBS CSI: We will create an IAM role that grants permissions for EBS volume operations, and associate it with the EBS CSI driver’s service account:
eksctl create iamserviceaccount --name ebs-csi-controller-sa --namespace kube-system --cluster $cluster_name \
--role-name "AmazonEKS_EBS_CSI_DriverRole" --role-only \
--attach-policy-arn arn:aws:iam::aws:policy/service-role/AmazonEBSCSIDriverPolicy \
--approve
This command creates an IAM role named AmazonEKS_EBS_CSI_DriverRole with the Amazon-managed policy AmazonEBSCSIDriverPolicy attached (which allows Kubernetes to manage EBS volumes). We used --role-only because the EKS add-on will create the service account for us; we just need the role ready.
b. Install the EBS CSI Driver Add-on: Now enable the EBS CSI driver for your cluster:
eksctl create addon --name aws-ebs-csi-driver --cluster $cluster_name \
--service-account-role-arn arn:aws:iam::<YOUR_AWS_ACCOUNT_ID>:role/AmazonEKS_EBS_CSI_DriverRole --force
This command tells EKS to install the official aws-ebs-csi-driver add-on in the cluster, using the IAM role we just created for its service account. After a short time, the EBS CSI driver pods will be running in the kube-system namespace (you can check with kubectl get pods -n kube-system -l app=ebs-csi-controller). Once this is complete, your cluster can dynamically provision EBS volumes for any PersistentVolumeClaims that our application's StatefulSets create. In summary, the cluster is now configured with everything needed: an OIDC provider, the ALB ingress controller, and the EBS CSI driver – fully ready to host our microservices application.
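As a quick sanity check, you can also list the cluster's storage classes; an eksctl-created cluster typically ships with a default gp2 class backed by the EBS provisioner, which the application's PersistentVolumeClaims can use:
kubectl get storageclass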
Deploying the Microservices with Helm
With the infrastructure in place, we can deploy the Robot Shop application onto the EKS cluster using Helm. The project's GitHub repository contains a pre-built Helm chart that defines all the Kubernetes objects (Deployments, Services, StatefulSets, etc.) for the 3-tier app. We will use that chart to install the whole stack in one go.
1. Clone the Repository (if not already done)
If you haven’t already, clone the project repository and navigate to the Helm chart directory:
git clone https://github.com/AbhishekTechie10/three-tier-architecture-demo.git
cd three-tier-architecture-demo/EKS/helm
This repository contains the code and manifests for the Robot Shop application. In particular, the EKS/helm directory contains a Chart.yaml, templates, and an ingress.yaml file – everything needed to deploy on EKS with one Helm command. (If you already cloned the repo earlier for local testing, just navigate to EKS/helm.)
Note: The Helm chart is configured to use pre-built Docker images for all services (hosted on Docker Hub), so you do not need to build or push any images manually. The default values should work out-of-the-box for a demo deployment.
2. Install the Helm Chart on EKS
We’ll deploy all components into a dedicated Kubernetes namespace (for cleanliness). Create the namespace and install the Helm release:
kubectl create namespace robot-shop
helm install robot-shop . -n robot-shop
This tells Helm to install the chart in the current directory (.) into the robot-shop namespace, with the release name robot-shop. Helm will then create all the necessary objects: deployments for each microservice, stateful sets for database components, services for internal communication, and so on. After running the above commands, Helm's output should indicate that it created a number of Kubernetes resources. Indeed, this single step provisions "all the services, deployments, and configurations necessary for the Robot Shop application to run inside your EKS cluster."
Give it a few minutes for Kubernetes to pull the container images and start all the pods. You can run the following to check the status of the pods:
kubectl get pods -n robot-shop
All pods should eventually show as Running (or Completed for any one-time jobs). If any pod is stuck in CrashLoopBackOff, check its logs; in a normal scenario the pods start correctly, since we're using known-good images.
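If you do need to debug, the standard kubectl commands apply; <pod-name> below is a placeholder for whatever name kubectl get pods reports:
kubectl get pods -n robot-shop
kubectl logs -n robot-shop <pod-name>
kubectl describe pod -n robot-shop <pod-name>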
3. Configure Ingress and Load Balancer (Expose the Application)
At this point, the application is running internally in the cluster, but we need to make it accessible from the outside. To do that, we'll create a Kubernetes Ingress resource, which the AWS ALB Controller will pick up to create an external Application Load Balancer. The Helm chart directory provides an ingress.yaml file with the necessary Ingress configuration (it likely defines path routing to the web service, which is the frontend).
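For reference, a minimal ALB Ingress for this kind of setup typically looks like the sketch below. The annotations are the standard ones documented for the AWS Load Balancer Controller; the service name, port, and other details are assumptions – the actual ingress.yaml in the repo may differ:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: robot-shop
  namespace: robot-shop
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing   # public-facing ALB
    alb.ingress.kubernetes.io/target-type: ip           # route directly to pod IPs
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web          # the Robot Shop frontend service
                port:
                  number: 8080     # port as defined by the chart's web Service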
Apply the ingress manifest:
kubectl apply -f ingress.yaml
This creates an Ingress resource (in the robot-shop namespace) that maps incoming HTTP requests to the web service of our application. The AWS Load Balancer Controller we installed will notice this and immediately start provisioning an ALB to satisfy the Ingress. Within a few minutes, a new ALB will be created in your AWS account (you can check the EC2 > Load Balancers section in the AWS console to watch its progress). Under the hood, the controller creates target groups pointing at the web service's pods, but you primarily interact with the Ingress itself.
It may take 5-10 minutes for the Application Load Balancer to be fully provisioned and reach an "active" state. You can verify that the Ingress has been assigned an address by running:
kubectl get ingress -n robot-shop
Once the ADDRESS column shows a value (an AWS ALB DNS name), your application is available at that URL. For example, you might see an address like k8s-robotshop-ABCDEFGHI.us-east-1.elb.amazonaws.com. You can also get the ALB's DNS name from the AWS console.
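For scripting, you can extract just the hostname from the Ingress status with a jsonpath query and probe it with curl (this assumes a single Ingress in the namespace):
alb_dns=$(kubectl get ingress -n robot-shop -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}')
curl -I "http://$alb_dns/"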
4. Access the Application
When the ALB is ready, copy its DNS name and try opening it in your web browser (it will be serving on HTTP port 80 by default). You should see the Robot Shop storefront load in your browser, confirming that the frontend service is reachable. From there, you can navigate the site: create a user account, browse products, add items to your cart, and perform a checkout. These actions will trigger the various microservices (user, catalogue, cart, order, payment, etc.) to work together. If all went well, the order checkout will complete and you’ll get an order confirmation, indicating the full path through the three tiers is functioning.
At this stage, you have successfully deployed the full microservices-based three-tier application on AWS EKS. The ALB is handling external traffic and routing through the Ingress to the web frontend, which in turn calls the internal services. Kubernetes is managing the scaling and networking of all those components behind the scenes.
Tip: You can also verify that all Services are properly exposed by running kubectl get svc -n robot-shop. Most services are ClusterIP (internal only); the Ingress/ALB provides the external access point. If you describe the Ingress (kubectl describe ingress -n robot-shop), you will see the ALB-related annotations and the rules mapping the /* path to the web service.
5. (Optional) Clean Up Resources
If you are done experimenting, remember to delete the AWS resources to avoid ongoing charges. The easiest way is to delete the EKS cluster, which will also delete node instances and load balancers:
eksctl delete cluster --name demo-cluster-three-tier-1 --region us-east-1
This will tear down the entire cluster and its associated resources (it may take a few minutes to complete). Afterward, verify that the ALB, EC2 instances, and any EBS volumes created are gone, and double-check the AWS console for any remaining pieces to delete manually. Only do this clean-up step when you no longer need the cluster or application running.
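One caution: resources created by in-cluster controllers (the ALB from the Ingress, EBS volumes from PVCs) are sometimes orphaned if the cluster is deleted out from under them. Deleting the application first is a safer ordering – a minimal sketch:
# remove the Ingress first so the ALB controller deletes the ALB cleanly
kubectl delete -f ingress.yaml
# uninstall the app (EBS volumes behind PVCs may still need manual deletion)
helm uninstall robot-shop -n robot-shop
# then delete the cluster
eksctl delete cluster --name demo-cluster-three-tier-1 --region us-east-1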
Optional: Load Testing and Monitoring
After deployment, you might want to generate some traffic on the application or integrate monitoring to observe its behavior. Stan’s Robot Shop comes with a couple of additional components for these purposes.
Load Generation: The repository includes a utility in the load-gen directory to simulate user traffic to the app. This load generator is a Python/Locust-based tool that can continuously hit the various endpoints of the Robot Shop (simulating users browsing and purchasing). It is not deployed by default. You can run it manually in a Docker container, or even deploy it into your EKS cluster as a Job/Deployment. (The repo provides an example Kubernetes descriptor for running the load generator in the K8s/ folder.) Using this, you can apply load to the system and test its resilience – for example, Locust can simulate multiple concurrent users adding items to carts and checking out.
Application Monitoring & Metrics: The microservices are instrumented for observability. In fact, the application was originally built to showcase Instana APM, and each service has Instana tracing agents included (if you have an Instana instance and configure the agent key, you can see end-to-end distributed traces in Instana's dashboard). For a more open-source approach, certain services also expose Prometheus metrics endpoints. Notably, the cart and payment services have HTTP endpoints at /metrics which serve Prometheus-formatted metrics. These include counters such as the total number of items added to carts and purchased, and histograms of cart sizes and values. If you set up a Prometheus server (either in-cluster or via Amazon Managed Service for Prometheus) to scrape these endpoints, you can monitor the app's performance and usage, and visualize the data with Grafana or CloudWatch. You can also monitor cluster-level metrics (pod CPU/memory, etc.) via CloudWatch Container Insights or Prometheus.
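To peek at these metrics without standing up Prometheus, you can port-forward the cart service and curl its /metrics endpoint. The service name cart and port 8080 follow Robot Shop's conventions but are assumptions here – verify them with kubectl get svc -n robot-shop first:
kubectl port-forward -n robot-shop svc/cart 8080:8080 &
curl -s http://localhost:8080/metrics | head
kill %1   # stop the port-forward when done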
By combining load generation and monitoring, you can observe how the system behaves under load – for example, watching the metrics for database usage or throughput as Locust drives traffic. This can be a great learning exercise in scaling and performance tuning for a microservices architecture.
Conclusion: We have successfully deployed a complex multi-service application using a three-tier architecture on AWS EKS. We set up the necessary AWS infrastructure (IAM roles, ALB ingress controller, CSI storage) and used Helm to package and install the entire stack in one command. The result is a working e-commerce site with a robust architecture: a presentation tier served via ALB, a set of independent microservices (logic tier) running in Kubernetes, and persistent backing stores on AWS (data tier). This demonstration not only shows how to deploy an app with many components on Kubernetes, but also how AWS services like ALB and EBS integrate with EKS to support real-world applications. Feel free to explore further – for instance, scaling the deployments, introducing faults to test resiliency, or extending the monitoring – to deepen your understanding of running microservices on EKS. Happy deploying!