Deploying a 3-Tier Application on AWS EKS with a Custom Domain - Step-by-Step Guide for Beginners


In this blog, I'll walk you through how I deployed a real-world three-tier application (frontend, backend, and database) on Amazon EKS (Elastic Kubernetes Service), using a domain purchased from Namecheap and industry-standard tools like Docker, Kubernetes, Ingress, and AWS Load Balancer.

Link: github/dakshsawhneyy

In today's cloud-native era, deploying scalable applications is crucial. The 3-tier architecture offers modularity and scalability, making it a preferred choice. However, integrating this with AWS EKS and configuring a custom domain presents challenges that we'll address in this guide.

This project helped me understand how production-grade systems are built, deployed, and managed at scale using DevOps practices.

Whether you’re a student like me or a beginner DevOps enthusiast, this guide will give you a clear idea of:

  • Designing a three-tier architecture for cloud

  • Containerizing and deploying apps using Kubernetes

  • Exposing services securely with ALB + Ingress

  • Using AWS tools like EKS, IAM, and Load Balancer Controllers


Prerequisites:

To follow along, you should have:

  • Basic knowledge of Kubernetes and AWS

  • An EKS cluster up and running

  • kubectl configured to access your cluster

  • AWS CLI configured

  • A registered domain (e.g., from Namecheap); you can also follow along using a raw IP address

Let’s dive in. 🛠️


Troubleshooting & Mistakes I Made

1. IAM Role AccessDenied

I initially didn't attach the correct policy to the IAM role used by the AWS Load Balancer Controller. It threw an AccessDenied: elasticloadbalancing:DescribeListenerAttributes error.
Fix: Attach the ElasticLoadBalancingFullAccess policy to the controller's IAM role.
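
A minimal sketch of that fix with the AWS CLI; the role name here matches the one created later in this guide, so substitute whatever role your controller actually uses:

aws iam attach-role-policy --role-name AmazonEKSLoadBalancerControllerRole --policy-arn arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess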

2. Ingress Not Working

I forgot to use the correct annotation in the Ingress manifest:

alb.ingress.kubernetes.io/scheme: internet-facing

Here's how we're going to do it:


Creating An AWS "t2.micro" EC2 Instance

Create an instance and SSH into it from your terminal
Once you're in, clone the GitHub repo to pull the source code onto the instance, roughly as shown below
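
A rough sketch of those steps (the key path, instance IP, and repo URL below are placeholders for your own):

ssh -i my-key.pem ubuntu@<instance-public-ip>    # SSH into the EC2 instance
git clone <your-repo-url>                        # clone the project source code
cd <repo-folder>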


Creating a Dockerfile for React in the frontend folder

FROM node:16

WORKDIR /app

# copy the dependency manifests into the app folder and install packages
COPY package.json package-lock.json ./
RUN npm install

COPY . .

CMD ["npm","start"]

Also install Docker on your instance

sudo apt-get install docker.io
sudo usermod -aG docker $USER && newgrp docker

Now build image using command

docker build -t frontend-app .

Running the docker image

# React runs on port 3000
docker run -d -p 3000:3000 frontend-app:latest

Configuring AWS CLI with our terminal

Paste these commands into your terminal to install the AWS CLI

cd
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip
unzip awscliv2.zip
sudo ./aws/install -i /usr/local/aws-cli -b /usr/local/bin --update

Configuring AWS CLI

Create an IAM user on AWS

Give the user full administrator access permissions

While creating it, select CLI access

You'll then get an access key ID and secret access key

Now type in terminal

aws configure

Then enter the access key ID, secret access key, default region, and output format at the prompts
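
The prompts look something like this (the values shown are placeholders; use your own keys and region):

AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: ****************************************
Default region name [None]: ap-south-1
Default output format [None]: json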

And your AWS is successfully configured with your terminal 🎉🎉


Pushing Image on AWS ECR

Create a repository in ECR

Tap Create, and we've got our empty repository

Now we're going to push image into this repository

For pushing, click on “view push commands."

Run all of those in your terminal, and you've successfully pushed the image to AWS ECR 🎉🎉
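
For reference, the push commands ECR gives you follow this general shape (the account ID below is a placeholder; always copy the exact commands from your own repository's "View push commands" page):

aws ecr get-login-password --region ap-south-1 | docker login --username AWS --password-stdin <account-id>.dkr.ecr.ap-south-1.amazonaws.com
docker build -t frontend-app .
docker tag frontend-app:latest <account-id>.dkr.ecr.ap-south-1.amazonaws.com/frontend-app:latest
docker push <account-id>.dkr.ecr.ap-south-1.amazonaws.com/frontend-app:latest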


Now, creating a Dockerfile for the backend

FROM node:16

WORKDIR /app

COPY package*.json ./
RUN npm install

COPY . .

CMD ["node","index.js"]    # it's a Node app, so run index.js directly

This time, create the repository on ECR first, then push the image straight to it.

After building the image from the Dockerfile, push it to ECR by following the steps under "View push commands."

And you have deployed backend image as well on ECR 🎉🎉

Locally, if you want to check whether the backend can reach the database:

docker run -d -p 8080:8080 three-tier-backend:latest     # run the image built
docker ps    # see running processes

Then check the logs of the running container:
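
docker logs <container-id>    # use the container ID from the docker ps output above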

The logs show it's unable to connect to MongoDB, so we need to hook it up to the database.

We'll fix this once we integrate with K8s, using Services, which let pods communicate with each other.


Now creating K8S cluster on AWS EKS

Installing eksctl and kubectl in your terminal

curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version
curl -o kubectl https://amazon-eks.s3.ap-south-1.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl
chmod +x ./kubectl    # make the binary executable
sudo mv ./kubectl /usr/local/bin    # move it into PATH so we can run kubectl without typing ./ every time
kubectl version --short --client    # check the installed kubectl version

Now setting up cluster

eksctl create cluster --name three-tier-cluster --region ap-south-1 --node-type t2.medium --nodes-min 2 --nodes-max 2
aws eks update-kubeconfig --region ap-south-1 --name three-tier-cluster    # bind kubectl to the EKS cluster, so kubectl get nodes actually returns our nodes
kubectl get nodes

Now wait 15-20 minutes, because cluster creation takes about that long on average. 🥲

And our cluster shows up as ready when you check in EKS


Making YAML files for MongoDB

Create a mongo folder to keep its YAML files in

Create secrets.yaml inside the mongo folder to store the MongoDB username and password

apiVersion: v1
kind: Secret
metadata:
  name: mongo-sec
  namespace: workshop
type: Opaque
data:
  username: cm9vdA==    # root
  password: cm9vdA==    # root

For the username and password, we need to encode them with base64 (Secrets store base64-encoded data, not encrypted data) and paste the values here.

So both values here are the base64 encoding of "root".
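
You can generate the values in the terminal; the -n flag matters, because without it echo appends a newline that ends up inside the decoded secret:

echo -n 'root' | base64    # prints cm9vdA==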

Now creating service for MongoDB

// vim service.yaml

apiVersion: v1
kind: Service
metadata:
  name: mongo-svc
  namespace: workshop
spec:
  selector:
    app: mongo
  ports:
    - name: mongo-svc
      protocol: TCP
      port: 27017
      targetPort: 27017

Now, creating deployment for MongoDB

// vim deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo
  namespace: workshop
  labels:
    app: mongo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
        - name: mongo
          image: mongo:latest
          ports:
            - containerPort: 27017
          resources:
            requests:
              memory: "512Mi"
              cpu: "250m"
            limits:
              memory: "1Gi"
              cpu: "500m"
          env:
            - name: MONGO_INITDB_ROOT_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-sec
                  key: username
            - name: MONGO_INITDB_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-sec
                  key: password

Before applying these, we need to create the namespace 'workshop':

kubectl create namespace workshop

Then apply the manifests:

kubectl apply -f secrets.yaml
kubectl apply -f deploy.yaml
kubectl apply -f service.yaml

Now check with "kubectl get pods -n workshop"

Pods are running and mongoDB deployment is created. 🎉


Now, creating Backend Deployment and Backend Service

// vim backend-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
  namespace: workshop    # same namespace as the mongo Secret and Service
  labels:
    app: api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: api
        image: # ECR Image
        ports:
        - containerPort: 8080
        env:
          - name: MONGO_URI
            value: mongodb://mongo-svc:27017/todo?directConnection=true    # points at the mongo-svc Service so the backend can reach the database
          - name: MONGO_USERNAME
            valueFrom:
              secretKeyRef:
                name: mongo-sec
                key: username
          - name: MONGO_PASSWORD
            valueFrom:
              secretKeyRef:
                name: mongo-sec
                key: password

// vim backend-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: api
  namespace: workshop
spec:
  selector:
    app: api
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080

Apply both the Deployment and the Service, as shown below
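
Assuming the filenames used in this section:

kubectl apply -f backend-deployment.yaml
kubectl apply -f backend-service.yaml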


Creating Frontend Deployment and Frontend Service

// vim frontend-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: workshop
  labels:
    app: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: # from ECR
        ports:
        - containerPort: 3000
        env:
          - name: REACT_APP_BACKEND_URL
            value: "http://challenge.cctlds.online/api/tasks"    # cctlds.online is my domain name

// vim frontend-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: frontend-svc
  namespace: workshop
spec:
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
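
Apply both files; again assuming the filenames used in this section:

kubectl apply -f frontend-deployment.yaml
kubectl apply -f frontend-service.yaml
kubectl get pods -n workshop    # every pod should show Running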

All the pods are running 🎉


To make the app publicly accessible, we need an INGRESS CONTROLLER

Our cluster is isolated, so to expose it to the outside world we need an Ingress controller

And to install the Ingress controller, we need to install HELM !!

Set up IAM for the AWS Load Balancer Controller

curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/install/iam_policy.json
aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy --policy-document file://iam_policy.json
eksctl utils associate-iam-oidc-provider --region=ap-south-1 --cluster=three-tier-cluster --approve
eksctl create iamserviceaccount --cluster=three-tier-cluster --namespace=kube-system --name=aws-load-balancer-controller --role-name AmazonEKSLoadBalancerControllerRole --attach-policy-arn=arn:aws:iam::626072240565:policy/AWSLoadBalancerControllerIAMPolicy --approve --region=ap-south-1

Deploy AWS Load Balancer Controller

sudo snap install helm --classic
helm repo add eks https://aws.github.io/eks-charts
helm repo update eks
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=three-tier-cluster --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller
kubectl get deployment -n kube-system aws-load-balancer-controller


For routing, we need an Ingress

// vim full_stack_lb.yaml

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mainlb
  namespace: workshop
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}]'
spec:
  ingressClassName: alb
  rules:
    - host: challenge.cctlds.online
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 8080
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend-svc
                port:
                  number: 3000

To apply this file, type "kubectl apply -f full_stack_lb.yaml."


Issue occurring in Ingress

The address is not showing in Ingress—we need to debug it.

So run

kubectl describe ingress mainlb -n workshop

The role doesn't have the required permissions, so we need to grant them

Give it full access to Elastic Load Balancing (the ElasticLoadBalancingFullAccess policy from the troubleshooting section above) so it can perform its operations without interruption


Now run "kubectl delete -f ." to delete everything so that we can recreate it

Then run "kubectl apply -f ."

And now the address is showing in the Ingress 🎉🎉🎉


Now take this address to Namecheap (where I purchased my domain) and create a subdomain named challenge

We've linked the load balancer URL with challenge.cctlds.online via a CNAME record pointing the subdomain at the ALB's DNS name

  • This load balancer is what gives the public access to the Ingress; it's the ALB that the Load Balancer Controller provisioned from our Ingress resource

The Ingress then routes the traffic to the various services inside the cluster

So we have attached the load balancer's URL to our domain name

So let's access our website to see if it is running or not
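
A quick check from the terminal first (DNS changes can take a few minutes to propagate):

curl -I http://challenge.cctlds.online    # expect an HTTP 200 from the frontend once DNS resolves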

And HURRAYY !! 🎉🎉

Backend is running fine too


After finishing, don't forget to delete the cluster

eksctl delete cluster --name three-tier-cluster --region ap-south-1

Final Thoughts:

Deploying a real-world 3-tier MERN application on AWS EKS with a custom domain was a huge milestone for me. This project taught me how different DevOps tools and cloud services work together—from Kubernetes manifests to Ingress rules and DNS configuration.

If you're learning DevOps, cloud, or Kubernetes, this project is a great way to bring everything together in a practical scenario.

I’ll continue building on this by adding monitoring with Prometheus + Grafana, CI/CD with GitHub Actions, and security practices like network policies and secrets management. Stay tuned! 🔐🚀
