How I deployed a WebSocket application on AWS EKS.

Harbinder Singh
10 min read

Final Project Repo: https://github.com/Harbinder04/tic-tac-toe-websocket-app

Introduction

In this article, we’ll explore how to deploy a WebSocket application on AWS EKS. I’ll walk you through my learning journey—from knowing almost nothing about EKS to actually getting an app up and running.

I’m assuming you’re already familiar with Docker and have a basic understanding of Kubernetes. If not, no worries—we’ll cover the important parts along the way.

So grab your coffee, H2O, or whatever keeps you going, and let’s dive into what I’ve learned. While this blog is a personal learning log, I’m sure beginners will find it helpful too.

Understanding the Basics

What is AWS EKS?

If you are into k8s, you may have heard of two ways to run a cluster:

  1. Self-managed Kubernetes cluster

  2. Cloud-managed Kubernetes cluster

AWS EKS (Elastic Kubernetes Service) falls into the second category: a cloud-managed Kubernetes solution. AWS abstracts away the complexity of operating the Kubernetes control plane, so you can build and deploy your applications faster and more securely without much manual effort spent managing the cluster itself.

AWS provides two main ways to use EKS:

  • EKS Standard Mode - You manage the worker nodes yourself (typically EC2 instances). A good fit when you want more control over the underlying infrastructure.

  • EKS Auto Mode - You don’t manage any nodes; AWS handles it for you. It’s easy to use and has less friction for beginners.

For this project, I’m using the Standard Mode.

Image of a standard EKS cluster, taken from the official AWS docs.

Docker and Kubernetes: (skip to the next section if you are already familiar)

Docker:

Okay, so a little refresher on Docker. Docker provides the ability to package and run an application in a loosely isolated environment called a container. Containers are created from an image, where the image is a read-only template with instructions for creating a Docker container.

🧠 Analogy to understand Docker:

🧳 Docker is like a suitcase (container) for your app

  • Imagine you’re going on a trip.

  • You pack everything you need (clothes, charger) in a suitcase.

  • Wherever you go — hotel, friend's place, airport — you open the suitcase and use your stuff, without worrying about what's available there.


🔄 Now relate it to Docker:

  • Your app = your clothes.

  • Your app’s dependencies, config, runtime = charger, toothpaste, shoes.

  • The Docker container = the suitcase.

  • The host/server = the hotel/airport/friend’s place.

No matter where you run your container (Linux, Windows, cloud), it works the same, because it has everything packed inside.


Kubernetes:

To be honest, it’s very hard to describe Kubernetes in just one blog. But at a high level, Kubernetes (K8s) is used to manage containerized applications.

Without Kubernetes, you would have to manually manage the containers that run your apps, making sure they stay up and running. For example, if a container crashes, you'd need to restart it yourself. Wouldn’t it be easier if a system did that automatically?

That’s exactly what Kubernetes does — it helps you run your services in a resilient and self-healing way.

In this blog, we’ll look at 3 key components in Kubernetes that are especially important when deploying an application:

  1. Pod: A Pod is the smallest deployable unit in Kubernetes. Think of it as a wrapper around your container; we usually run one application (one Docker image) per Pod. Each Pod has its own IP address, which changes when the Pod is re-created. In simple terms, a Pod is the place where your app lives and runs, usually by pulling the Docker image from a container registry like Docker Hub or Amazon ECR.

  2. Service: Kubernetes Pods are created and destroyed to match the desired state of your cluster, and every re-created Pod gets a new IP. This creates a big problem: how do we reach our application reliably if the IP keeps changing? To solve it, we use a Service (svc). A Service gives us a stable IP address and acts as a stable front door to the Pods behind it; the Service and Pod lifecycles are independent of each other. For our purposes, Services come in two flavors:

    i) Internal Service - Only accessible from inside the cluster (e.g., the ClusterIP type).

    ii) External Service - Reachable from outside the cluster; we use it when we want to route external traffic directly to our application.

  3. Ingress: Ingress routes incoming traffic to the right Service based on URL paths. We use an Ingress instead of exposing each Service directly because it gives us a single entry point, load balancing through an ingress controller (such as ALB or NGINX), and better security, as sketched below.
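To make this concrete, here is a minimal sketch of a Service and an Ingress working together. The names and ports are placeholders chosen to match this project's conventions (b-svc is the backend service name used later in this post), not the repo's actual manifests:

# A ClusterIP Service giving the backend Pods a stable in-cluster address.
apiVersion: v1
kind: Service
metadata:
  name: b-svc
spec:
  selector:
    app: backend        # must match the Pod labels
  ports:
    - port: 8080
      targetPort: 8080
---
# An Ingress routing requests under /api to that Service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: game-ingress
spec:
  rules:
    - http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: b-svc
                port:
                  number: 8080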

Here is a very nice image to clarify the flow from the official k8s docs:


WebSocket Application Overview

We’re building a simple Tic-Tac-Toe game with a Node.js backend and a React frontend. The core idea is to allow two players to play the game in real-time, so for that, we need a constant connection between the client and the server.

But before we dive into the implementation, let’s take a quick look at “What is a WebSocket?”:

WebSocket is a full-duplex, bi-directional communication protocol that works over a single, long-lived TCP connection, which is initially established using an HTTP handshake. Unlike traditional HTTP, where the client has to repeatedly ask the server for updates (polling), WebSocket allows the server to push data to the client in real-time.
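As a quick illustration of what this looks like from the browser's side, here is roughly all the client needs to do (the URL is a placeholder, and the message shape is an assumption for illustration):

// Open a single long-lived connection; the initial HTTP request is
// upgraded to the WebSocket protocol by the server.
const ws = new WebSocket("wss://mygame.mydomain.tech/api");

// Send a message once the handshake completes.
ws.onopen = () => ws.send(JSON.stringify({ type: "create_game" }));

// The server can push updates at any time - no polling required.
ws.onmessage = (event) => {
  const message = JSON.parse(event.data);
  console.log("server pushed:", message);
};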

What are we doing in the tic-tac-toe game?

In our project, the backend exposes two main routes:

  • /health – A simple endpoint to check if the backend is running (used for health checks in Kubernetes).

  • /api – This is where the WebSocket connection is established.

Once the connection is open, we handle different types of client actions using a switch-case-based message handler. Here's what it looks like at a high level:

  • handleCreateGame – Called when a player starts a new game.

  • handleJoinGame – Triggered when another player joins the game.

  • handleMove – Called every time a player makes a move.

  • handleExitGame – When someone leaves or exits the game.

Each of these actions is handled on the server, and updates are pushed in real-time to the players connected to that game room.
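Here is a minimal sketch of such a switch-case handler using the popular ws library for Node.js. The message shape, handler signatures, and port are assumptions for illustration, not the repo's exact code:

import { WebSocketServer, WebSocket } from "ws";

// Assumed message shape; the actual protocol in the repo may differ.
type ClientMessage = { type: string; payload?: unknown };

const wss = new WebSocketServer({ port: 8080, path: "/api" });

wss.on("connection", (socket: WebSocket) => {
  socket.on("message", (raw) => {
    const message: ClientMessage = JSON.parse(raw.toString());

    // Dispatch each client action to its handler.
    switch (message.type) {
      case "create_game":
        handleCreateGame(socket, message.payload);
        break;
      case "join_game":
        handleJoinGame(socket, message.payload);
        break;
      case "move":
        handleMove(socket, message.payload);
        break;
      case "exit_game":
        handleExitGame(socket, message.payload);
        break;
    }
  });
});

// Stubs standing in for the real game logic.
function handleCreateGame(socket: WebSocket, payload: unknown) { /* ... */ }
function handleJoinGame(socket: WebSocket, payload: unknown) { /* ... */ }
function handleMove(socket: WebSocket, payload: unknown) { /* ... */ }
function handleExitGame(socket: WebSocket, payload: unknown) { /* ... */ }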


🐳Dockerizing the Backend and Frontend

The backend Dockerfile is simple and clean since we’re not doing anything fancy like caching layers or environment-based optimizations. It’s already a small image.

FROM node:22-alpine
WORKDIR /app
COPY package-lock.json package.json ./
RUN npm install
COPY . .
EXPOSE 8080
CMD ["npm", "start"]
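For reference, building and tagging the backend image looks something like this; the Dockerfile path mirrors the frontend command shown further down and is an assumption about the repo layout:

docker build -f Dockerfiles/Dockerfile.backendApp -t username/tic-tac-toe-backend:v1 ./backend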

For the frontend, we’re using a two-stage build to produce a smaller and optimized image. This also helps us inject the backend URL at build time so the app knows where to open a WebSocket connection.

FROM node:22-alpine AS build
WORKDIR /app
COPY package.json package-lock.json nginx.conf ./
RUN npm install
COPY . .
# Accept build argument
ARG VITE_BACKEND_URL
ENV VITE_BACKEND_URL=$VITE_BACKEND_URL

RUN npm run build

FROM nginx:alpine
RUN rm -rf /usr/share/nginx/html/*
COPY --from=build /app/dist /usr/share/nginx/html
COPY --from=build /app/nginx.conf /etc/nginx/conf.d/default.conf

EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

⚠️ Important: Since the backend URL is injected at build time, make sure to pass it like this when building:

docker build --build-arg VITE_BACKEND_URL=wss://mygame.mydomain.tech/api -f Dockerfiles/Dockerfile.frontendApp -t username/tic-tac-toe-frontend:v1 ./frontend

Then push these images to Docker Hub (or whichever registry you use), as shown below.
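Assuming the tags used above, that amounts to:

docker login
docker push username/tic-tac-toe-frontend:v1
docker push username/tic-tac-toe-backend:v1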


Steps to Set Up an EKS Cluster

Before deploying our app, we need a working EKS cluster. Let’s go through the setup process step by step.

👤 1. Create an IAM User with Permissions

First, log in to your AWS account and create a new IAM user.

💡 Tip: In a production environment, you'd assign fine-grained permissions (using custom IAM policies). But for learning purposes, we’ll keep it simple and give the user AdministratorAccess.

Once the user is created:

  • Generate an Access Key ID and Secret Access Key.

  • Keep them safe or download the .csv file; you’ll need them to authenticate your AWS CLI, as shown below.
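With the keys in hand, run aws configure; it prompts for the credentials and default region interactively:

aws configure
# AWS Access Key ID [None]: <your-access-key-id>
# AWS Secret Access Key [None]: <your-secret-access-key>
# Default region name [None]: ap-south-1
# Default output format [None]: json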

🛠️ 2. Install Required Tools

We’ll be setting up EKS using the CLI, which gives you better control and can be automated later.

Make sure the following tools are installed on your system:

  • AWS CLI – To interact with AWS services from the terminal
    Install guide

  • kubectl – The Kubernetes command-line tool to interact with the cluster
    Install guide

  • eksctl – A CLI tool to create and manage EKS clusters easily
    Install guide

Now let’s begin creating and deploying our app on EKS.

🛞 3. Command to create a cluster

eksctl create cluster \
  --name game-cluster \
  --version 1.33 \
  --region ap-south-1 \
  --nodegroup-name linux-nodes \
  --node-type t2.small \
  --nodes 2 \
  --with-oidc

Note: This command will auto-configure the kubeconfig file.
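Cluster creation can take a while (often 15-20 minutes). Once it finishes, a quick sanity check that kubectl can reach the cluster:

kubectl get nodes
# If the kubeconfig wasn't set up automatically, configure it with:
aws eks update-kubeconfig --name game-cluster --region ap-south-1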

4. Create an IAM service account (iamserviceaccount)

Use eksctl create iamserviceaccount to bind an IAM role to a Kubernetes service account (this relies on the OIDC provider we enabled with --with-oidc).

eksctl create iamserviceaccount \
  --cluster=game-cluster \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --attach-policy-arn=arn:aws:iam::<AWS_ACCOUNT_ID>:policy/AWSLoadBalancerControllerIAMPolicy \
  --override-existing-serviceaccounts \
  --region ap-south-1 \
  --approve
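Note that --attach-policy-arn assumes the AWSLoadBalancerControllerIAMPolicy already exists in your account. If it doesn't, you can create it from the policy document published in the AWS Load Balancer Controller docs (the version in the URL changes over time):

curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.7.2/docs/install/iam_policy.json
aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam_policy.json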

5. Now we need to install the AWS Load Balancer Controller

Steps to install the AWS Load Balancer Controller
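If you follow the Helm route from those docs, the install typically looks like this; note that clusterName must match the cluster we created earlier:

helm repo add eks https://aws.github.io/eks-charts
helm repo update
helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  -n kube-system \
  --set clusterName=game-cluster \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller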

6. Now we have to get a TLS/SSL certificate from AWS Certificate Manager

Navigate to the AWS Certificate Manager.

  1. Click Request.

  2. Select Request a public certificate, then click Next.

  3. Type your domain name in the Domain name section and keep all the default settings.

  4. Add a CNAME record at your DNS provider, mapping the record name to the validation value provided by AWS. It will take a few minutes for the certificate to become ready to use.

  5. Add your certificate ARN to the ingress.yaml file:

       annotations:
         kubernetes.io/ingress.class: alb
         alb.ingress.kubernetes.io/scheme: internet-facing
         alb.ingress.kubernetes.io/target-type: ip
         ## add your certificate here
         alb.ingress.kubernetes.io/certificate-arn: {{ .Values.ingress.certificateArn }}
         alb.ingress.kubernetes.io/healthcheck-path: /health
         alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
         alb.ingress.kubernetes.io/ssl-redirect: '443'
         alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=86400
    

👍Now our cluster is ready.

7. Now Start Deploying Your Application on EKS

  1. Start by applying the frontend and backend deployment manifests from the GitHub repo linked above:
kubectl apply -f frontend-deployment.yml
kubectl apply -f backend-deployment.yml

These create Pods and ReplicaSets for your application.
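For orientation, a trimmed-down backend Deployment might look like the sketch below; the image, labels, and replica count are illustrative, not the exact manifest from the repo:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend
          image: username/tic-tac-toe-backend:v1
          ports:
            - containerPort: 8080
          # Lets Kubernetes restart the container if /health stops answering.
          livenessProbe:
            httpGet:
              path: /health
              port: 8080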


  2. To make sure everything’s running correctly, you can port-forward a Pod locally.
kubectl port-forward pod/<pod-name> <local-port>:<pod-port>

💡 Use kubectl get pods to find the exact Pod name.

This allows you to test your backend/frontend locally before exposing it externally.
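For example, with the backend Pod forwarded to local port 8080, the health endpoint should respond (the Pod name is a placeholder):

kubectl port-forward pod/<backend-pod-name> 8080:8080
curl http://localhost:8080/health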


  3. Next, apply your service.yml, which defines the backend service (b-svc) and frontend service (f-svc):

     kubectl apply -f service.yml
    

    This step ensures that your Pods are accessible via stable service IPs inside the cluster.


  4. Now that your services are running, expose them to the internet using an Ingress.

     kubectl apply -f ingress.yml
    

    This creates the necessary rules to route traffic to your services via the ALB Ingress Controller.


  5. If you own a domain (e.g., mygame.harbinder.tech), create a CNAME record in your DNS provider’s dashboard pointing to the ALB hostname you got in the previous step. You can get it by running the following command.

     kubectl get ingress -o wide
    

🛠️ Troubleshooting Tips

If your application isn't accessible or the ALB (Application Load Balancer) shows unhealthy targets, here's what you can do to debug the issue:


🔍 1. Check Load Balancer Health Status

  • Go to EC2 → Load Balancers in the AWS Console.

  • Find the ALB created by your Ingress.

  • Under the "Targets" tab, check the health status of your backend and frontend services.

If the targets show as unhealthy, there’s likely a problem with your health check configuration.


🧾 2. Make Sure Your Health Check Path is Correct

For the frontend, AWS ALB expects the health check path to return a 200 OK status.

If you're using NGINX to serve your frontend, make sure your nginx.conf includes a proper location block for the health check.

For example:

location /health {
    return 200 'OK';
    add_header Content-Type text/plain;
}

Also, double-check that your NGINX config is being correctly copied into the final image during the Docker build process.


Final Flow of the application

At last, don’t forget to delete the cluster to eliminate the risk of an unwanted bill.
Steps to delete a cluster
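With eksctl, that's a single command; it also tears down the node group and the CloudFormation stacks eksctl created:

eksctl delete cluster --name game-cluster --region ap-south-1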

🏁 Conclusion

Deploying a real-time WebSocket application on AWS EKS might sound intimidating at first, but once you break it down step by step, it's totally manageable.

In this project, I:

  • Containerized both the frontend and backend with Docker

  • Built a production-ready frontend image using multi-stage builds

  • Set up an EKS cluster using eksctl

  • Deployed everything using Kubernetes manifests

  • Routed traffic securely and efficiently using an ALB Ingress Controller

This blog was my attempt to document everything I learned. If you're new to Kubernetes or AWS, I hope this helped make things a little less scary and a lot more exciting. 🙌

Feel free to connect or ask questions—I’m still learning and happy to help!
