Deploying the Classic 2048 Game on AWS Elastic Kubernetes Service (EKS)

Deepak
7 min read

In today's cloud-native world, containerized applications have become the standard for deploying scalable and resilient services. AWS Elastic Kubernetes Service (EKS) significantly simplifies Kubernetes cluster management by turning the traditionally complex and tedious process of self-managing Kubernetes into a streamlined, managed service. Combined with AWS Fargate, EKS offers a powerful serverless container orchestration platform that eliminates the need to manage infrastructure directly. In this blog post, I'll walk you through how I deployed the classic 2048 game on AWS EKS using Fargate, complete with a load balancer for accessibility.

Why EKS with Fargate?

Before diving into the technical details, let's understand why this combination makes sense:

  • Fargate is a serverless compute engine that allows you to focus on building applications without managing servers

  • Unlike traditional EC2-based Kubernetes deployments, Fargate eliminates the need to manually configure compute and memory resources

  • You don't need to provision, patch, or scale worker nodes, and each pod runs in its own isolated compute environment

  • The pay-per-use model ensures cost efficiency for applications with variable workloads

Prerequisites

To follow along with this tutorial, you'll need:

  • AWS CLI installed and configured

  • kubectl installed

  • eksctl installed

  • Helm installed (for the AWS Load Balancer Controller)

  • An AWS account with IAM permissions to create EKS and related resources

Install AWS CLI

Command line tool for working with AWS services

Link - https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt install unzip
unzip awscliv2.zip
sudo ./aws/install
aws --version

Install and Setup kubectl

Command line tool for working with Kubernetes clusters.

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
kubectl version --client

Install and Setup eksctl

Command line tool for working with EKS clusters.

curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp && sudo mv /tmp/eksctl /usr/local/bin
eksctl version

Install and Setup helm

Helm is a package manager for Kubernetes.

Link - https://helm.sh/docs/intro/install/

curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
sudo apt-get install apt-transport-https --yes
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

Setting up your AWS IAM

  • Go to the IAM (Identity and Access Management) service in the AWS Management Console.

  • You can create a user and give permissions for the IAM user by attaching policies directly.

  • In this walkthrough, however, I am using root user access keys: click your account name in the top-right corner, select Security Credentials, and create an access key

  • Store these access keys securely, as they will be used to authenticate API requests made to AWS services.

Note: For production environments, it's recommended to use IAM roles with least privilege rather than root credentials.

Setting Up Your AWS Environment

Configuring AWS CLI

First, let's set up our AWS CLI configuration:

aws configure

You'll be prompted to enter:

  • AWS Access Key ID: your access key

  • AWS Secret Access Key: your secret access key

  • Default region name (e.g., us-east-1)

  • Default output format (e.g., json)
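
To confirm the credentials work before going further, you can ask AWS which identity the CLI is using (an optional sanity check, not part of the original steps):

# Should print the account ID and ARN of the configured user
aws sts get-caller-identity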

Creating the EKS Cluster Using Fargate

Fargate is a serverless compute engine that lets you focus on building applications without managing servers. We could instead back the EKS cluster with EC2 instances, but that would require us to configure and manage the compute and memory resources ourselves, maintain the worker nodes, and handle isolation between applications, among other tasks.

With our credentials configured, we can now create an EKS cluster using Fargate:

eksctl create cluster --name cluster-2048-game --region us-east-1 --fargate

This command creates a new EKS cluster named "cluster-2048-game" in the us-east-1 region, configured to use Fargate for compute resources. The process typically takes 10-15 minutes as AWS provisions the necessary resources.
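
While the cluster is being provisioned, or once eksctl finishes, you can check its status from the CLI (an optional check, not part of the original steps):

# List clusters eksctl knows about in this region
eksctl get cluster --region us-east-1

# Or query the cluster status directly; it should eventually report ACTIVE
aws eks describe-cluster --name cluster-2048-game --query "cluster.status" --output text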

Configuring kubectl for EKS

Once the cluster is created, we need to configure kubectl to communicate with it:

aws eks update-kubeconfig --name cluster-2048-game --region us-east-1
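
A quick way to confirm kubectl is now talking to the new cluster (optional):

# The current context should point at cluster-2048-game
kubectl config current-context

# System pods (such as CoreDNS) should be listed
kubectl get pods -A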

Creating a Fargate Profile

Next, we need to create a Fargate profile to specify which pods should run on Fargate:

eksctl create fargateprofile \
    --cluster cluster-2048-game \
    --region us-east-1 \
    --name alb-sample-app \
    --namespace game-2048

This creates a Fargate profile named "alb-sample-app" in the "game-2048" namespace, where our application will run.
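
You can verify that the profile exists with eksctl (optional):

# List the Fargate profiles attached to the cluster
eksctl get fargateprofile --cluster cluster-2048-game --region us-east-1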

Deploying the 2048 Game

With our infrastructure set up, we can deploy the 2048 game using a pre-defined Kubernetes manifest:

kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.5.4/docs/examples/2048/2048_full.yaml

This manifest creates the necessary deployment, service, and ingress resources in the game-2048 namespace.

Let's verify that our resources were created successfully:

kubectl get pods -n game-2048
kubectl get svc -n game-2048
kubectl get ingress -n game-2048

Setting Up the AWS Load Balancer Controller

Looking at the service, notice that it does not have an external IP, so users outside the cluster cannot access our application. That is why the manifest also defines an ingress, which will ultimately expose the 2048 game to users.

Kubernetes Ingress

Kubernetes Ingress is an API object that controls external access to services inside a Kubernetes cluster, usually through HTTPS/HTTP. It sets up routing rules to manage how traffic gets to different services in the cluster.

Analyzing our ingress resource above, we see that we have a (*) for the hosts, meaning anyone can access it. We have a port specified, but no address. This is because our ingress resource needs an ingress controller to be deployed. The controller reads the ingress resource, creates and configures the load balancer, and then assigns an IP address.
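
For reference, the ingress defined in the sample manifest looks roughly like the following (an abridged sketch; consult the manifest itself for the exact definition). The annotations tell the AWS Load Balancer Controller to create an internet-facing ALB that routes traffic directly to the pods:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-2048
  namespace: game-2048
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing   # public-facing ALB
    alb.ingress.kubernetes.io/target-type: ip           # target pod IPs directly (required on Fargate)
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-2048
                port:
                  number: 80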

Configuring IAM OIDC Provider

Before setting up an ingress controller, we need to configure the IAM OIDC provider. The ingress controller (ALB controller) needs to access the Application Load Balancer. Remember, a controller is just a K8S pod that needs to connect with AWS resources. This requires the right IAM permissions, which is why we use a connector (OIDC).
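
If you want to see the issuer the provider will be created for, or check whether one is already associated, the following commands help (optional):

# Print the cluster's OIDC issuer URL
aws eks describe-cluster --name cluster-2048-game --query "cluster.identity.oidc.issuer" --output text

# List the OIDC providers already registered in IAM
aws iam list-open-id-connect-providers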

First, we need to associate an IAM OIDC provider with our cluster:

eksctl utils associate-iam-oidc-provider --cluster cluster-2048-game --region us-east-1 --approve

Creating IAM Policy for the Load Balancer Controller

Our ALB controller pod needs access to AWS services such as the ALB APIs in order to create and manage load balancers. We can grant it the necessary permissions by creating an IAM policy from the JSON file provided in the official ALB controller documentation.

We'll download and apply the necessary IAM policy:

curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.11.0/docs/install/iam_policy.json
aws iam create-policy \
    --policy-name AWSLoadBalancerControllerIAMPolicy \
    --policy-document file://iam_policy.json

Creating IAM Service Account

Create the IAM role and associated Kubernetes service account. Remember to modify the AWS account ID and cluster name.

eksctl create iamserviceaccount \
  --cluster=<your-cluster-name> \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --role-name AmazonEKSLoadBalancerControllerRole \
  --attach-policy-arn=arn:aws:iam::<your-aws-account-id>:policy/AWSLoadBalancerControllerIAMPolicy \
  --approve

Remember to replace <your-aws-account-id> with your actual AWS account ID and <your-cluster-name> with your cluster name (cluster-2048-game in this walkthrough).
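
To confirm that the service account was created and is linked to the new role, you can inspect it with kubectl (optional check):

# The service account should carry an eks.amazonaws.com/role-arn annotation pointing at the role
kubectl get serviceaccount aws-load-balancer-controller -n kube-system -o yaml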

Installing the Load Balancer Controller with Helm

With the permissions set up, we can install the AWS Load Balancer Controller using Helm:

# Add the EKS chart repository
helm repo add eks https://aws.github.io/eks-charts

# Update the repository
helm repo update eks

# Install the controller
helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system \
  --set clusterName=<your-cluster-name> \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --set region=us-east-1 \
  --set vpcId=<your-vpc-id>

The VPC ID can be found in the Networking tab of your EKS cluster in the AWS Management Console.
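
If you prefer the CLI, the VPC ID can also be retrieved directly (optional):

# Print the VPC ID used by the cluster
aws eks describe-cluster --name cluster-2048-game --query "cluster.resourcesVpcConfig.vpcId" --output text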

Verifying the Controller Deployment

Let's verify that the controller is running correctly:

kubectl get deployment -n kube-system aws-load-balancer-controller
kubectl get pods -n kube-system -w

You should see two replicas of the controller running in the kube-system namespace.
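
If the controller pods are not coming up as expected, their logs are usually the first place to look; permission errors related to the IAM policy typically show up here (optional troubleshooting step):

# Tail recent logs from the controller deployment
kubectl logs -n kube-system deployment/aws-load-balancer-controller --tail=50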

We can also confirm in the AWS console (EC2 > Load Balancers) that the load balancer has been created.

Accessing the 2048 Game

Now that everything is set up, the AWS Load Balancer Controller should have provisioned an Application Load Balancer based on our ingress resource. We can find the address of this load balancer by checking the ingress resource:

kubectl get ingress -n game-2048

The output should include an ADDRESS field with a DNS name. Open this address in your web browser, and you should see the 2048 game running!
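
If you prefer the command line, you can pull the load balancer's DNS name out of the ingress status and probe it with curl. This assumes the ingress is named ingress-2048, as in the sample manifest; note that the ALB can take a few minutes to provision, so the address may be empty at first:

# Grab the ALB hostname from the ingress and check that it responds
ALB=$(kubectl get ingress ingress-2048 -n game-2048 -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -I "http://$ALB"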

We have successfully deployed the classic 2048 game on AWS EKS and exposed it to the internet through an Application Load Balancer.

Conclusion

In this blog post, we've walked through the process of deploying the 2048 game on AWS EKS using Fargate. This setup demonstrates a modern, cloud-native approach to application deployment that can be applied to more complex applications in production environments. The serverless nature of Fargate means you only pay for the resources you use, making it a cost-effective solution for applications with variable workloads.

Deletion Steps

eksctl delete cluster --name cluster-2048-game --region us-east-1

  • Deleting everything can take a while, so be patient here.

  • To avoid unexpected charges, check the following services and delete anything left over from this walkthrough (an example of removing the controller's IAM policy follows this list):

  • VPC

  • CloudWatch logs

  • EKS

  • ECR

  • security groups

  • IAM Access Keys
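
If you created the AWSLoadBalancerControllerIAMPolicy earlier and no longer need it, it can be removed as well; substitute your AWS account ID, and note that the policy must be detached from any remaining roles first:

# Delete the IAM policy created for the load balancer controller
aws iam delete-policy --policy-arn arn:aws:iam::<your-aws-account-id>:policy/AWSLoadBalancerControllerIAMPolicy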

