Two-Tier Application Using Docker, Kubernetes, and Helm

Mayur Panchale

A Beginner’s Guide to Deploying a Scalable Two-Tier Application Using Docker, Kubernetes, and Helm.

💡 Introduction

In this blog, we’ll explore an exciting DevOps journey by deploying a two-tier application using Docker, Kubernetes, and Helm on AWS EKS. To keep things simple and reproducible, we’ll set up an Ubuntu-based t2.micro EC2 instance on AWS as our working environment. While you can perform these steps on your own system, for the sake of this blog, we will use an EC2 instance.

We’ll start by creating the EC2 machine and connecting to it via SSH using a key-pair. After setting up Docker on this instance, we’ll retrieve the application source code from a GitHub repository. The app is a basic notes application that uses Flask as the frontend and MySQL as the backend for database storage. We’ll create a Docker image for the project and push it to Docker Hub.

Next, we’ll deploy this application on a Kubernetes cluster (using minikube) and package it with Helm to ensure smooth deployment on an EKS cluster. Finally, we’ll create the EKS cluster on AWS and deploy our packaged application to it, ensuring it’s scalable and robust. Let’s get started!

💡 Pre-requisites

Before jumping into the hands-on demo of the project, it’s important to ensure you have a basic understanding of a few key concepts:

  • Basic understanding of Linux: Since we will be using an Ubuntu-based EC2 instance, knowing basic Linux commands will be helpful.

  • Basic understanding of Flask and MySQL: The application we are deploying uses Flask for the frontend and MySQL for the backend, so familiarity with these technologies is essential.

  • Basic understanding of Docker and Kubernetes: We’ll be working with Docker to containerise our application and Kubernetes (using minikube and EKS) to deploy and manage it in a scalable environment.

  • DockerHub Account: We’ll push our image to Docker Hub and pull it during the Kubernetes deployment, so you need a Docker Hub account.

If you’re comfortable with these concepts, you’re all set to follow along with this deployment project!

💡 Creating an EC2 Instance and Cloning the Code

Let’s kick off the project by setting up our environment on AWS.

Creating an EC2 instance:

  • Head to the AWS EC2 Dashboard and launch a new instance. Select the Ubuntu 20.04 LTS image (or any Ubuntu version you prefer) and choose the t2.micro instance type. Name your instance something like ‘two-tier-flask-app’.

  • In the key-pair section, create a new key-pair or use an existing one. Download it securely, as this will be used to SSH into the instance.

SSH into the instance:

  • Before SSH-ing into the instance, you’ll need to update the permissions for your key-pair file:
chmod 400 <your-keypair-name>.pem
  • Then, use the following command to SSH into your EC2 instance:
ssh -i <your-keypair-name>.pem ubuntu@<your-ec2-public-ip>

Replace <your-keypair-name>.pem with the actual name of your key-pair file and <your-ec2-public-ip> with the public IP of your EC2 instance.

Update and install Docker:

  • Once you’re inside the instance, the first step is to update the system’s packages:
sudo apt update
  • Then, install Docker using the following command:
sudo apt install docker.io -y

This will install Docker on your system, but you’ll need sudo access to run Docker commands by default.

Adding Docker to the user group:

  • To avoid typing sudo every time you use Docker, add Docker to the user group with this command:
sudo usermod -aG docker $USER
  • After adding Docker to the group, restart the instance using:
sudo reboot

This will disconnect you from the instance. Wait for 2–3 minutes, then SSH back into the instance using the same ssh command.

Verify Docker installation:

  • Once reconnected, verify the Docker installation by checking the version:
docker --version

This should show the installed Docker version, confirming it’s set up correctly.

Cloning the application code:

  • Now, we will clone the Flask and MySQL two-tier application from the GitHub repository:
git clone https://github.com/panchalemayur/two-tier-flask-app.git
  • Change into the project directory:
cd two-tier-flask-app

All the files related to the project will be available in this directory, and we’re now ready to move forward with creating the Docker images.

💡 Creating a Docker Image and Pushing it to Docker Hub:

Now that we have cloned the project and navigated into the directory, we can start building our Docker image.

Understanding the Dockerfile:
Inside our project directory, we can see the Dockerfile, which defines the steps required to package the application into a Docker image. Here’s a breakdown of the key parts:

# Use an official Python runtime as the base image
FROM python:3.9-slim

# Set the working directory in the container
WORKDIR /app

# Install required system packages
RUN apt-get update \
    && apt-get upgrade -y \
    && apt-get install -y gcc default-libmysqlclient-dev pkg-config \
    && rm -rf /var/lib/apt/lists/*

# Copy the requirements file into the container
COPY requirements.txt .

# Install app dependencies
RUN pip install mysqlclient
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application code
COPY . .

# Specify the command to run your application
CMD ["python", "app.py"]
  • The FROM python:3.9-slim specifies the base image to use.

  • The WORKDIR /app sets the working directory inside the container.

  • It updates the system and installs required packages like gcc, default-libmysqlclient-dev, etc.

  • The COPY requirements.txt command copies the required dependencies file, and then we install them using pip.

  • Finally, it copies the rest of the application code and runs app.py.

Building the Docker image:
With the Dockerfile in place, we will now create a Docker image named two-tier-app. Use the following command to build the image:

docker build -t panchalemayur/two-tier-flask-app:v1 .

Note: Replace panchalemayur with your own DockerHub username. The -t flag tags the image with the name two-tier-flask-app and the version v1.
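
Before pushing, you can quickly confirm the image was built and tagged as expected by listing your local images for that repository:

# The repository, tag (v1) and size of the freshly built image should be listed
docker images panchalemayur/two-tier-flask-app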

Logging into DockerHub:
Before pushing the Docker image to DockerHub, we need to authenticate our Docker client. To log in to DockerHub from the terminal, use the following command:

docker login

You will be prompted to enter your DockerHub username and password. After providing valid credentials, it should print Login Succeeded.

Pushing the image to DockerHub:
Once logged in, push the newly created image to your DockerHub account:

docker push panchalemayur/two-tier-flask-app:v1

Replace panchalemayur with your DockerHub username. This command uploads the image to your DockerHub repository. Once the push is complete, you can verify the image by visiting your DockerHub account.

That’s it! Your Flask application Docker image is now live on DockerHub, ready to be deployed to Kubernetes.

💡 Testing the Application Locally on Docker:

  • Run Flask Container
    First, we’ll run the Flask application using the following command to map it to port 5000:
docker run -d -p 5000:5000 <your-dockerhub-username>/two-tier-flask-app:v1
  • Update Inbound Security Rules
    In AWS, edit the inbound security rules of your EC2 instance and allow traffic on ports 5000 and 3306 from anywhere (IPv4) to access the app.
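
If you prefer doing this from a terminal and already have the AWS CLI configured (we install it later in the EKS section), the same rules can be added with two commands; the security group ID below is a placeholder you’d replace with your instance’s group:

# Open ports 5000 (Flask) and 3306 (MySQL) to all IPv4 addresses
aws ec2 authorize-security-group-ingress --group-id <your-security-group-id> --protocol tcp --port 5000 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id <your-security-group-id> --protocol tcp --port 3306 --cidr 0.0.0.0/0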

  • Operational Error Page
    After accessing http://<your-instance-public-ip-address>:5000, you'll likely see an Operational Error page. This occurs because the Flask app isn't connected to a MySQL database yet.

  • Run MySQL Container
    To fix this, we need to run a MySQL container:

docker run -d -p 3306:3306 --name mysql -e MYSQL_ROOT_PASSWORD=root mysql:5.7
  • Create a Docker Network
    The two containers can't resolve each other by name on Docker's default bridge network, so stop and remove them, then create a dedicated Docker network for communication:
docker network create twotier

  • Recreate Flask and MySQL Containers in the Same Network
    Use the following commands to create both containers and link them via the network:
docker run -d --name mysql -v mysql-data:/var/lib/mysql -v ./message.sql:/docker-entrypoint-initdb.d/message.sql --network=twotier -e MYSQL_DATABASE=mydb -e MYSQL_USER=admin -e MYSQL_PASSWORD=root -e MYSQL_ROOT_PASSWORD=root -p 3306:3306 mysql:5.7
docker run -d --name flaskapp --network=twotier -e MYSQL_HOST=mysql -e MYSQL_USER=admin -e MYSQL_PASSWORD=root -e MYSQL_DB=mydb -p 5000:5000 panchalemayur/two-tier-flask-app:v1
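
To confirm that both containers actually joined the new network, you can inspect it; both mysql and flaskapp should be listed under the Containers section of the output:

# Show the twotier network and the containers attached to it
docker network inspect twotier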

  • Create the messages Table
    If you still see an error like "Unknown server host 'mysql'", double-check that both containers were recreated on the twotier network. Once the connection works, the app still needs a messages table in the database, which we'll create manually.

  • To fix:

  • Exec into the MySQL container:

docker exec -it <mysql-container-id> bash
  • Log in to MySQL:
mysql -u root -p
  • Enter the password (root, as set by MYSQL_ROOT_PASSWORD) and select the database:
use mydb;
  • Create the messages table:
CREATE TABLE messages (
    id INT AUTO_INCREMENT PRIMARY KEY,
    message TEXT
);
  • Test the Application
    Now, if you refresh the page, the application should be up and running successfully!

  • Create a message in the app UI and then verify it in the database:

  • While still inside the MySQL container, run the following query to print the stored messages:
SELECT * FROM messages;
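
As an optional shortcut, the same query can be run from the host in a single command, assuming the container name and root password used above:

# Run the SELECT inside the mysql container without opening an interactive shell
docker exec -it mysql mysql -uroot -proot -e "SELECT * FROM messages;" mydb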

💡 Testing Our Application on Minikube:

  1. Install Docker Compose
    Our application runs fine with plain Docker, but before moving it to Kubernetes we'll use Docker Compose to manage both containers together and ensure MySQL starts (and is initialised) before the Flask app.

  2. First, install Docker Compose on your EC2 instance or local machine using:

sudo apt install docker-compose -y
  • Docker Compose File Explanation
    Here’s the docker-compose.yaml file we'll use to manage our containers:
version: '3'
services:
  backend:
    build:
      context: .
    ports:
      - "5000:5000"
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: admin
      MYSQL_PASSWORD: admin
      MYSQL_DB: myDb
    depends_on:
      - mysql
  mysql:
    image: mysql:5.7
    ports:
      - "3306:3306"
    environment:
      MYSQL_ROOT_PASSWORD: root
      MYSQL_DATABASE: myDb
      MYSQL_USER: admin
      MYSQL_PASSWORD: admin
    volumes:
      - ./message.sql:/docker-entrypoint-initdb.d/message.sql  # Mount SQL script
      - mysql-data:/var/lib/mysql  # Persistent MySQL data storage
volumes:
  mysql-data:

Explanation:

  • Flask (backend): The Flask app is configured to start after the MySQL container using the depends_on keyword. It connects to MySQL via environment variables.

  • MySQL: The MySQL container uses an official image and initializes the database with the SQL script provided. Volumes are mounted to persist MySQL data and ensure table creation.

Stop the Running Containers:
Before proceeding, stop the running MySQL and Flask containers using:

docker kill mysql flaskapp
  • Run Docker Compose
    Now, start both containers using Docker Compose:
docker-compose up

This ensures MySQL will be initialized before the Flask app starts, avoiding any “no data” errors.
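
If you prefer to start the stack in the background with docker-compose up -d, you can verify both services with:

docker-compose ps                # both services should show State "Up"
docker-compose logs -f backend   # follow the Flask app logs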

Install Minikube:
To set up Kubernetes on your local machine, we will use Minikube. To install it:

curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64  
sudo install minikube-linux-amd64 /usr/local/bin/minikube && rm minikube-linux-amd64
  • Start Minikube
    After installation, create a Minikube cluster using the following command. Since Minikube consumes a lot of resources, I recommend running it on a system with higher specifications than a t2.micro (for example, your local machine):
minikube start

  • Deploy the Application on Minikube
    After Minikube is fully configured, navigate to the eks-manifests directory in your project and apply the Kubernetes manifests:
cd eks-manifests
kubectl apply -f two-tier-deployment.yaml  
kubectl apply -f two-tier-service.yaml  
kubectl apply -f mysql-configmap.yaml  
kubectl apply -f mysql-secret.yaml

Wait 2–3 minutes for the pods to be created.

Get Pod IP Address:
Once the pods are created, get the IP address of your running pod:

kubectl get pods -o wide

  • Access the Application Inside Minikube
    To test the Flask application running inside Minikube, SSH into the Minikube environment:
minikube ssh
  • Then run:
curl http://<Pod-Ip-address>:5000

You should see the Flask app running successfully within your Minikube cluster!

💡 Using Helm Charts to Package the Application:

  • Install Helm First, we need to install Helm, the package manager for Kubernetes. Run the following commands to install it:
curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null
 sudo apt-get install apt-transport-https --yes
 echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
 sudo apt-get update
 sudo apt-get install helm
  • Create Helm Chart for MySQL After Helm is installed, we will create a Helm chart for MySQL:
helm create mysql-chart

This command will create a directory named mysql-chart with a default configuration (usually for Nginx). We'll modify this to suit our MySQL service by doing the following:

  • In values.yaml, change the repository from nginx to mysql:
image:
    repository: mysql
    tag: latest
  • Change the port number to 3306 under the service block.

  • Remove the liveness and readiness probes from the templates/deployment.yaml file.

  • Add environment variables to the deployment’s container configuration by updating the env block in templates/deployment.yaml:

env:
    - name: MYSQL_ROOT_PASSWORD
      value: {{ .Values.env.mysqlrootpw }}
    - name: MYSQL_DATABASE
      value: {{ .Values.env.mysqldb }}
    - name: MYSQL_USER
      value: {{ .Values.env.mysqluser }}
    - name: MYSQL_PASSWORD
      value: {{ .Values.env.mysqlpass }}

Add these corresponding environment variable values to the values.yaml file:

env:
    mysqlrootpw: admin
    mysqldb: mydb
    mysqluser: admin
    mysqlpass: admin

Best Practice: Instead of hardcoding environment variables in the template files, we place them in the values.yaml file. This way, you can easily modify them without touching the deployment configuration.

  • Package and Install the MySQL Chart After making the necessary changes, package the MySQL chart:
helm package mysql-chart

This will create a .tgz file. Now, install the chart into your Minikube cluster:

helm install mysql-chart ./mysql-chart

Use the following command to check the status of the MySQL deployment:

kubectl get all

This will show the MySQL pod, service, and deployment created by the Helm chart.
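
Because the credentials live in values.yaml rather than in the templates, you can also change them without editing the chart at all. A small sketch using Helm's --set flag with the env keys defined above (the new password is just a placeholder):

# Override a single value and roll the release forward
helm upgrade mysql-chart ./mysql-chart --set env.mysqlpass=<a-stronger-password>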

  • Create Helm Chart for Flask App Next, we’ll create a Helm chart for the Flask app:
helm create flask-app-chart
  • Modify the values.yaml file as follows:

  • Set the Docker repository to the Flask app image:

image:
  repository: panchalemayur/two-tier-flask-app  # Replace panchalemayur with your DockerHub username
  tag: v1
  • Change the service type to NodePort and update the port settings:
service:     
  type: NodePort     
  port: 80
  targetPort: 5000     
  nodePort: 30007

  • Add environment variables under the image block in values.yaml. Replace mysqlhost with the ClusterIP of your MySQL service, and make sure mysqlpw matches the MySQL user's password set in the MySQL chart:
env:
  mysqlhost: "10.110.233.87"  # ClusterIP of the MySQL service in your cluster
  mysqlpw: "admin"            # must match mysqlpass in the MySQL chart
  mysqluser: "admin"
  mysqldb: "mydb"
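
Rather than copying the ClusterIP by hand from kubectl get all, you can query it directly. This assumes the MySQL service created by the chart is named mysql-chart; check the exact name with kubectl get svc:

# Print only the ClusterIP of the MySQL service
kubectl get svc mysql-chart -o jsonpath='{.spec.clusterIP}'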

  • In templates/deployment.yaml, add the environment variables to the container configuration:
env:
  - name: MYSQL_HOST       
    value: {{ .Values.env.mysqlhost }}     
  - name: MYSQL_PASSWORD       
    value: {{ .Values.env.mysqlpw }}     
  - name: MYSQL_USER       
    value: {{ .Values.env.mysqluser }}     
  - name: MYSQL_DB       
    value: {{ .Values.env.mysqldb }}

  • Update service.yaml to use the NodePort:
ports:
  - port: {{ .Values.service.port }}       
    targetPort: {{ .Values.service.targetPort }}       
    nodePort: {{ .Values.service.nodePort }}

  • Template and Package the Flask Chart
    Check the configuration of your chart using the helm template command:
helm template flask-app-chart
  • Once you’re satisfied with the configuration, package the chart:
helm package flask-app-chart
  • Then install the Flask chart:
helm install flask-app-chart ./flask-app-chart

  • Run the following command to verify the deployment:
kubectl get all

  • You should see the Flask app pods, service, and deployment running.
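
You can also confirm the NodePort mapping and let Minikube hand you a reachable URL. This assumes the service created by the chart is named flask-app-chart:

# Show the service and its 80:30007 NodePort mapping
kubectl get svc flask-app-chart
# Print the externally reachable URL for that service
minikube service flask-app-chart --url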

  • Access the Application and Create a Table
    To access the Flask app, first SSH into the Minikube node:

minikube ssh
  • Get the container ID of the MySQL container using docker ps, then exec into it:
docker exec -it <container-id> bash
  • Inside the MySQL container, log in to MySQL (enter the password set in the MySQL chart's values.yaml, which is admin in our example):
mysql -u admin -p
  • Use the following commands to create the messages table:
USE mydb;
CREATE TABLE messages (
    id INT AUTO_INCREMENT PRIMARY KEY,
    message TEXT
);
  • Exit the MySQL container and, from inside the Minikube node, curl the Flask app through the NodePort we configured (30007):
curl http://<minikube-node-ip>:30007
  • You should see the HTML response from the Flask app confirming it’s connected to the MySQL database.

  • Uninstall Helm Charts
    To clean up, uninstall the Helm charts with the following command:

helm uninstall mysql-chart flask-app-chart

💡 EKS Deployment of the Application

To deploy the application on Amazon EKS, we first need to install the AWS CLI on our system. Follow these steps to get started:

  • Install AWS CLI:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"  
sudo apt install unzip -y
unzip awscliv2.zip
sudo ./aws/install
  • Configure AWS CLI Credentials:

  • Once AWS CLI is installed, configure your AWS credentials:

aws configure
  • You will be prompted to input your AWS Access Key and Secret Access Key. If you don’t have these credentials, you need to create them under IAM:

  • In the AWS Management Console, navigate to IAM and create a user with Administrator access.

  • Under Security credentials, create an Access Key and Secret Access Key for this user.

  • During the aws configure command, paste these keys, set the Default region name to us-east-1, and leave the Default output format at its default (just press Enter).
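
To confirm the credentials were picked up correctly, ask AWS who you are; the account ID and the ARN of the IAM user you just created should be printed:

aws sts get-caller-identity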

  • Install kubectl and eksctl:

  • Next, install kubectl (Kubernetes command-line tool) and eksctl (EKS management tool) with the following commands:

  • kubectl:

curl -o kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.19.6/2021-01-05/bin/linux/amd64/kubectl   
chmod +x ./kubectl   
sudo mv ./kubectl /usr/local/bin   
kubectl version --short --client
  • eksctl:
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin   
eksctl version

  • Create an EKS Cluster:

    To create an EKS cluster, use the following command:

eksctl create cluster --name mycluster --region us-east-1 --node-type t3.small --nodes-min 2 --nodes-max 3

This process will take around 15–20 minutes. Once complete, update the kubeconfig to allow kubectl to interact with your EKS cluster:

aws eks update-kubeconfig --region us-east-1 --name mycluster
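
At this point kubectl should be pointing at the new EKS cluster; a quick sanity check is to list the worker nodes, which should show the two t3.small nodes in the Ready state:

kubectl get nodes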
  • Deploy the Application:

After setting up the cluster, apply the YAML manifests for the MySQL and two-tier app deployments:

kubectl apply -f mysql-secrets.yml -f mysql-configmap.yml -f mysql-deployment.yml -f mysql-svc.yml
kubectl apply -f two-tier-app-deployment.yml -f two-tier-app-svc.yml
  • Access the Application:

To access your deployed application, retrieve the external IP address of the service:

kubectl get svc

  • Find the External IP of two-tier-app-service and paste it into your browser. The application should now be up and running.
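
On EKS the external address is usually the DNS name of an AWS load balancer. Assuming the service is of type LoadBalancer and named two-tier-app-service as above, you can extract just that hostname:

# Print only the load balancer hostname for the service
kubectl get svc two-tier-app-service -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'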

  • Clean Up Resources:

Once you’ve finished the project, make sure to delete the EKS cluster to avoid additional AWS charges:

eksctl delete cluster --name mycluster --region us-east-1

💡 Conclusion

In this comprehensive guide, we walked through deploying a two-tier application on Amazon EKS using Kubernetes and Helm. We started by configuring AWS CLI and setting up kubectl and eksctl to manage the EKS cluster. With the cluster ready, we packaged our MySQL and Flask applications using Helm charts and deployed them seamlessly into the Kubernetes environment.

This hands-on project illustrated the process of creating an EKS cluster, deploying Docker images of our services, and managing configurations through Kubernetes manifests. By employing Helm, we simplified the deployment process and ensured the application’s configuration was adaptable and scalable for future updates.

Finally, we highlighted the importance of responsibly managing cloud resources by deleting the EKS cluster post-deployment to avoid unnecessary costs. This blog showcased a full-cycle deployment process, from setting up AWS resources to deploying a production-ready application on a managed Kubernetes cluster.

Through this project, you’ve gained valuable insight into cloud-native deployments with AWS, Kubernetes, Docker, and Helm.

#Docker #FlaskApp #DevOps #DockerBuild #Containerization #TwoTierArchitecture #Python #MySQL #DockerScout #ImageSecurity #Microservices #FullStackDev #SoftwareEngineering
