Email Subject Generator - Multi-Tier Application Deployment


This project is a Python-based Email Subject Generator Application developed using the Flask framework. It follows a three-tier architecture, consisting of a front-end, back-end API, and database, each running as separate services to ensure modularity and scalability.
The tool enables users to input any scenario or context that requires an email. Based on the provided information, the application intelligently generates a suitable and impactful subject line for the email, enhancing communication clarity and professionalism.
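To make the feature's input/output shape concrete, here is an illustrative, rule-based fallback for turning a user's context into a subject line. This is only a sketch: the actual application delegates subject generation to an LLM API (see the `GROQ_API_KEY` setting later in this guide), and the function name here is hypothetical.

```python
# Illustrative only: a trivial rule-based fallback for building a subject
# line from a user-supplied context. The real app calls an LLM API instead;
# this sketch just demonstrates the input/output shape of the feature.
def fallback_subject(context: str, max_words: int = 8) -> str:
    """Build a short, title-cased subject from the first few context words."""
    words = context.strip().split()
    subject = " ".join(words[:max_words]).rstrip(".,;:")
    return subject.title() if subject else "Quick Update"

print(fallback_subject("meeting tomorrow about the budget review"))
```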
To streamline development, testing, and deployment, we implemented separate CI and CD pipelines using Jenkins. The source code is hosted on Bitbucket, and every push to the repository triggers an automated CI/CD workflow that:
Runs code quality tests using SonarQube to ensure clean, maintainable, and secure code.
Builds and tests the Flask application using Jenkins pipelines.
Packages the app into a Docker image for consistent deployment across environments.
Pushes the generated Docker image to Docker Hub.
Optionally deploys the application to a target server or cloud platform via the CD pipeline.
Bitbucket Repository Link: https://sachindumalshan@bitbucket.org/sachindu-work-space/email-subject-generator.git
✅ Step 1: Prerequisites and Environment Setup
Do the following on the server:
Install Docker
# Install Docker using the official convenience script
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
# Verify Docker installation:
docker run hello-world
Install Docker Compose
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
# Set the permission to execute Docker Compose:
sudo chmod +x /usr/local/bin/docker-compose
# Check Docker Compose version:
docker-compose --version
Native Jenkins Installation
# Update package lists and upgrade existing packages
sudo apt update && sudo apt upgrade -y
# Install OpenJDK 17 (required for Jenkins)
sudo apt install openjdk-17-jdk -y
# Verify Java installation
java -version
Download and add the Jenkins GPG key. This command adds Jenkins' official GPG key to your system's keyring so that APT trusts Jenkins packages, then registers the Jenkins APT repository.
# Download and add the Jenkins GPG key to the system keyrings
curl -fsSL https://pkg.jenkins.io/debian-stable/jenkins.io-2023.key | sudo tee /usr/share/keyrings/jenkins-keyring.asc > /dev/null
# Add the Jenkins APT repository, signed with the key above
echo "deb [signed-by=/usr/share/keyrings/jenkins-keyring.asc] https://pkg.jenkins.io/debian-stable binary/" | sudo tee /etc/apt/sources.list.d/jenkins.list > /dev/null
Update package index and install Jenkins
# Update package index after adding Jenkins repository key
sudo apt update
# Install Jenkins
sudo apt install jenkins -y
# Check if port 8080 is already in use (Jenkins default port; netstat requires the net-tools package)
sudo netstat -tlnp | grep :8080
# Check current firewall status
sudo ufw status
# Allow incoming connections on Jenkins port (8080)
sudo ufw allow 8080
# Start Jenkins service
sudo systemctl start jenkins
# Enable Jenkins to start on boot
sudo systemctl enable jenkins
Access Jenkins from Another Computer
- Open the URL below in your browser. Jenkins will display its startup screen and prompt you for the administrator password. Retrieve the password with the command shown below, then enter it to log into Jenkins.
# Use 'ifconfig' to get the server IP address
# Ex: http://192.168.8.129:8080/
http://<server-ip>:8080
💡 To get the Jenkins initial admin password, enter the following command on the Jenkins server:
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
If you skip user configuration during setup:
Username: admin
Password: Use the output of the command above (e.g. a5bba1b0c60d4edaadf420882fb12060)
Add Jenkins User to Docker Group and Verify Access
# Add the Jenkins user to the Docker group to allow it to run Docker commands
sudo usermod -aG docker jenkins
# Restart Jenkins to apply the new group permissions
sudo systemctl restart jenkins
# Verify that the Jenkins user can run Docker commands without permission errors
sudo -u jenkins docker ps
✅ Step 2: Prepare and Develop the Application
The project folder structure is shown below.
email-subject-generator/
├── docker-compose.yml
├── .env
├── .gitignore
├── README.md
├── sonar-project.properties
│
├── frontend/
│   ├── Dockerfile
│   ├── requirements.txt
│   ├── app.py
│   ├── test_app.py
│   ├── templates/
│   │   └── index.html
│   └── static/
│       └── style.css
│
├── backend/
│   ├── Dockerfile
│   ├── requirements.txt
│   ├── app.py
│   └── test_app.py
│
└── database/
    └── init.sql
📌 1. Create and Test the Frontend
Navigate to the frontend/ folder containing:
Dockerfile
requirements.txt
app.py
test_app.py
templates/index.html
static/style.css
All the coding files are available in the repository.
Install all dependencies:
pip install -r requirements.txt
To test locally, run:
python3 app.py
✔️ When running successfully, it will be available at:
# Paste the URL in the browser
http://0.0.0.0:5000
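The local run can be smoke-tested from another terminal without any extra packages. The helper below is an illustrative sketch (the function name is mine, not part of the repository); it assumes the Flask dev server started by `python3 app.py` is listening on port 5000.

```python
# Quick smoke test for the locally running frontend (stdlib only).
from urllib.error import URLError
from urllib.request import urlopen

def is_up(url: str = "http://localhost:5000", timeout: float = 2.0) -> bool:
    """Return True if the URL answers with HTTP 200, False otherwise."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (URLError, OSError, ValueError):
        # Connection refused, timeout, or malformed URL all count as "down"
        return False
```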
📌 2. Create the .env File
email-subject-generator/
└── .env
The .env file stores environment-specific configuration such as database credentials, API keys, ports, and other sensitive values required by the application.
Why use a .env file?
It keeps configuration separate from code, so settings can change without modifying source files. This improves security and flexibility across environments (development, testing, production).
What is usually stored in a .env file?
Database host, user, password, and database name
Secret keys and tokens (e.g., API keys)
Application-specific settings like ports or debug flags
Why is the .env file usually excluded from remote repositories?
Because it contains sensitive data that should not be publicly exposed or shared, it is common practice to add .env to .gitignore. This prevents accidental leaks of credentials or secrets and keeps your application secure.
Note:
For learning purposes, a sample .env file has been included in the repository to demonstrate its structure and contents. In real projects, avoid committing your actual .env files with sensitive data to public repositories.
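For illustration, here is a minimal sketch of how simple `KEY=VALUE` lines in a .env file can be loaded with the standard library alone. Many projects use the `python-dotenv` package instead; this sketch (function name and simplifications are mine) ignores quoting and multi-line values.

```python
# Minimal .env loader sketch. Assumes plain KEY=VALUE lines; comments and
# blank lines are skipped. Real projects often use python-dotenv instead.
import os

def load_env(path: str = ".env") -> dict:
    """Parse KEY=VALUE lines and export them to os.environ (without overriding)."""
    values = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#") or "=" not in line:
                continue
            key, _, value = line.partition("=")
            values[key.strip()] = value.strip()
            os.environ.setdefault(key.strip(), value.strip())
    return values
```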
📌 3. Create and Test the Backend and Database
Navigate to the backend/ and database/ folders:
backend/
Dockerfile
requirements.txt
app.py
test_app.py
database/
- init.sql
All the coding files are available in the repository.
Install all dependencies:
pip install -r requirements.txt
To test locally, run:
python3 app.py
The backend will run on: http://0.0.0.0:5001
✅ Check if the server is listening
Run:
netstat -tuln | grep 5001
✔️ If you see output showing your Python/Flask process listening on port 5001, it confirms the server is running.
✅ Use curl to test endpoints
Run:
curl http://localhost:5001/api/health
✔️ You will see output similar to:
{
  "service": "email-subject-generator-backend",
  "status": "healthy",
  "timestamp": "2025-07-12T12:11:46.423913"
}
✔️ This confirms the API is working and returning the expected response.
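As a sketch of what sits behind that response, the handler's body could be built like this. The field names mirror the sample JSON above; the function name is illustrative (the real handler lives in backend/app.py in the repository).

```python
# Hypothetical sketch of the logic behind the /api/health response shown
# above. Field names mirror the sample JSON; the real handler is in
# backend/app.py.
import json
from datetime import datetime, timezone

def health_payload(service: str = "email-subject-generator-backend") -> dict:
    """Build the health-check body returned by the backend."""
    return {
        "service": service,
        "status": "healthy",
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

print(json.dumps(health_payload(), indent=2))
```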
✅ Step 3: Create Dockerfiles and Docker Compose Configuration
💡 Note:
As per industry best practices, a Dockerfile is not needed for the database. Database containers can be configured directly in Docker Compose.
1. Dockerfile β Frontend
# Use the official Python 3.10.12 image with Alpine Linux (lightweight)
FROM python:3.10.12-alpine
# Set the working directory inside the container to /app
WORKDIR /app
# Copy only requirements.txt first to leverage Docker cache for faster builds when dependencies don't change
COPY requirements.txt .
# Install Python dependencies specified in requirements.txt
RUN pip install -r requirements.txt
# Copy the entire application code into the container's /app directory
COPY . .
# Expose port 5000 so it can be accessed externally if mapped
EXPOSE 5000
# Specify the default command to run the Flask app when the container starts
CMD ["python3", "app.py"]
Explanation: The EXPOSE instruction documents the port the container listens on; it is good practice, but it does not itself publish the port (that is done with -p at run time).
If the build fails to pull the base image, log in to Docker Hub:
docker login
✔️ Run the frontend container:
sudo docker run -d -p 5000:5000 --name email-gen-frontend emailgen-app:v1
2. Dockerfile β Backend
# Use the official Python 3.10.12 image with Alpine Linux (lightweight)
FROM python:3.10.12-alpine
# Set the working directory inside the container to /app
WORKDIR /app
# Copy only requirements.txt first to leverage Docker cache for faster builds when dependencies don't change
COPY requirements.txt .
# Install Python dependencies specified in requirements.txt
RUN pip install -r requirements.txt
# Copy the entire application code into the container's /app directory
COPY . .
# Expose port 5001 so it can be accessed externally if mapped
EXPOSE 5001
# Specify the default command to run the Flask app when the container starts
CMD ["python3", "app.py"]
✔️ Run the backend container:
sudo docker run -d -p 5001:5001 --name emailgen-be emailgen-backend:v1
Check running containers:
docker ps
# Or view all containers including stopped ones:
docker ps -a
# Example output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
3979a0bdfe82 emailgen-backend:v1 "python3 app.py" 6 seconds ago Up 6 seconds 0.0.0.0:5001->5001/tcp, :::5001->5001/tcp emailgen-be
a771f531d2f9 emailgen-app:v1 "python3 app.py" 14 minutes ago Up 14 minutes 0.0.0.0:5000->5000/tcp, :::5000->5000/tcp email-gen-frontend
3. Create and Run docker-compose.yml
version: '3.9'

services:
  frontend:
    # Use build for development, or set FRONTEND_IMAGE and IMAGE_TAG env vars for deployment
    build: ./frontend
    # image: ${FRONTEND_IMAGE:-emailgen-frontend}:${IMAGE_TAG:-latest}
    container_name: emailgen-frontend
    ports:
      - "5000:5000"
    depends_on:
      - backend
    environment:
      - BACKEND_URL=http://backend:5001
    networks:
      - emailgen-net
    restart: unless-stopped

  backend:
    # Use build for development, or set BACKEND_IMAGE and IMAGE_TAG env vars for deployment
    build: ./backend
    # image: ${BACKEND_IMAGE:-emailgen-backend}:${IMAGE_TAG:-latest}
    container_name: emailgen-backend
    ports:
      - "5001:5001"
    depends_on:
      - db
    environment:
      - MYSQL_HOST=db
      - MYSQL_PORT=3306
      - MYSQL_DATABASE=email_generator
      - MYSQL_USER=root
      - MYSQL_PASSWORD=rootpassword123
      # Read the API key from the .env file; never commit a real key
      - GROQ_API_KEY=${GROQ_API_KEY}
    networks:
      - emailgen-net
    restart: unless-stopped

  db:
    image: mysql:8.0
    container_name: emailgen-db
    restart: unless-stopped
    environment:
      - MYSQL_ROOT_PASSWORD=rootpassword123
      - MYSQL_DATABASE=email_generator
    volumes:
      - ./database/init.sql:/docker-entrypoint-initdb.d/init.sql
      - mysql_data:/var/lib/mysql
    ports:
      - "3306:3306"
    networks:
      - emailgen-net
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-prootpassword123"]
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  mysql_data:

networks:
  emailgen-net:
    driver: bridge
After creating your docker-compose.yml file, you can run all services together with a single command.
Build and Run in detached mode:
To run in the background (detached mode), use:
docker-compose up -d --build
Check running containers
docker ps
You should see output similar to:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
abcd1234efgh emailgen-backend "python3 app.py" 5 seconds ago Up 5 seconds 0.0.0.0:5001->5001/tcp emailgen-backend
ijkl5678mnop emailgen-frontend "python3 app.py" 5 seconds ago Up 5 seconds 0.0.0.0:5000->5000/tcp emailgen-frontend
qrst9012uvwx mysql:8.0 "docker-entrypoi…" 5 seconds ago Up 5 seconds 33060/tcp, 0.0.0.0:3306->3306/tcp emailgen-db
Here you can see frontend, backend, and database containers running with their respective ports.
Test your application
Open your browser and navigate to:
Frontend: http://localhost:5000
Backend API: http://localhost:5001
Confirm that your frontend connects with the backend and data saves/retrieves from the database successfully.
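Inside the Compose network, the frontend can reach the backend through the BACKEND_URL variable injected in docker-compose.yml. The helper below is an illustrative sketch (the function name is mine) of how that variable can be turned into endpoint URLs, falling back to localhost for local runs.

```python
# Sketch: build backend endpoint URLs from the BACKEND_URL variable that
# docker-compose injects into the frontend container. Falls back to the
# local dev address when the variable is not set.
import os

def api_url(path: str, default: str = "http://localhost:5001") -> str:
    """Join BACKEND_URL (or a local default) with an API path."""
    base = os.environ.get("BACKEND_URL", default).rstrip("/")
    return f"{base}/{path.lstrip('/')}"
```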
When done testing:
docker-compose down
This stops and removes the containers, networks, and default resources created by docker-compose up.
✅ Step 4: Push Code to the Bitbucket Repository
# Initialize Git in your local project folder
git init
# Create .gitignore to exclude the virtual environment and .env for security and a clean repo
echo "venv/" >> .gitignore
echo ".env" >> .gitignore
# Track all changes and commit
git add .
git commit -m "Initial commit"
# Connect local repo to the Bitbucket remote (initial setup)
git remote add origin https://sachindumalshan@bitbucket.org/sachindu-work-space/email-subject-generator.git
# Push code to the main branch on Bitbucket
git branch -M main
git push -u origin main
⚠️ Bitbucket Push Error: Instead of your Bitbucket account password, use an app password.
Create one from:
https://bitbucket.org/account/settings/app-passwords/ → generate an app password with repository read/write permissions.
✅ Step 5: Set Up Jenkins
Go to: Manage Jenkins > Manage Plugins
Search for and install the plugins as needed. Essential plugins include:
Pipeline
Git plugin
Bitbucket Branch Source plugin
SonarQube Scanner plugin
Docker Pipeline plugin
Parameterized Trigger plugin
Pipeline Stage View Plugin
Ensure these plugins are installed and up-to-date before running the pipelines.
Configure Build Tools
1. JDK configure
# Name
JDK-17
# Set the JDK path as:
/usr/lib/jvm/java-17-openjdk-amd64
# To check the JDK installation path
sudo update-alternatives --config java
If not found, Update and install OpenJDK 17:
sudo apt update
sudo apt install openjdk-17-jdk -y
2. Git configure
# Name
Default
# Set the Git path as:
/usr/bin/git
# To check git path
which git
3. Docker configure
# Name
Docker
# Set the Docker path as:
/usr/bin/docker
# To check the Docker path
which docker
Install and Configure SonarQube (Code Quality Analysis)
To run SonarQube analysis locally, you'll need to install both:
SonarQube Server: The main server that processes and stores analysis results
SonarQube Scanner: The client tool that analyzes your code and sends results to the server
Below is the installation guide for each.
Why use SonarQube?
Ensures code quality, security, and maintainability through automated static code analysis
Detects bugs, vulnerabilities, code smells, and duplications early in the CI/CD pipeline
Provides a dashboard for project code health with historical trends and actionable insights
1. Install SonarQube Server
# 1. Update system packages
sudo apt update
# 2. Install Java 17 (required for SonarQube)
sudo apt install -y openjdk-17-jdk
# 3. Verify Java installation
java -version
# 4. Create SonarQube user (recommended for security)
sudo useradd -r -s /bin/false sonarqube
# 5. Download SonarQube Community Edition
cd /opt
sudo wget https://binaries.sonarsource.com/Distribution/sonarqube/sonarqube-10.4.1.88267.zip
# 6. Extract SonarQube
sudo unzip sonarqube-10.4.1.88267.zip
sudo mv sonarqube-10.4.1.88267 sonarqube
sudo chown -R sonarqube:sonarqube /opt/sonarqube
# 7. Configure SonarQube (optional - edit if needed)
sudo nano /opt/sonarqube/conf/sonar.properties
# 8. Create systemd service file
sudo tee /etc/systemd/system/sonarqube.service > /dev/null <<EOF
[Unit]
Description=SonarQube service
After=syslog.target network.target
[Service]
Type=forking
ExecStart=/opt/sonarqube/bin/linux-x86-64/sonar.sh start
ExecStop=/opt/sonarqube/bin/linux-x86-64/sonar.sh stop
User=sonarqube
Group=sonarqube
Restart=always
LimitNOFILE=65536
LimitNPROC=4096
[Install]
WantedBy=multi-user.target
EOF
# 9. Reload systemd and enable SonarQube service
sudo systemctl daemon-reload
sudo systemctl enable sonarqube
sudo systemctl start sonarqube
# 10. Check SonarQube service status
sudo systemctl status sonarqube
# 11. Check if SonarQube is running (may take a few minutes to start)
echo "SonarQube is starting... Please wait 5-10 minutes."
echo "Access SonarQube at: http://localhost:9000"
echo "Default credentials: admin / admin"
2. Check SonarQube Status
# Check SonarQube service status
sudo systemctl status sonarqube
# Check running processes
ps -ef | grep sonarqube
# Check logs for startup or error details
sudo tail -f /opt/sonarqube/logs/sonar.log
3. Troubleshooting Common SonarQube Issues
If SonarQube is not working, run the script below to diagnose and resolve:
#!/bin/bash
# ===========================================
# Common SonarQube Issues and Fixes
# ===========================================
echo "=== Fix 1: If SonarQube is taking too long to start ==="
echo "Wait 5-10 minutes for first startup"
echo "Monitor logs: sudo tail -f /opt/sonarqube/logs/sonar.log"
echo "=== Fix 2: If there are memory issues ==="
free -h
sudo nano /opt/sonarqube/conf/sonar.properties
cat << 'EOF'
# Reduce memory usage
sonar.web.javaOpts=-Xms512m -Xmx1g
sonar.ce.javaOpts=-Xms512m -Xmx1g
sonar.search.javaOpts=-Xms512m -Xmx1g
EOF
echo "=== Fix 3: If Elasticsearch won't start ==="
sysctl vm.max_map_count
sudo sysctl -w vm.max_map_count=262144
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
echo "=== Fix 4: If there are permission issues ==="
sudo chown -R sonarqube:sonarqube /opt/sonarqube
id sonarqube
echo "=== Fix 5: If port 9000 is in use ==="
sudo lsof -i :9000
sudo nano /opt/sonarqube/conf/sonar.properties
# Change sonar.web.port=9001 if needed
echo "=== Fix 6: Restart SonarQube service ==="
sudo systemctl restart sonarqube
sudo systemctl status sonarqube
echo "=== Fix 7: Manual start if still not working ==="
sudo systemctl stop sonarqube
sudo -u sonarqube /opt/sonarqube/bin/linux-x86-64/sonar.sh start
sudo -u sonarqube /opt/sonarqube/bin/linux-x86-64/sonar.sh console
4. Install SonarQube Scanner
The scanner analyzes your code and sends results to the SonarQube server.
#!/bin/bash
# ===========================================
# Install SonarQube Scanner on Linux
# ===========================================
# 1. Download SonarQube Scanner
cd /opt
sudo wget https://binaries.sonarsource.com/Distribution/sonar-scanner-cli/sonar-scanner-cli-5.0.1.3006-linux.zip
# 2. Extract Scanner
sudo unzip sonar-scanner-cli-5.0.1.3006-linux.zip
sudo mv sonar-scanner-5.0.1.3006-linux sonar-scanner
sudo chown -R $(whoami):$(whoami) /opt/sonar-scanner
# 3. Add Scanner to PATH
echo 'export PATH=$PATH:/opt/sonar-scanner/bin' >> ~/.bashrc
source ~/.bashrc
# 4. Verify installation
sonar-scanner --version
# 5. Configure Scanner (optional)
sudo nano /opt/sonar-scanner/conf/sonar-scanner.properties
# Enter the URL in the browser and setup the SonarQube using dashboard
http://localhost:9000
# Username: admin
# Password: admin
🔐 Logging into SonarQube Web Dashboard
Open your browser and go to:
http://localhost:9000
You will be prompted for login credentials.
Default username: admin
Default password: admin
Important: after first login, immediately change the password to something secure.
Next, you will be prompted to create a new project:
Enter a project name (e.g., Email Subject Generator)
Enter a project key (a unique identifier, e.g., email-subject-gen)
Choose whether the project will be connected as a local or remote project, depending on your setup.
After configuration, you will be taken to the SonarQube project dashboard, where you can view metrics, code quality reports, and other insights.
Configure System
To integrate SonarQube scanning within your Jenkins pipeline, follow these steps to properly add SonarQube server details in Jenkins:
Open the Jenkins dashboard and navigate to:
Manage Jenkins → Configure System
Scroll down to the SonarQube servers section.
Click Add SonarQube.
Enter the following details:
Name: SonarQube (or any recognizable name)
Server URL: http://<your-sonarqube-server-ip>:9000 (e.g., http://localhost:9000)
Server authentication token: generate a token in SonarQube under My Account → Security → Generate Tokens, then paste it here to authenticate Jenkins with SonarQube.
Save the configuration.
Now, Jenkins can communicate with SonarQube to run code analysis as part of your CI pipeline.
Manage Credentials
Jenkins needs credentials to securely access services like Bitbucket, Docker Hub, and SonarQube. You add these credentials in Jenkins under Manage Jenkins > Manage Credentials.
Bitbucket: Use your username and app password or token to allow Jenkins to clone and push code.
Docker Hub: Provide your Docker username and access token so Jenkins can build and push images.
SonarQube: Use a secret token from your SonarQube account for Jenkins to run code analysis.
Always store credentials securely in Jenkins and avoid hardcoding them in your code or pipeline scripts.
✅ Step 6: Set Up the CI Pipeline
Prerequisites Checklist
Jenkins is installed and running
Bitbucket repository is accessible and configured with Jenkins credentials
Docker is installed and Jenkins user added to docker group
SonarQube server is running and integrated with Jenkins (SonarQube plugin configured)
Python and required build tools installed on Jenkins node (optional if you use Docker build)
Script for CI pipeline
pipeline {
agent any
// Add trigger configuration
triggers {
// Poll SCM every 2 minutes as fallback
pollSCM('H/2 * * * *')
// Bitbucket webhook trigger
bitbucketPush()
}
environment {
VENV = 'venv'
DOCKER_HUB_USER = 'YOUR-DOCKER-HUB-USERNAME'
IMAGE_TAG = "${BUILD_NUMBER}"
FRONTEND_IMAGE = "${DOCKER_HUB_USER}/email-subject-generator-frontend"
BACKEND_IMAGE = "${DOCKER_HUB_USER}/email-subject-generator-backend"
DOCKER_HUB_CREDENTIALS = 'docker-hub-credentials'
// SonarQube configuration
SONAR_HOST_URL = 'http://localhost:9000'
SONAR_PROJECT_KEY = 'email-subject-generator'
SONAR_PROJECT_NAME = 'Email Subject Generator'
}
stages {
stage('Clone Repository') {
steps {
script {
// Clone the repository
checkout([
$class: 'GitSCM',
branches: [[name: '*/main']],
userRemoteConfigs: [[
url: 'https://sachindumalshan@bitbucket.org/sachindu-work-space/email-subject-generator.git',
credentialsId: 'bitbucket-creds'
]],
extensions: [
[$class: 'CleanBeforeCheckout'],
[$class: 'PruneStaleBranch']
]
])
// Set git commit hash after cloning
env.GIT_COMMIT_SHORT = sh(script: "git rev-parse --short HEAD", returnStdout: true).trim()
env.GIT_COMMIT_MESSAGE = sh(script: "git log -1 --pretty=%B", returnStdout: true).trim()
env.GIT_AUTHOR = sh(script: "git log -1 --pretty=%an", returnStdout: true).trim()
echo "Git commit: ${env.GIT_COMMIT_SHORT}"
echo "Commit message: ${env.GIT_COMMIT_MESSAGE}"
echo "Author: ${env.GIT_AUTHOR}"
}
}
}
stage('Set up Python Environment') {
steps {
sh '''
# Clean up any existing venv
rm -rf "$VENV"
# Install python3-venv if not available
if ! python3 -m venv --help > /dev/null 2>&1; then
echo "Installing python3-venv package..."
sudo apt update
sudo apt install -y python3-venv python3-pip
fi
# Create fresh virtual environment
python3 -m venv "$VENV"
. "$VENV/bin/activate"
# Upgrade pip
pip install --upgrade pip
# Install dependencies with error handling
if [ -f backend/requirements.txt ]; then
echo "Installing backend dependencies..."
pip install -r backend/requirements.txt
else
echo "backend/requirements.txt not found"
exit 1
fi
# Check if frontend has requirements.txt (optional)
if [ -f frontend/requirements.txt ]; then
echo "Installing frontend dependencies..."
pip install -r frontend/requirements.txt
else
echo "frontend/requirements.txt not found, skipping frontend Python dependencies"
fi
# Install coverage for Python code coverage
pip install coverage pytest pytest-cov
# Verify installation
pip list
'''
}
}
stage('Run Tests with Coverage') {
steps {
script {
try {
sh '''
. "$VENV/bin/activate"
# Set timeout for tests
timeout 300 bash -c '
# Create coverage reports directory
mkdir -p coverage-reports
# Run frontend tests with coverage
if [ -f frontend/app.py ]; then
echo "Running frontend tests with coverage..."
cd frontend
coverage run --source=. -m pytest --junitxml=../coverage-reports/frontend-junit.xml . || python app.py --test
coverage xml -o ../coverage-reports/frontend-coverage.xml
coverage report
cd ..
else
echo "Frontend app.py not found, skipping frontend tests"
fi
# Run backend tests with coverage
if [ -f backend/app.py ]; then
echo "Running backend tests with coverage..."
cd backend
coverage run --source=. -m pytest --junitxml=../coverage-reports/backend-junit.xml . || python app.py --test
coverage xml -o ../coverage-reports/backend-coverage.xml
coverage report
cd ..
else
echo "Backend app.py not found, skipping backend tests"
fi
'
'''
} catch (Exception e) {
echo "Tests failed: ${e.getMessage()}"
currentBuild.result = 'UNSTABLE'
}
}
}
}
stage('SonarQube Analysis') {
steps {
script {
try {
if (!fileExists('sonar-project.properties')) {
echo "sonar-project.properties file not found. Creating default configuration..."
// Create sonar-project.properties file
writeFile file: 'sonar-project.properties', text: """sonar.projectKey=${env.SONAR_PROJECT_KEY}
sonar.projectName=${env.SONAR_PROJECT_NAME}
sonar.projectVersion=${env.BUILD_NUMBER}
sonar.sources=frontend,backend
sonar.sourceEncoding=UTF-8
sonar.python.coverage.reportPaths=coverage-reports/frontend-coverage.xml,coverage-reports/backend-coverage.xml
sonar.python.xunit.reportPath=coverage-reports/frontend-junit.xml,coverage-reports/backend-junit.xml
sonar.exclusions=**/*.pyc,**/__pycache__/**,**/venv/**,**/node_modules/**,**/*.log,**/database/**,**/static/**,**/templates/**
sonar.tests=frontend,backend
sonar.test.inclusions=**/test_*.py,**/*test*.py
sonar.scm.provider=git
sonar.scm.revision=${env.GIT_COMMIT_SHORT}"""
}else{
echo "sonar-project.properties file found. Using existing configuration..."
}
// Run SonarQube analysis
withSonarQubeEnv('SonarQube') {
sh '''
echo "Running SonarQube analysis..."
echo "Project Key: $SONAR_PROJECT_KEY"
echo "SonarQube Host: $SONAR_HOST_URL"
# Use sonar-scanner with project file
sonar-scanner
'''
}
} catch (Exception e) {
echo "SonarQube analysis failed: ${e.getMessage()}"
currentBuild.result = 'UNSTABLE'
}
}
}
}
stage('Lightweight Code Quality') {
steps {
script {
try {
sh '''
. "$VENV/bin/activate"
echo "Running lightweight code quality checks..."
# Quick Python syntax check
echo "Checking Python syntax..."
python -m py_compile backend/app.py || echo "Backend syntax issues found"
python -m py_compile frontend/app.py || echo "Frontend syntax issues found"
# Quick linting with basic rules
echo "Running basic linting..."
pip install flake8 || true
flake8 --select=E9,F63,F7,F82 backend/ frontend/ || echo "Basic linting issues found"
# Check for common security issues
echo "Basic security check..."
grep -r "password.*=" backend/ frontend/ || echo "No hardcoded passwords found"
echo "Lightweight quality checks completed!"
'''
} catch (Exception e) {
echo "Lightweight quality checks failed: ${e.getMessage()}"
echo "Continuing anyway..."
}
// Always continue - never fail the pipeline
echo "SonarQube analysis running in background at: ${env.SONAR_HOST_URL}/dashboard?id=${env.SONAR_PROJECT_KEY}"
}
}
}
stage('Build Docker Images') {
steps {
script {
// Build frontend image
if (fileExists('frontend/Dockerfile')) {
sh '''
echo "Building frontend Docker image..."
echo "Dockerfile content:"
cat frontend/Dockerfile
echo "---"
# Force rebuild without cache to ensure fresh build
docker build --no-cache -t "$FRONTEND_IMAGE:$IMAGE_TAG" -t "$FRONTEND_IMAGE:latest" ./frontend
'''
} else {
echo "Frontend Dockerfile not found, skipping frontend build"
}
// Build backend image
if (fileExists('backend/Dockerfile')) {
sh '''
echo "Building backend Docker image..."
echo "Dockerfile content:"
cat backend/Dockerfile
echo "---"
# Force rebuild without cache to ensure fresh build
docker build --no-cache -t "$BACKEND_IMAGE:$IMAGE_TAG" -t "$BACKEND_IMAGE:latest" ./backend
'''
} else {
echo "Backend Dockerfile not found, skipping backend build"
}
}
}
}
stage('Test Docker Images') {
steps {
script {
try {
// Test frontend image
if (sh(script: "docker images -q \"\$FRONTEND_IMAGE:\$IMAGE_TAG\"", returnStdout: true).trim()) {
sh '''
echo "Testing frontend Docker image..."
# Clean up any existing test containers
docker stop frontend-test 2>/dev/null || true
docker rm frontend-test 2>/dev/null || true
# Find an available port for frontend
PORT1=5000
while netstat -tulpn | grep -q ":$PORT1 "; do
PORT1=$((PORT1 + 2))
done
echo "Using port $PORT1 for frontend testing..."
# Run frontend container in background
docker run -d --name frontend-test -p $PORT1:5000 "$FRONTEND_IMAGE:$IMAGE_TAG"
# Wait for container to start
echo "Waiting for frontend container to start..."
sleep 15
# Check if container is running
if docker ps | grep -q frontend-test; then
echo "Frontend container is running successfully on port $PORT1"
# Test if the application is responding
echo "Testing frontend health..."
timeout 30 bash -c "until curl -f http://localhost:$PORT1 >/dev/null 2>&1; do sleep 2; done" || echo "Frontend health check timeout - this might be normal if no health endpoint exists"
# Check container logs for any obvious errors
echo "Frontend container logs:"
docker logs frontend-test --tail 20
else
echo "Frontend container failed to start"
docker logs frontend-test
exit 1
fi
# Clean up
docker stop frontend-test
docker rm frontend-test
'''
} else {
echo "Frontend image not found, skipping frontend test"
}
// Test backend image
if (sh(script: "docker images -q \"\$BACKEND_IMAGE:\$IMAGE_TAG\"", returnStdout: true).trim()) {
sh '''
echo "Testing backend Docker image..."
# Clean up any existing test containers
docker stop backend-test 2>/dev/null || true
docker rm backend-test 2>/dev/null || true
# Find an available port for backend
PORT2=5001
while netstat -tulpn | grep -q ":$PORT2 "; do
PORT2=$((PORT2 + 2))
done
echo "Using port $PORT2 for backend testing..."
# Run backend container in background
docker run -d --name backend-test -p $PORT2:5001 "$BACKEND_IMAGE:$IMAGE_TAG"
# Wait for container to start
echo "Waiting for backend container to start..."
sleep 15
# Check if container is running
if docker ps | grep -q backend-test; then
echo "Backend container is running successfully on port $PORT2"
# Test if the application is responding
echo "Testing backend health..."
timeout 30 bash -c "until curl -f http://localhost:$PORT2 >/dev/null 2>&1; do sleep 2; done" || echo "Backend health check timeout - this might be normal if no health endpoint exists"
# Check container logs for any obvious errors
echo "Backend container logs:"
docker logs backend-test --tail 20
else
echo "Backend container failed to start"
docker logs backend-test
exit 1
fi
# Clean up
docker stop backend-test
docker rm backend-test
'''
} else {
echo "Backend image not found, skipping backend test"
}
} catch (Exception e) {
echo "Image tests failed: ${e.getMessage()}"
currentBuild.result = 'UNSTABLE'
// Clean up on failure
sh '''
docker stop frontend-test backend-test 2>/dev/null || true
docker rm frontend-test backend-test 2>/dev/null || true
'''
}
}
}
}
stage('Push Docker Images') {
when {
// Only push if quality gate passed and build is successful
anyOf {
expression { currentBuild.result == null }
expression { currentBuild.result == 'SUCCESS' }
}
}
steps {
script {
try {
echo "Pushing Docker images to Docker Hub..."
withDockerRegistry([credentialsId: 'docker-hub-credentials', url: 'https://index.docker.io/v1/']) {
sh '''
# Push frontend images
echo "Pushing frontend images..."
docker push "$FRONTEND_IMAGE:$IMAGE_TAG"
docker push "$FRONTEND_IMAGE:latest"
# Push backend images
echo "Pushing backend images..."
docker push "$BACKEND_IMAGE:$IMAGE_TAG"
docker push "$BACKEND_IMAGE:latest"
'''
}
echo "Successfully pushed all Docker images"
} catch (Exception e) {
echo "Failed to push Docker images: ${e.getMessage()}"
throw e
}
}
}
}
stage('Trigger CD Pipeline') {
when {
anyOf {
expression { currentBuild.result == null }
expression { currentBuild.result == 'SUCCESS' }
}
}
steps {
script {
try {
echo "Triggering CD pipeline for automatic deployment..."
// Get the current build number to use as image tag
def imageTag = env.BUILD_NUMBER
// Trigger CD pipeline for dev environment automatically
build job: 'email-gen-app-cd',
parameters: [
string(name: 'IMAGE_TAG', value: imageTag),
booleanParam(name: 'SKIP_TESTS', value: false)
],
wait: false // Don't wait for CD to complete
echo "CD pipeline triggered successfully"
echo "Image tag: ${imageTag}"
echo "Frontend image: ${env.FRONTEND_IMAGE}:${imageTag}"
echo "Backend image: ${env.BACKEND_IMAGE}:${imageTag}"
echo "CD pipeline will deploy to ports - Frontend: 5000, Backend: 5001, DB: 3306"
} catch (Exception e) {
echo "Failed to trigger CD pipeline: ${e.getMessage()}"
echo "You can manually trigger the CD pipeline with the following parameters:"
echo "- IMAGE_TAG: ${env.BUILD_NUMBER}"
echo "- SKIP_TESTS: false"
echo "Images available for deployment:"
echo "- Frontend: ${env.FRONTEND_IMAGE}:${env.BUILD_NUMBER}"
echo "- Backend: ${env.BACKEND_IMAGE}:${env.BUILD_NUMBER}"
// Don't fail the CI pipeline if CD trigger fails
}
}
}
}
}
post {
always {
script {
// Ensure we're in a node context for cleanup
try {
// Clean up virtual environment
sh '''
if [ -d "$VENV" ]; then
rm -rf "$VENV"
echo "Cleaned up virtual environment"
fi
'''
// Clean up SonarQube working directory
sh '''
if [ -d ".sonar" ]; then
rm -rf .sonar
echo "Cleaned up SonarQube working directory"
fi
'''
// Clean up coverage reports
sh '''
if [ -d "coverage-reports" ]; then
rm -rf coverage-reports
echo "Cleaned up coverage reports"
fi
'''
// Clean up Docker images to save space
sh '''
echo "Cleaning up Docker images..."
docker image prune -f
# Clean up any remaining test containers
docker stop frontend-test backend-test 2>/dev/null || true
docker rm frontend-test backend-test 2>/dev/null || true
'''
} catch (Exception e) {
echo "Cleanup failed: ${e.getMessage()}"
}
}
}
success {
echo 'Pipeline completed successfully!'
echo "Frontend image: ${env.FRONTEND_IMAGE}:${env.IMAGE_TAG}"
echo "Backend image: ${env.BACKEND_IMAGE}:${env.IMAGE_TAG}"
echo "SonarQube analysis completed. Check ${env.SONAR_HOST_URL}/dashboard?id=${env.SONAR_PROJECT_KEY}"
echo "Triggered by commit: ${env.GIT_COMMIT_SHORT} by ${env.GIT_AUTHOR}"
echo "Commit message: ${env.GIT_COMMIT_MESSAGE}"
}
failure {
echo 'Pipeline failed!'
echo "Failed commit: ${env.GIT_COMMIT_SHORT} by ${env.GIT_AUTHOR}"
echo "Commit message: ${env.GIT_COMMIT_MESSAGE}"
}
unstable {
echo 'Pipeline completed but tests or quality gate failed!'
echo "Unstable commit: ${env.GIT_COMMIT_SHORT} by ${env.GIT_AUTHOR}"
}
}
}
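Both the "Push Docker Images" and "Trigger CD Pipeline" stages share the same `when { anyOf { ... } }` guard. As a rough Python sketch of that condition (illustrative only; Jenkins evaluates this in Groovy):

```python
def should_proceed(build_result):
    """Mirror of the CI pipeline's `when { anyOf { ... } }` guard: the push
    and CD-trigger stages run only while the build is still healthy.
    Jenkins leaves currentBuild.result unset (None here) until a stage
    downgrades it, so both None and 'SUCCESS' allow the stage to run."""
    return build_result in (None, "SUCCESS")
```

A failed quality gate sets the result to UNSTABLE, so `should_proceed('UNSTABLE')` is False and no image is pushed or deployed.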
Instructions to Create and Test the Pipeline
Login to Jenkins dashboard.
Click "New Item".
Enter name:
email-gen-ci
Select Pipeline, click OK.
Scroll to Pipeline section at bottom.
Paste the above Groovy script.
Configure the following credentials and environment values in Jenkins:
Bitbucket credentials: the Bitbucket username/app-password credential ID referenced by the checkout step
docker-hub-credentials: your Docker Hub username/password credential ID (referenced by withDockerRegistry)
Replace your-dockerhub-username with your actual Docker Hub username
Replace the Bitbucket repo URL with your repository URL
SonarQubeServer: the server name configured under Manage Jenkins > Configure System > SonarQube servers
Click "Save".
Click "Build Now" to test the pipeline.
Verify the CI Pipeline
Build starts and shows each stage in Blue Ocean or classic view
Checkout pulls your Bitbucket code
Code Quality Analysis stage executes SonarQube scan and updates Sonar dashboard
Build stage builds the Docker images
Test stage runs containers and executes the pytest tests inside them
Push Docker Images stage pushes the images to Docker Hub
✅ Step 7: Set Up the CD Pipeline
pipeline {
agent any
parameters {
string(
name: 'IMAGE_TAG',
defaultValue: 'latest',
description: 'Docker image tag to deploy (e.g., latest, 123, v1.0.0)'
)
booleanParam(
name: 'SKIP_TESTS',
defaultValue: false,
description: 'Skip deployment tests'
)
}
environment {
DOCKER_HUB_USER = 'YOUR-DOCKER-HUB-USERNAME'
FRONTEND_IMAGE = "${DOCKER_HUB_USER}/email-subject-generator-frontend"
BACKEND_IMAGE = "${DOCKER_HUB_USER}/email-subject-generator-backend"
DOCKER_HUB_CREDENTIALS = 'docker-hub-credentials'
// Fixed ports for single deployment
FRONTEND_PORT = '5000'
BACKEND_PORT = '5001'
DB_PORT = '3306'
}
stages {
stage('Validate Parameters') {
steps {
script {
echo "=== Deployment Configuration ==="
echo "Image Tag: ${params.IMAGE_TAG}"
echo "Skip Tests: ${params.SKIP_TESTS}"
echo "Frontend Image: ${env.FRONTEND_IMAGE}:${params.IMAGE_TAG}"
echo "Backend Image: ${env.BACKEND_IMAGE}:${params.IMAGE_TAG}"
echo "Ports - Frontend: ${env.FRONTEND_PORT}, Backend: ${env.BACKEND_PORT}, DB: ${env.DB_PORT}"
}
}
}
stage('Clone Repository') {
steps {
script {
checkout([
$class: 'GitSCM',
branches: [[name: '*/main']],
userRemoteConfigs: [[
url: 'https://sachindumalshan@bitbucket.org/sachindu-work-space/email-subject-generator.git',
credentialsId: 'bitbucket-creds'
]],
extensions: [
[$class: 'CleanBeforeCheckout']
]
])
echo "Repository cloned successfully"
}
}
}
stage('Prepare Environment') {
steps {
script {
// Check if docker-compose.yml file exists
if (!fileExists('docker-compose.yml')) {
echo "docker-compose.yml file not found. Creating default configuration..."
// Create deployment-ready docker-compose file
def composeContent = """version: '3.9'
services:
frontend:
image: ${env.FRONTEND_IMAGE}:${params.IMAGE_TAG}
container_name: emailgen-frontend
ports:
- "${env.FRONTEND_PORT}:5000"
depends_on:
- backend
environment:
- BACKEND_URL=http://backend:5001
networks:
- emailgen-net
restart: unless-stopped
backend:
image: ${env.BACKEND_IMAGE}:${params.IMAGE_TAG}
container_name: emailgen-backend
ports:
- "${env.BACKEND_PORT}:5001"
depends_on:
- db
environment:
- MYSQL_HOST=db
- MYSQL_PORT=3306
- MYSQL_DATABASE=email_generator
- MYSQL_USER=root
- MYSQL_PASSWORD=rootpassword123
- GROQ_API_KEY=YOUR-GROQ-API-KEY
networks:
- emailgen-net
restart: unless-stopped
db:
image: mysql:8.0
container_name: emailgen-db
restart: unless-stopped
environment:
- MYSQL_ROOT_PASSWORD=rootpassword123
- MYSQL_DATABASE=email_generator
volumes:
- ./database/init.sql:/docker-entrypoint-initdb.d/init.sql
- mysql_data:/var/lib/mysql
ports:
- "${env.DB_PORT}:3306"
networks:
- emailgen-net
healthcheck:
test: ["CMD", "mysqladmin", "ping", "-h", "localhost", "-prootpassword123"]
interval: 10s
timeout: 5s
retries: 5
volumes:
mysql_data:
networks:
emailgen-net:
driver: bridge
"""
// Write the compose file
writeFile file: "docker-compose.yml", text: composeContent
echo "Docker-compose file created with deployment configuration"
} else {
echo "docker-compose.yml file found. Using existing configuration..."
// Check if existing compose file uses build or image
def composeContent = readFile('docker-compose.yml')
if (composeContent.contains('build:') && !composeContent.contains('image:')) {
echo "Existing compose file uses 'build' configuration."
echo "Setting environment variables for potential image override..."
// Set environment variables that can be used if needed
env.FRONTEND_IMAGE = env.FRONTEND_IMAGE ?: 'emailgen-frontend'
env.BACKEND_IMAGE = env.BACKEND_IMAGE ?: 'emailgen-backend'
env.IMAGE_TAG = params.IMAGE_TAG ?: 'latest'
echo "Environment variables set:"
echo "FRONTEND_IMAGE: ${env.FRONTEND_IMAGE}"
echo "BACKEND_IMAGE: ${env.BACKEND_IMAGE}"
echo "IMAGE_TAG: ${env.IMAGE_TAG}"
}
// Display existing compose file content for verification
echo "Current docker-compose.yml content:"
sh 'cat docker-compose.yml'
}
}
}
}
stage('Pull Docker Images') {
steps {
script {
try {
echo "Pulling Docker images..."
withDockerRegistry([credentialsId: 'docker-hub-credentials', url: 'https://index.docker.io/v1/']) {
sh """
echo "Pulling frontend image..."
docker pull ${env.FRONTEND_IMAGE}:${params.IMAGE_TAG}
echo "Pulling backend image..."
docker pull ${env.BACKEND_IMAGE}:${params.IMAGE_TAG}
echo "Pulling MySQL image..."
docker pull mysql:8.0
"""
}
echo "All images pulled successfully"
} catch (Exception e) {
echo "Failed to pull images: ${e.getMessage()}"
throw e
}
}
}
}
stage('Stop Previous Deployment') {
steps {
script {
try {
echo "Stopping previous deployment..."
sh """
# Stop and remove existing containers
docker-compose down --remove-orphans || true
# Clean up any dangling containers
docker stop emailgen-frontend emailgen-backend emailgen-db 2>/dev/null || true
docker rm emailgen-frontend emailgen-backend emailgen-db 2>/dev/null || true
echo "Previous deployment stopped successfully"
"""
} catch (Exception e) {
echo "Warning: Failed to stop previous deployment: ${e.getMessage()}"
echo "This might be normal if it's the first deployment"
}
}
}
}
stage('Deploy Application') {
steps {
script {
try {
echo "Deploying application..."
sh """
# Deploy using docker-compose
docker-compose up -d
echo "Deployment initiated successfully"
# Wait for services to start
echo "Waiting for services to start..."
sleep 30
# Check if containers are running
echo "Checking container status..."
docker-compose ps
"""
} catch (Exception e) {
echo "Deployment failed: ${e.getMessage()}"
throw e
}
}
}
}
stage('Health Check') {
when {
expression { !params.SKIP_TESTS }
}
steps {
script {
try {
echo "Performing health checks..."
sh """
# Wait for applications to be ready
echo "Waiting for applications to be ready..."
sleep 45
# Check if containers are running
echo "=== Container Status ==="
docker ps | grep emailgen || echo "No containers found"
# Check container logs for errors
echo "=== Backend Logs ==="
docker logs emailgen-backend --tail 20 || echo "Cannot get backend logs"
echo "=== Frontend Logs ==="
docker logs emailgen-frontend --tail 20 || echo "Cannot get frontend logs"
echo "=== Database Logs ==="
docker logs emailgen-db --tail 20 || echo "Cannot get database logs"
# Debug database connection
echo "=== Database Connection Debug ==="
docker exec emailgen-backend printenv | grep MYSQL || echo "No MYSQL env vars found"
echo "=== Testing Database ==="
docker exec emailgen-db mysql -u root -prootpassword123 -e "SHOW DATABASES;" || echo "Database connection failed"
# Test application endpoints
echo "=== Health Check Tests ==="
# Test backend health endpoint
echo "Testing backend health at http://localhost:${env.BACKEND_PORT}/api/health"
timeout 60 bash -c 'until curl -f http://localhost:${env.BACKEND_PORT}/api/health >/dev/null 2>&1; do sleep 5; done' || echo "Backend health check failed"
# Test frontend
echo "Testing frontend at http://localhost:${env.FRONTEND_PORT}"
timeout 60 bash -c 'until curl -f http://localhost:${env.FRONTEND_PORT} >/dev/null 2>&1; do sleep 5; done' || echo "Frontend health check failed"
# Test database connection
echo "Testing database connection..."
timeout 30 bash -c 'until docker exec emailgen-db mysqladmin ping -h localhost --silent; do sleep 2; done' || echo "Database health check failed"
echo "Health checks completed"
"""
} catch (Exception e) {
echo "Health check failed: ${e.getMessage()}"
currentBuild.result = 'UNSTABLE'
}
}
}
}
stage('Deployment Verification') {
steps {
script {
try {
echo "Verifying deployment..."
sh """
# Final verification
echo "=== Final Deployment Status ==="
docker-compose ps
# Check if all services are up
RUNNING_SERVICES=\$(docker-compose ps --services --filter "status=running" | wc -l)
TOTAL_SERVICES=\$(docker-compose ps --services | wc -l)
echo "Running services: \$RUNNING_SERVICES/\$TOTAL_SERVICES"
if [ "\$RUNNING_SERVICES" -eq "\$TOTAL_SERVICES" ]; then
echo "✅ All services are running successfully"
else
echo "❌ Some services are not running"
exit 1
fi
"""
} catch (Exception e) {
echo "Deployment verification failed: ${e.getMessage()}"
throw e
}
}
}
}
}
post {
success {
echo "=== Deployment Successful ==="
echo "Image Tag: ${params.IMAGE_TAG}"
echo "Frontend URL: http://localhost:${env.FRONTEND_PORT}"
echo "Backend URL: http://localhost:${env.BACKEND_PORT}"
echo "Database Port: ${env.DB_PORT}"
echo "=== Services ==="
script {
try {
sh "docker-compose ps"
} catch (Exception e) {
echo "Could not display final service status"
}
}
}
always {
script {
try {
// Clean up unused images
sh """
docker image prune -f
echo "Cleaned up unused images"
"""
} catch (Exception e) {
echo "Cleanup failed: ${e.getMessage()}"
}
}
}
failure {
echo "=== Deployment Failed ==="
echo "Image Tag: ${params.IMAGE_TAG}"
echo "Check the logs above for details"
script {
try {
// Show container logs on failure
sh """
echo "=== Container Logs on Failure ==="
docker logs emailgen-backend --tail 30 2>/dev/null || echo "No backend logs"
docker logs emailgen-frontend --tail 30 2>/dev/null || echo "No frontend logs"
docker logs emailgen-db --tail 30 2>/dev/null || echo "No database logs"
"""
} catch (Exception e) {
echo "Could not retrieve container logs"
}
}
}
unstable {
echo "=== Deployment Completed with Issues ==="
echo "The application was deployed but some health checks failed"
echo "Please check the application manually"
}
}
}
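The service count comparison in the "Deployment Verification" stage can be sketched in Python (a hypothetical helper, not part of the pipeline itself):

```python
def verify_deployment(all_services, running_services):
    """Sketch of the 'Deployment Verification' stage: every service declared
    in docker-compose.yml must be in the running set, otherwise fail the
    deployment (the pipeline does this by comparing `wc -l` counts)."""
    missing = sorted(set(all_services) - set(running_services))
    if missing:
        raise RuntimeError(f"Some services are not running: {missing}")
    return True
```

For this project the expected set is the three compose services: frontend, backend, and db.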
Instructions to Create CD Pipeline
Login to Jenkins dashboard.
Click "New Item".
Enter name:
email-gen-app-cd
(this must match the job name used by the CI pipeline's "Trigger CD Pipeline" stage)
Select Pipeline, click OK.
Scroll down to the Pipeline section.
Paste the CD pipeline Groovy script above.
Click "Save".
Final Verification
Access application:
Webapp URL: http://<server-ip>:5000
Backend URL: http://<server-ip>:5001
Check running containers:
docker ps
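The health checks in the CD pipeline all follow the same `timeout ... bash -c 'until ...; do sleep ...; done'` pattern. The generic shape of that readiness loop, as a Python sketch (a check could be, for example, an HTTP GET against the backend's /api/health endpoint):

```python
import time

def wait_until(check, timeout_s=60, interval_s=5):
    """Generic readiness loop, equivalent to the pipeline's
    `timeout 60 bash -c 'until curl -f ...; do sleep 5; done'` pattern:
    poll `check` until it returns True or the deadline expires."""
    deadline = time.monotonic() + timeout_s
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval_s)
```

Returning False instead of raising matches the pipeline's behavior of marking the build UNSTABLE rather than failing outright when a health check times out.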
✅ Step 8: Configure a Bitbucket Webhook with Jenkins
Setting up a webhook enables automatic triggering of pipelines when changes are pushed to the repository. In this project, pushing to or updating the main branch:
Runs the CI pipeline automatically
After CI completion, triggers the CD pipeline
Deploys the updated application seamlessly
Instructions
Go to your Bitbucket repository.
Click on Repository settings in the sidebar.
Under Workflow, click Webhooks.
Click "Add Webhook".
Configure the webhook with the following:
Title: Jenkins CI/CD Trigger
URL: http://<jenkins-server-ip>:8080/bitbucket-hook/
Triggers: Repository push
Save the webhook configuration.
Verify Webhook Functionality
If the webhook is configured correctly:
A successful POST request status (200) will appear on the Bitbucket webhook page after you push code.
Jenkins will show a new CI pipeline build triggered automatically.
Upon CI pipeline success, the CD pipeline will start automatically, deploying the application.
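For reference, the branch that triggered a build can be read out of the Bitbucket Cloud push event payload. A minimal sketch (the payload shape shown follows the documented repo:push event structure; in practice the Jenkins Bitbucket plugin parses this for you):

```python
def pushed_branches(payload):
    """Extract branch names from a Bitbucket Cloud 'repo:push' webhook
    payload (push -> changes[] -> new -> name). Changes whose `new` is
    null (branch deletions) are skipped."""
    changes = payload.get("push", {}).get("changes", [])
    return [c["new"]["name"] for c in changes if c.get("new")]
```

For example, a push to main yields `["main"]`, which is what lets the job restrict builds to the main branch.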
✅ Step 9: Update the Repository with the New Application: Email Subject Generator (Gen AI App)
After verifying that the CI/CD pipeline works correctly with the basic multi-tier setup, we now update the repository with the actual production application code: the Email Subject Generator (Gen AI App).
What's Being Deployed Now?
This new version of the app uses Generative AI techniques to analyze the scenario input and generate a suitable subject line for emails.
Application Features
Generate Email Subject
View History
Common Errors & Fixes
Below are the key issues encountered during the CI/CD setup and deployment of the Email Subject Generator (Gen AI App), along with the solutions applied.
❌ Error 1: (1045) Access denied for user 'sachindu'@'localhost'
🔍 Cause: The user does not exist or lacks the required privileges.
✅ Solution A: Create a New User
sudo mysql -u root -p
CREATE USER 'sachindu'@'localhost' IDENTIFIED BY 'your_password';
GRANT ALL PRIVILEGES ON email_generator.* TO 'sachindu'@'localhost';
FLUSH PRIVILEGES;
Update .env:
MYSQL_USER=sachindu
MYSQL_PASSWORD=your_password
Update the app.py database configuration accordingly.
✅ Solution B: Use the Root User
sudo mysql -u root -p
GRANT ALL PRIVILEGES ON email_generator.* TO 'root'@'localhost';
FLUSH PRIVILEGES;
Update .env:
MYSQL_USER=root
MYSQL_PASSWORD=rootpassword123
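Either way, the backend should build its connection settings from these environment variables rather than hard-coding them. A hypothetical helper (the name mysql_dsn is illustrative, not part of the app's code):

```python
import os

def mysql_dsn(env=None):
    """Assemble a SQLAlchemy/PyMySQL-style URL from the same MYSQL_*
    variables the compose file passes to the backend container; the
    defaults mirror the values used in this deployment."""
    env = os.environ if env is None else env
    user = env.get("MYSQL_USER", "root")
    password = env.get("MYSQL_PASSWORD", "")
    host = env.get("MYSQL_HOST", "localhost")
    port = env.get("MYSQL_PORT", "3306")
    db = env.get("MYSQL_DATABASE", "email_generator")
    return f"mysql+pymysql://{user}:{password}@{host}:{port}/{db}"
```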
❌ Error 2: MySQL Authentication Issues (caching_sha2_password)
🔍 Cause: MySQL 8.0 defaults to the caching_sha2_password authentication plugin, which some clients do not support.
✅ Solution: Ensure .env and docker-compose use the correct root credentials, and switch the account to the mysql_native_password plugin if needed.
❌ Error 3: CI/CD: Backend Container Cannot Connect to the Database
🔍 Cause: The database is not reachable from the backend container.
✅ Solution:
Verify MYSQL_HOST, USER, PASSWORD, DATABASE in .env.
Ensure database container is healthy and accessible before backend starts.
Confirm docker-compose depends_on is configured for backend -> db.
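Note that depends_on only controls start order, not readiness, so the backend should also retry its initial connection. A sketch of that retry logic (connect is any callable that raises until the database is ready, e.g. pymysql.connect):

```python
import time

def connect_with_retry(connect, attempts=10, delay_s=3):
    """Retry a database connection instead of assuming the DB is ready
    the moment its container starts; re-raise the last error if every
    attempt fails."""
    last_err = None
    for _ in range(attempts):
        try:
            return connect()
        except Exception as err:
            last_err = err
            time.sleep(delay_s)
    raise last_err
```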
❌ Error 4: SonarQube Installation Issues
🔍 Cause: The online installer fails or times out in some environments.
✅ Solution: Install and test SonarQube locally instead:
Download and extract SonarQube manually.
Start locally using:
./bin/linux-x86-64/sonar.sh start
Access it via http://localhost:9000 for analysis testing.
💬 Feedback & Suggestions
I always welcome your feedback and suggestions to improve these projects and pipelines further.
✨ Love to hear your thoughts!