Laravel Meets AWS ECS: Automate Deployments Like a CI/CD Ninja!

Setting Up a CI/CD Pipeline for Laravel Applications with AWS ECS
In today's fast-paced development environment, implementing a robust Continuous Integration and Continuous Deployment (CI/CD) pipeline is essential for delivering high-quality applications efficiently. This comprehensive guide will walk you through setting up a complete CI/CD pipeline for deploying Laravel applications on Amazon's Elastic Container Service (ECS).
TL;DR:
This guide walks you through setting up a CI/CD pipeline for deploying Laravel applications on AWS ECS using GitHub Actions. The pipeline automates code deployment by:
Pushing Code → Triggers GitHub Actions
Building a Docker Image → Pushed to Amazon ECR
Registering a New ECS Task Definition → Updates the ECS service
Deploying via Load Balancer → Ensures high availability
It covers AWS setup, including IAM roles, Secrets Manager, ECR, ECS clusters, load balancers, and auto-scaling. Finally, it includes Dockerfile and Nginx configuration for Laravel deployment. 🚀
What is AWS ECS and ECR?
Before diving into how to use AWS ECS and ECR for deploying Laravel applications, it's essential to understand what these services are and how they work together in the AWS ecosystem.
Amazon Elastic Container Service (ECS)
Amazon ECS is a fully managed container orchestration service that helps you run and scale Docker containers on AWS. It allows you to easily deploy, manage, and scale containerized applications using a highly reliable and scalable infrastructure. ECS handles the scheduling of containers across a cluster of EC2 instances (or using Fargate for serverless containers) and automates many operational tasks, making it an ideal choice for managing microservices, batch jobs, and other containerized workloads.
ECS provides:
Container orchestration: Manage and scale Docker containers in a cluster of EC2 instances.
Task definitions: Define the application specifications such as Docker images, networking configurations, and resource requirements.
Cluster management: ECS abstracts the underlying infrastructure, making it easy to manage containers and resources.
With ECS, you can easily automate deployment pipelines, ensure scalability, and manage the lifecycle of your application in a cloud-native way.
Amazon Elastic Container Registry (ECR)
Amazon ECR is a fully managed Docker container registry that allows you to store, manage, and deploy Docker container images. It integrates seamlessly with ECS, simplifying the process of storing and retrieving container images during deployment. ECR eliminates the need to manage your own container registry infrastructure and provides built-in security and scalability features, making it easier to manage containerized applications.
Key benefits of ECR include:
Fully managed: ECR handles all aspects of container image storage, including scaling and security.
Secure storage: Supports encryption of your images and integrates with AWS Identity and Access Management (IAM) for access control.
Fast and reliable: Optimized for fast image pulls, ensuring that your deployment process is smooth and efficient.
Integration with ECS and other AWS services: ECR is designed to work seamlessly with ECS, making it easier to deploy and manage your containers.
In short, ECR allows you to store Docker images securely and make them accessible for deployment, while ECS manages the deployment, scaling, and orchestration of these containerized applications.
By using ECS and ECR together, you can automate the entire lifecycle of containerized applications—from building Docker images to deploying them with a fully managed infrastructure, all within the AWS cloud.
Prerequisites
Before we begin, ensure you have the following:
An AWS account
A GitHub repository with your Laravel application
Basic understanding of Docker, Laravel, and AWS services
AWS CLI installed on your local machine
GitHub account with access to GitHub Actions
Architecture Overview
Our CI/CD pipeline will follow this workflow:
Developer pushes code to the GitHub repository
GitHub Actions workflow is triggered
The workflow builds a Docker image for the Laravel application
The image is pushed to Amazon ECR
A new ECS task definition is registered with the updated image
The ECS service is updated to use the new task definition
The application is deployed through a load balancer for high availability
Setting Up AWS Infrastructure
Creating an IAM User with Required Permissions
First, let's create an IAM user with the necessary permissions for our CI/CD pipeline:
Navigate to the IAM console in your AWS account
Click on "Users" and then "Create user"
Enter a name for your user (e.g., "laravel-ecs-cicd-user")
Click "Next: Permissions"
Select "Attach existing policies directly"
Search and select the following policies:
AmazonEC2ContainerRegistryFullAccess
AmazonECS_FullAccess
AmazonECSTaskExecutionRolePolicy
SecretsManagerReadWrite
Click "Next: Tags" (add tags if needed)
Click "Next: Review"
Click "Create user" to initiate user creation.
Navigate back to the users list and select the newly created user.
Open the "Security credentials" tab.
Scroll down to the "Access keys" section and click "Create access key".
Copy and securely store the generated access keys, as they will not be displayed again.
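If you prefer the AWS CLI over the console, the same user can be created with a few commands. This is a sketch only; the user name mirrors the console steps above, and the policy ARNs are the standard AWS managed-policy ARNs:

```shell
# Sketch: create the CI/CD IAM user from the CLI (user name is an example).
aws iam create-user --user-name laravel-ecs-cicd-user

# Attach the same managed policies selected in the console
for arn in \
  arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryFullAccess \
  arn:aws:iam::aws:policy/AmazonECS_FullAccess \
  arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy \
  arn:aws:iam::aws:policy/SecretsManagerReadWrite; do
  aws iam attach-user-policy \
    --user-name laravel-ecs-cicd-user \
    --policy-arn "$arn"
done

# Generate the access key pair (shown only once -- store it securely)
aws iam create-access-key --user-name laravel-ecs-cicd-user
```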
Setting Up AWS Secrets Manager
Next, let's store our Laravel environment variables securely in AWS Secrets Manager:
Navigate to the AWS Secrets Manager console
Click "Store a new secret"
Select "Other type of secrets"
Enter your Laravel environment variables as key-value pairs in JSON format, for example:
{
  "APP_NAME": "Laravel",
  "APP_ENV": "local",
  "APP_KEY": "base64:aWOTx31pPGbitApUzF6luBpFRtBxbLQHjardLfZKYTE=",
  "APP_DEBUG": "true",
  "APP_TIMEZONE": "UTC",
  "APP_URL": "http://localhost",
  "APP_LOCALE": "en",
  "APP_FALLBACK_LOCALE": "en",
  "APP_FAKER_LOCALE": "en_US",
  "APP_MAINTENANCE_DRIVER": "file",
  "BCRYPT_ROUNDS": "12",
  "LOG_CHANNEL": "stack",
  "LOG_STACK": "single",
  "LOG_DEPRECATIONS_CHANNEL": "null",
  "LOG_LEVEL": "debug",
  "DB_CONNECTION": "mysql",
  "DB_HOST": "localhost",
  "DB_PORT": "3306",
  "DB_DATABASE": "laravel",
  "DB_USERNAME": "root",
  "DB_PASSWORD": "",
  "SESSION_DRIVER": "database",
  "SESSION_LIFETIME": "120",
  "SESSION_ENCRYPT": "false",
  "SESSION_PATH": "/",
  "SESSION_DOMAIN": "null",
  "BROADCAST_CONNECTION": "log",
  "FILESYSTEM_DISK": "local",
  "QUEUE_CONNECTION": "database",
  "CACHE_STORE": "database",
  "CACHE_PREFIX": "",
  "MEMCACHED_HOST": "127.0.0.1",
  "REDIS_CLIENT": "phpredis",
  "REDIS_HOST": "127.0.0.1",
  "REDIS_PASSWORD": "null",
  "REDIS_PORT": "6379",
  "MAIL_MAILER": "log",
  "MAIL_HOST": "127.0.0.1",
  "MAIL_PORT": "2525",
  "MAIL_USERNAME": "null",
  "MAIL_PASSWORD": "null",
  "MAIL_ENCRYPTION": "null",
  "MAIL_FROM_ADDRESS": "hello@example.com",
  "MAIL_FROM_NAME": "Laravel",
  "AWS_ACCESS_KEY_ID": "",
  "AWS_SECRET_ACCESS_KEY": "",
  "AWS_DEFAULT_REGION": "us-east-1",
  "AWS_BUCKET": "",
  "AWS_USE_PATH_STYLE_ENDPOINT": "false",
  "VITE_APP_NAME": "Laravel"
}
Click "Next"
Name your secret (e.g., "YourApp-ENV")
Add a description (optional)
Click "Next"
Leave rotation settings at defaults and click "Next"
Review and click "Store"
Note the ARN of your secret:
arn:aws:secretsmanager:region:account-id:secret:YourApp-ENV-xxxxxx
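Rather than converting your .env file to JSON by hand, you can generate it with jq. This is a hypothetical helper (the sample file name and contents are illustrative); it skips comments and blank lines and turns each KEY=VALUE pair into a JSON entry:

```shell
# Write a small sample .env (stand-in for your real one)
cat > .env.sample <<'EOF'
APP_NAME=Laravel
APP_ENV=production
# comments and blank lines are ignored
DB_PORT=3306
EOF

# Convert KEY=VALUE lines into the JSON object Secrets Manager expects
jq -Rn '[inputs
         | select(test("^[A-Za-z_]+="))
         | capture("^(?<key>[^=]+)=(?<value>.*)$")]
        | from_entries' .env.sample > env.json

cat env.json
```

The resulting `env.json` can be pasted straight into the "Plaintext" tab of the secret editor.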
Creating an ECR Repository
Now, let's create an ECR repository to store our Docker images:
Navigate to the Amazon ECR console
Click "Create repository"
Enter a name for your repository (e.g., "your-app-name")
Configure settings as needed (typically defaults are fine)
Click "Create repository"
Note the repository URI:
account-id.dkr.ecr.region.amazonaws.com/your-app-name
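The console steps above have a one-command CLI equivalent. A sketch, assuming the repository name from the example:

```shell
# Sketch: create the ECR repository from the CLI
aws ecr create-repository --repository-name your-app-name

# Look up the repository URI later if needed
aws ecr describe-repositories \
  --repository-names your-app-name \
  --query 'repositories[0].repositoryUri' \
  --output text
```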
Setting Up the ECS Cluster
Next, let's create our ECS cluster:
Navigate to the Amazon ECS console
Click "Create Cluster"
Select "Networking only" (we'll use Fargate)
Click "Next step"
Enter a cluster name (e.g., "your-app-cluster")
Leave other options at defaults
Click "Create"
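The same cluster can be created from the CLI. A sketch, assuming the cluster name from the example and Fargate capacity providers:

```shell
# Sketch: create a Fargate-capable ECS cluster from the CLI
aws ecs create-cluster \
  --cluster-name your-app-cluster \
  --capacity-providers FARGATE FARGATE_SPOT \
  --default-capacity-provider-strategy capacityProvider=FARGATE,weight=1
```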
Creating a Target Group and Load Balancer
Now, let's set up a load balancer to distribute traffic to our application:
Navigate to the EC2 console, then to "Target Groups"
Click "Create target group"
Choose "IP addresses" as target type (for Fargate)
Enter a name for your target group (e.g., "your-app-tg")
Set protocol to HTTP and port to 80
Select your VPC
Set health check path to /up (this matches the healthCheck in the task definition)
Configure advanced settings if needed
Click "Create"
Note the ARN of your target group
Next, create the Application Load Balancer:
Navigate to the EC2 console, then to "Load Balancers"
Click "Create Load Balancer"
Select "Application Load Balancer"
Enter a name for your load balancer (e.g., "your-app-alb")
Select "internet-facing"
Select your VPC and at least two subnets from different availability zones
Create or select a security group that allows HTTP/HTTPS traffic
Configure listeners:
Add a listener on port 80
Optionally add a listener on port 443 with SSL/TLS certificate
Configure routing to the target group you created earlier
Review and create the load balancer
Note the DNS name of your load balancer
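For reference, the target group, load balancer, and listener can also be provisioned from the CLI. This is a sketch; the VPC, subnet, and security group IDs are placeholders for your own values:

```shell
# Sketch: target group with the /up health check (IDs are placeholders)
aws elbv2 create-target-group \
  --name your-app-tg \
  --protocol HTTP --port 80 \
  --vpc-id vpc-0123456789abcdef0 \
  --target-type ip \
  --health-check-path /up

# Internet-facing ALB across two AZs
aws elbv2 create-load-balancer \
  --name your-app-alb \
  --scheme internet-facing \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-0123456789abcdef0

# HTTP listener forwarding to the target group (fill in the ARNs
# returned by the two commands above)
aws elbv2 create-listener \
  --load-balancer-arn <alb-arn-from-previous-output> \
  --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<target-group-arn>
```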
Creating the ecsTaskExecutionRole
Before your ECS tasks can run properly, you need to create the ecsTaskExecutionRole that was referenced in the task definition:
Navigate to the IAM console in your AWS account
Click on "Roles" and then "Create role"
Select "AWS service" as the trusted entity type
Choose "Elastic Container Service" from the service list
Select "Elastic Container Service Task" as the use case
Click "Next: Permissions"
Search for and select the following policies:
AmazonECSTaskExecutionRolePolicy (this gives the role permission to pull images and send logs)
SecretsManagerReadWrite (if your tasks need to access Secrets Manager)
Click "Next: Tags" (add tags if needed)
Click "Next: Review"
Enter "ecsTaskExecutionRole" as the role name
Add a description such as "Allows ECS tasks to call AWS services on your behalf"
Click "Create role"
This role allows ECS to:
Pull container images from ECR
Send container logs to CloudWatch Logs
Access secrets from AWS Secrets Manager (if needed by your application)
Make sure to note the ARN of this role as you'll need it for your task definition:
arn:aws:iam::your-account-id:role/ecsTaskExecutionRole
You'll need to update your task-definition.json file with this ARN in the "executionRoleArn" field.
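The role can also be created from the CLI. The trust policy below is the standard one for ECS tasks (principal `ecs-tasks.amazonaws.com`); treat the rest as a sketch:

```shell
# Trust policy allowing ECS tasks to assume the role
cat > ecs-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role \
  --role-name ecsTaskExecutionRole \
  --assume-role-policy-document file://ecs-trust-policy.json \
  --description "Allows ECS tasks to call AWS services on your behalf"

aws iam attach-role-policy \
  --role-name ecsTaskExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
```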
Creating the ECS Service
Before running your GitHub Actions workflow, you need to create an ECS service:
Navigate to the ECS console, select your cluster
Click "Create service"
Select "FARGATE" as the launch type
Enter a service name (e.g., "your-app-service")
Set Number of tasks to 1 (or more for high availability)
Leave other options at defaults
Click "Next step"
Configure networking:
Select your VPC
Select at least two subnets in different availability zones
Select a security group that allows inbound traffic on port 80 from the load balancer
Click "Next step"
Configure load balancing:
Select "Application Load Balancer"
Select your load balancer
Add your container to the load balancer with port 80
Select your target group
Configure health check grace period if needed
Click "Next step"
Configure auto scaling if needed (optional)
Click "Next step"
Review and click "Create service"
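The console wizard above corresponds roughly to this single CLI call. A sketch; the subnet, security group, and target group values are placeholders:

```shell
# Sketch: create the Fargate service behind the ALB (IDs are placeholders)
aws ecs create-service \
  --cluster your-app-cluster \
  --service-name your-app-service \
  --task-definition your-app-name \
  --desired-count 1 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-aaaa1111,subnet-bbbb2222],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}" \
  --load-balancers "targetGroupArn=<target-group-arn>,containerName=your-app-container,containerPort=80"
```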
Preparing Your Laravel Application
Dockerfile Setup
Create a Dockerfile in the root of your Laravel project:
ARG COMPOSER_VERSION=2.7
ARG PHP_VERSION=8.2
ARG NODE_VERSION=20
###########################################
# Prepare vendor images
###########################################
FROM composer:${COMPOSER_VERSION} AS vendor
# Node.js installation
FROM node:${NODE_VERSION}-alpine AS node
###########################################
# Build Backend and running web server
###########################################
# Use PHP 8.2 FPM Alpine as the base image
FROM php:${PHP_VERSION}-fpm-alpine AS server
ARG PROJECT_DIR=/var/www/html
ENV TZ=Australia/Sydney
# Set the working directory inside the container
WORKDIR $PROJECT_DIR
# Install system dependencies and PHP extensions
RUN apk add --no-cache nginx libpng libzip icu supervisor bash \
    && apk add --no-cache --virtual .build-deps \
        $PHPIZE_DEPS \
        libpng-dev \
        libzip-dev \
        icu-dev \
    && docker-php-ext-install opcache pdo pdo_mysql zip gd intl pcntl \
    && docker-php-ext-configure opcache --enable-opcache \
    && apk del .build-deps \
    && rm -rf /var/cache/apk/*
# Install and configure cron
RUN apk add --no-cache dcron \
    && mkdir -p /var/log/cron \
    && touch /var/log/cron/cron.log
RUN apk add --no-cache tzdata \
    && cp /usr/share/zoneinfo/$TZ /etc/localtime \
    && echo $TZ > /etc/timezone \
    && apk del tzdata
RUN apk add --no-cache curl
# Copy Composer from its image
COPY --chown=www-data:www-data --from=vendor /usr/bin/composer /usr/bin/composer
COPY --chown=www-data:www-data composer.json composer.lock ./
# copy .env file
COPY --chown=www-data:www-data .env .env
RUN composer install --no-dev --optimize-autoloader --no-scripts
# Copy Node.js from its image
COPY --from=node /usr/local /usr/local
# Install NPM dependencies
COPY --chown=www-data:www-data package.json package-lock.json ./
RUN npm cache clean --force \
&& npm install -g yarn --force \
&& yarn
# Copy application files
COPY --chown=www-data:www-data . $PROJECT_DIR
RUN yarn build
RUN mkdir -p storage/framework/sessions \
        storage/framework/views \
        storage/framework/cache \
        storage/framework/testing \
        storage/logs \
        bootstrap/cache \
    && chmod -R 775 storage bootstrap/cache
# Copy Nginx configuration file
COPY docker/nginx.conf /etc/nginx/nginx.conf
# Copy custom php.ini file to override PHP settings
COPY docker/php.ini /usr/local/etc/php/conf.d/php-custom.ini
# Copy entrypoint script
COPY docker/entrypoint.sh /usr/local/bin/entrypoint.sh
RUN chmod +x /usr/local/bin/entrypoint.sh
# Set entrypoint script to start both PHP-FPM and Nginx
ENTRYPOINT ["sh", "/usr/local/bin/entrypoint.sh"]
# Expose port 80
EXPOSE 80
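Before wiring this into CI, it is worth building and smoke-testing the image locally. A sketch, assuming Docker is installed and a .env file exists in the project root (the tag and container name are examples):

```shell
# Build the image locally
docker build -t your-app-name:local .

# Run it and hit the /up health endpoint
docker run -d --name laravel-smoke -p 8080:80 your-app-name:local
sleep 5
curl -fsS http://localhost:8080/up

# Clean up
docker rm -f laravel-smoke
```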
Nginx Configuration
Create a directory named docker in your project root and add an nginx.conf file:
user nginx; # Use 'nginx' user or the default user for the image if running unprivileged
worker_processes auto;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

daemon off;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    keepalive_timeout 65;

    gzip on;
    gzip_disable "msie6";

    server {
        listen 80; # Container HTTP port (matches the ECS task definition)
        server_name _; # Accepts all hostnames
        root /var/www/html/public; # Laravel's public directory
        index index.php index.html index.htm;

        location / {
            try_files $uri $uri/ /index.php$is_args$args;
        }

        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass 127.0.0.1:9000; # PHP-FPM runs in the same container
            fastcgi_index index.php;
            fastcgi_split_path_info ^(.+?\.php)(/.+)$;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
            # fastcgi_param HTTPS on; # Often needed for Laravel behind TLS in production
        }

        location ~ /\.ht {
            deny all;
        }

        # Static file caching rules (optional, tweak for your needs)
        location ~* \.(jpg|jpeg|png|gif|ico|css|js|woff|woff2|ttf|svg|eot)$ {
            expires 30d;
            add_header Cache-Control "public, no-transform";
        }
    }
}
PHP Configuration
Add a php.ini file in the docker directory:
; php.ini
memory_limit = 512M
upload_max_filesize = 10M
Entrypoint Script
Add an entrypoint.sh script in the docker directory:
#!/bin/sh
#set -e
# Function to run migrations with error handling
run_migrations() {
    echo "Running migrations"
    # --force is required so migrations run non-interactively in production
    if php artisan migrate --force; then
        echo "Migrations completed successfully"
    else
        echo "Migration failed, attempting to cache resources anyway"
    fi
}
# Run migrations
run_migrations
# Start PHP-FPM in the background
echo "Starting PHP-FPM..."
php-fpm --nodaemonize &
PHP_FPM_PID=$!
# Start Nginx in the background
echo "Starting Nginx..."
nginx &
NGINX_PID=$!
# Wait for all background processes
wait $NGINX_PID $PHP_FPM_PID
Creating the ECS Task Definition
Create a task-definition.json
file in your project root:
{
  "family": "your-app-name",
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "your-app-container",
      "image": "your-account-id.dkr.ecr.your-region.amazonaws.com/your-app-name:latest",
      "essential": true,
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "your-app-logs",
          "awslogs-region": "your-region",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "healthCheck": {
        "command": [
          "CMD-SHELL",
          "curl -f http://localhost:80/up || exit 1"
        ],
        "interval": 30,
        "timeout": 5,
        "retries": 3,
        "startPeriod": 60
      }
    }
  ],
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::your-account-id:role/ecsTaskExecutionRole"
}
Be sure to expose a /up endpoint in your Laravel application that returns a 200 status code for health checks. Laravel 11 and later register a /up health route out of the box (via the health option in bootstrap/app.php); on older versions, it can be as simple as:
// In routes/web.php
Route::get('/up', function () {
    return response('OK', 200);
});
Setting Up GitHub Actions Workflow
Create a .github/workflows directory in your project root and add an ecs-deploy.yaml file:
name: Deploy Laravel to AWS ECS

on:
  push:
    branches:
      - main # or your production branch

jobs:
  deploy:
    name: Deploy to AWS ECS
    runs-on: ubuntu-latest

    steps:
      - name: Checkout Code
        uses: actions/checkout@v4

      - name: Set up AWS CLI
        uses: aws-actions/configure-aws-credentials@v2
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      - name: Check AWS CLI Configuration
        run: aws sts get-caller-identity

      - name: Flush DNS Cache
        run: sudo systemctl restart systemd-resolved || sudo systemd-resolve --flush-caches

      - name: Update Docker DNS settings
        run: |
          echo '{"dns":["8.8.8.8","8.8.4.4"]}' | sudo tee /etc/docker/daemon.json
          sudo systemctl restart docker

      - name: Retrieve Secrets from AWS Secrets Manager
        id: secrets
        run: |
          # Get secrets in JSON format
          SECRET_JSON=$(aws secretsmanager get-secret-value --secret-id ${{ secrets.AWS_SECRET_ARN }} --query SecretString --output text)
          # Convert JSON to KEY=VALUE format for .env file
          echo "$SECRET_JSON" | jq -r 'to_entries|map("\(.key)=\(.value|tostring)")|.[]' > .env
          # Verify the .env file (don't print sensitive info in logs)
          echo "Created .env file with the following keys:"
          grep -v "PASSWORD\|KEY" .env | cut -d= -f1

      - name: Login to Amazon ECR
        id: login-ecr
        run: |
          aws ecr get-login-password --region ${{ secrets.AWS_REGION }} | docker login --username AWS --password-stdin ${{ secrets.ECR_REPOSITORY }}

      - name: Build Docker Image
        run: |
          docker build -t ${{ secrets.ECR_REPOSITORY }}:latest .

      - name: Push Docker Image to ECR
        run: |
          docker push ${{ secrets.ECR_REPOSITORY }}:latest

      - name: Update ECS Task Definition
        id: task-def
        run: |
          # Update the image in the task-definition.json file from the project root
          jq '.containerDefinitions[0].image = "${{ secrets.ECR_REPOSITORY }}:latest"' task-definition.json > updated-task-definition.json
          # Register the new task definition
          NEW_REVISION=$(aws ecs register-task-definition --cli-input-json file://updated-task-definition.json)
          TASK_REVISION=$(echo $NEW_REVISION | jq -r '.taskDefinition.taskDefinitionArn')
          echo "TASK_REVISION=$TASK_REVISION" >> $GITHUB_ENV

      - name: Deploy New Task Definition
        run: |
          aws ecs update-service \
            --cluster ${{ secrets.ECS_CLUSTER }} \
            --service ${{ secrets.ECS_SERVICE }} \
            --task-definition $TASK_REVISION \
            --load-balancers "[{\"targetGroupArn\": \"${{ secrets.TARGET_GROUP_ARN }}\", \"containerName\": \"${{ secrets.CONTAINER_NAME }}\", \"containerPort\": 80}]"
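You can rehearse the image-update step locally before pushing, using a throwaway copy of the task definition. The file names and image URI below are examples; note that jq's `--arg` avoids quoting pitfalls when the image URI comes from a shell variable:

```shell
# A minimal stand-in for task-definition.json
cat > task-definition.sample.json <<'EOF'
{
  "family": "your-app-name",
  "containerDefinitions": [
    { "name": "your-app-container", "image": "old-image:previous" }
  ]
}
EOF

# Rewrite the image exactly as the workflow step does, without touching AWS
NEW_IMAGE="123456789012.dkr.ecr.us-east-1.amazonaws.com/your-app-name:latest"
jq --arg img "$NEW_IMAGE" '.containerDefinitions[0].image = $img' \
  task-definition.sample.json > updated-task-definition.sample.json

cat updated-task-definition.sample.json
```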
Add the following secrets to your GitHub repository:
AWS_ACCESS_KEY_ID: Your IAM user access key
AWS_SECRET_ACCESS_KEY: Your IAM user secret key
AWS_REGION: Your AWS region (e.g., ap-southeast-2)
ECR_REPOSITORY: Your ECR repository URI
ECS_CLUSTER: Your ECS cluster name
ECS_SERVICE: Your ECS service name
AWS_SECRET_ARN: The ARN of your secret in AWS Secrets Manager
TARGET_GROUP_ARN: The ARN of your target group
CONTAINER_NAME: The name of your container (should match the name in task-definition.json)
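If you use the GitHub CLI, the secrets above can be set from the terminal instead of the web UI. A sketch, run inside the repository clone (all values are placeholders):

```shell
# Sketch: set repository secrets with the GitHub CLI
gh secret set AWS_ACCESS_KEY_ID --body "AKIA-your-access-key"
gh secret set AWS_SECRET_ACCESS_KEY --body "your-secret-key"
gh secret set AWS_REGION --body "ap-southeast-2"
gh secret set ECR_REPOSITORY --body "account-id.dkr.ecr.ap-southeast-2.amazonaws.com/your-app-name"
gh secret set ECS_CLUSTER --body "your-app-cluster"
gh secret set ECS_SERVICE --body "your-app-service"
gh secret set AWS_SECRET_ARN --body "arn:aws:secretsmanager:region:account-id:secret:YourApp-ENV-xxxxxx"
gh secret set TARGET_GROUP_ARN --body "arn:aws:elasticloadbalancing:region:account-id:targetgroup/your-app-tg/xxxx"
gh secret set CONTAINER_NAME --body "your-app-container"
```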
Testing the CI/CD Pipeline
Now it's time to test your CI/CD pipeline:
Make a change to your Laravel application
Commit and push the changes to your main branch
Monitor the GitHub Actions workflow in the "Actions" tab of your repository
Once the workflow completes successfully, check your ECS service in the AWS console
Visit your load balancer's DNS name to see your application in action
Monitoring and Troubleshooting
To monitor your application and troubleshoot issues:
CloudWatch Logs: Check the logs from your ECS task
Navigate to CloudWatch in the AWS console
Go to "Log groups" and find your application's log group
ECS Task Status: Check the status of your ECS tasks
Navigate to ECS in the AWS console
Select your cluster and service
Check the "Tasks" tab to see running tasks and their status
Load Balancer Health: Check the health of your target group
Navigate to EC2 in the AWS console
Go to "Target Groups" and select your target group
Check the "Targets" tab to see the health status of your instances
GitHub Actions Workflow: Check the workflow logs for any issues
Navigate to the "Actions" tab in your GitHub repository
Select the latest workflow run
Check the logs for each step
Best Practices
Environment Variables: Keep sensitive information in AWS Secrets Manager
Versioning: Use Git tags or commit hashes for image versioning in production
Scaling: Configure auto-scaling for your ECS service based on CPU and memory usage
Monitoring: Set up CloudWatch alarms for important metrics
Security: Use IAM roles with least privilege principles
Backup: Regularly backup your database and critical data
Testing: Implement automated testing in your CI/CD pipeline
Rollback Plan: Have a plan for rolling back deployments if issues occur
Conclusion
You have now set up a complete CI/CD pipeline for your Laravel application using AWS ECS, ECR, and GitHub Actions. This pipeline automates the build, test, and deployment process, allowing you to deliver updates to your application with confidence.
By leveraging containerization with Docker and the scalability of AWS ECS, you've created a robust infrastructure that can handle your application's growth. The integration with AWS Secrets Manager ensures that your sensitive information is kept secure, while the Application Load Balancer provides high availability and distributes traffic efficiently.
Remember to regularly review and update your pipeline as your application evolves, and consider implementing additional features such as blue/green deployments or canary releases for even more reliability.
Happy coding!
Written by Sohag Hasan (notes.sohag.pro/author)