3. Deploying a new application from scratch


Demo: Creating a new cluster

In this lesson, you will learn how to create a new Amazon ECS cluster using the AWS Management Console. Follow the steps below to configure your cluster with Fargate and avoid launching EC2 instances.

Step 1: Choose a Cluster Template

Click on the Create Cluster button. You will see three template options:

  1. Networking only – Ideal for using Fargate.

  2. EC2 Linux + Networking – Use this if you plan to run EC2 instances with Linux.

  3. EC2 Windows + Networking – Use this if you plan to run EC2 instances with Windows.

Since the goal is to use Fargate, select the Networking only option. Then, provide a name for your cluster, such as cluster1. You also have the option to customize your VPC configuration by modifying the default CIDR block and subnets. If no customization is required, leave these settings as they are. When you click Create, the system will automatically provision a VPC with your specified CIDR block and two subnets.

The image shows an AWS interface for creating a cluster, offering three template options: "Networking only," "EC2 Linux + Networking," and "EC2 Windows + Networking." Each option lists the resources to be created, such as clusters, VPCs, and subnets.

Step 2: Configure Networking Settings

After selecting your cluster template, adjust any additional networking settings as needed. This includes configuring the VPC, CIDR block, and subnets. You may also add tags or enable CloudWatch Container Insights. Once you have reviewed the settings, click Create to continue.

The image shows an AWS interface for creating a cluster, with options to configure networking settings such as VPC, CIDR block, and subnets. There are also fields for adding tags and enabling CloudWatch Container Insights.
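
If you prefer the command line, a roughly equivalent call is shown below. Note that, unlike the console's "Networking only" wizard, this single call does not provision a VPC or subnets for you; it only creates the cluster.

aws ecs create-cluster --cluster-name cluster1

# Confirm the cluster is ACTIVE
aws ecs describe-clusters --clusters cluster1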

Note

After configuring your networking settings, ensure that the VPC and subnet settings meet your organizational requirements.

Step 3: View Cluster Details

Once the cluster is created, click View Cluster to access its details. This action will take you to the cluster details page where you can verify the successful creation and review the configuration details.

The image shows an AWS console screen displaying the launch status of an ECS cluster, with details about the VPC, subnets, and other cluster resources.

On the details page, you will see that cluster1 is now active. The dashboard provides an overview of the cluster status, task counts, and services. Although the cluster is active, note that there are no registered container instances or running tasks yet.

The image shows an AWS ECS (Elastic Container Service) dashboard for a cluster named "cluster1," displaying details like cluster status, task counts, and service information. The cluster is active, but there are no registered container instances or running tasks.

Warning

Ensure you have reviewed all configuration parameters and networking settings before finalizing your cluster creation to avoid configuration issues later.

Demo: Creating a task definition

After setting up your cluster, the next step is to create a task definition that serves as the blueprint for your application. In this guide, we will walk through defining a new task, configuring container settings, and launching the task as a service on AWS Fargate.

Step 1: Create a New Task Definition

Navigate to the Task Definitions section in your AWS ECS console and select Create new Task Definition. Since the deployment target is Fargate, choose Fargate as the launch type. Provide a task definition name (for example, "ECS-Project1") and select the pre-created role.

Tip

If you have not run the initial quick start wizard, make sure to do so in order to establish the necessary permissions for ECS to manage underlying resources.

Next, choose Linux as the operating system and retain the same Execution IAM role. Then, specify the task size based on your application’s CPU and memory requirements. For this demo, select the smallest configuration available.

The image shows a configuration screen for setting up an AWS ECS task, including options for network mode, operating system family, task execution IAM role, and task size.

Step 2: Add a Container Definition

Add a container definition by entering a container name (for instance, "node_app_image") and specifying the container image you used previously. Configure a health check command so ECS can verify the container is serving traffic (the application listens on port 3000, so you may want to point curl at http://localhost:3000/ rather than the default port 80):

CMD-SHELL, curl -f http://localhost/ || exit 1

Set up port mapping to expose port 3000. Although you can modify additional configurations, the defaults are sufficient for this demonstration. Click Add to include the container, then Create to finalize your task definition.

After creation, choose View Task Definition to review the blueprint. Keep in mind that this task definition outlines how your task will run but does not instantiate any tasks.
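
For reference, a minimal CLI sketch of the same task definition is shown below. The role ARN and account ID are placeholders, and the health check mirrors the one entered above (again, consider curl -f http://localhost:3000/ since the app listens on port 3000).

cat > taskdef.json <<'EOF'
{
  "family": "ECS-Project1",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "node_app_image",
      "image": "kodekloud/ecs-project1",
      "essential": true,
      "portMappings": [{ "containerPort": 3000 }],
      "healthCheck": {
        "command": ["CMD-SHELL", "curl -f http://localhost/ || exit 1"]
      }
    }
  ]
}
EOF

aws ecs register-task-definition --cli-input-json file://taskdef.json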

Step 3: Launch the Task Definition as a Service

To run an instance of your task definition, navigate to the Services section within your cluster. Click on your cluster (for example, "cluster1") and go to the Services tab. When creating a service, use the following settings (a CLI sketch of the equivalent call follows the screenshot below):

  • Launch type: Fargate

  • Operating system: Linux

  • Task Definition: Select the task definition you created (e.g., "ECS-Project1") and choose the latest revision.

  • Cluster: Use the default cluster.

  • Service Name: Enter a name such as "project1-service".

  • Number of Tasks: Specify the desired instances (for example, select two to create two task instances).

The image shows an AWS management console screen for configuring an ECS service, including options for operating system, task definition, cluster, and deployment settings.
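
A rough CLI equivalent of this service creation is sketched below; the subnet and security group IDs are placeholders for the ones in your ECS VPC.

aws ecs create-service --cluster cluster1 --service-name project1-service \
  --task-definition ECS-Project1 --desired-count 2 --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-aaaa1111,subnet-bbbb2222],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}"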

Step 4: Configure Networking and Security

Proceed by selecting the appropriate VPC (the one created specifically for ECS) along with two subnets. Configure your security groups to control traffic for the service.

The image shows a configuration screen for setting up a network in AWS, specifically for creating a service with options for VPC, subnets, and security groups.

In the security group settings, you can edit an existing group or create a new one. For instance, create a new security group called "project." Although the default settings might allow inbound traffic on port 80, change the rule to allow custom TCP traffic on port 3000 from any source.

The image shows a configuration screen for setting up security groups in AWS, with options to create or select a security group and define inbound rules, such as allowing TCP traffic on port 3000 from any source.

Click Save to apply your changes and leave the remaining settings as default. When prompted to add a load balancer, select No thanks since load balancing will be covered in a later lesson.

Step 5: Configure Auto Scaling and Create the Service

In the subsequent step, you have the option to configure auto scaling for the service. For this demonstration, disable auto scaling. Review all configurations carefully, then click Create Service.

The image shows an AWS interface for creating a service, specifically the step for setting auto-scaling options. It offers choices to either not adjust the service's desired count or configure auto-scaling.

The image shows an AWS console screen for creating a service, detailing configurations such as cluster, launch type, operating system, and network settings. It includes options for configuring auto-scaling and reviewing service parameters.

Step 6: Review and Monitor Your Service

Once the service is created, select View Service. Initially, you might see no running tasks; however, a refresh will reveal that provisioning has begun based on the number of tasks selected (in this example, two).

The image shows an AWS ECS console displaying details of a service named "project1-service" with tasks in provisioning status. It includes information about the cluster, task definition, and launch type.

Each task is deployed as a separate instance of the same task definition, so each one receives its own public IP address. When you inspect a task, its status may initially display as Pending before transitioning to Running.

The image shows an AWS ECS console displaying details of a running task, including network information and container status. The task is part of a cluster named "cluster1" and is using the FARGATE launch type.

After the tasks are running, retrieve the public IP address from one of the tasks and test your application in a browser. You’ll observe that each task has a unique IP address.

The image shows an AWS ECS task details page, displaying information about a running task, including network settings and container status.

Scaling Consideration

For front-end applications, having a single IP or DNS name can simplify traffic management. In robust architectures, consider configuring a load balancer to distribute incoming requests evenly across tasks. This approach exposes a single endpoint while managing dynamic IP addresses behind the scenes.

Demo: Deploying task revisions

In this guide, you'll learn how to update your application, rebuild and push a new Docker image to Docker Hub, and deploy the updated image using Amazon ECS. Follow along to modernize your deployment process and ensure seamless application updates.

Making Application Changes

Begin by modifying your application. For this demonstration, we update the HTML file by adding extra exclamation points to the H1 tag. The updated HTML file now appears as follows:

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <link rel="stylesheet" href="css/style.css" />
    <title>Document</title>
</head>
<body>
    <h1>ECS Project 1!!!!</h1>
</body>
</html>

Next, build a new Docker image to incorporate your changes. Tag the image as "kodekloud/ecs-project1" (Docker Hub repository names are lowercase) and run the Docker build command. The console output may look similar to this:

user1 on user1 in ecs-project1 [!] is v1.0.0 via took 3s
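
For reference, the rebuild and push boil down to two commands:

docker build -t kodekloud/ecs-project1 .
docker push kodekloud/ecs-project1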

Pushing the Updated Image

Once the Docker image is built successfully, push it to Docker Hub. During the push, you may notice that many layers already exist. Finally, the output provides a digest confirming a successful push:

8cba2b7a5a3d: Layer already exists
2713d9662e94: Layer already exists
72140ff0db3: Layer already exists
93a6676ff6e4: Layer already exists
c77311ff502e: Layer already exists
ba9804f7abed: Layer already exists
a5186a09280a: Layer already exists
1e69438976e4: Layer already exists
47b6660a2b9b: Layer already exists
5cbc21d1985: Layer already exists
07b905e91599: Layer already exists
20833a96725e: Layer already exists
latest: digest: sha256:98216dd964fd5bb910fb23a527ed9e9d804b5cedaaa47fb45264cebe664006b size: 3261
user1 on user1 in ecs-project1 [!] is v1.0.0 via took 4s

Now that your image is updated on Docker Hub, it's time to inform ECS of the new changes so that it pulls the latest image for your service.

Updating the ECS Service

Refreshing your application immediately after pushing the updated image may still display the old version. This happens because ECS hasn't been notified of the change. To update your deployed service, follow these steps in the ECS Console:

  1. Navigate to Clusters and select your cluster.

  2. Choose the service you wish to update.

  3. Click on Update.

  4. Enable the Force New Deployment option.

Note

Forcing a new deployment instructs ECS to pull the latest image and replace the old tasks with new ones.
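
The same force deployment can be triggered from the CLI:

aws ecs update-service --cluster cluster1 --service project1-service --force-new-deployment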

The diagram below shows the AWS ECS Task Details page, which provides essential information about running tasks, including network settings and container statuses:

The image shows an AWS ECS task details page, displaying information about a running task, including network settings and container status.

Additionally, observe the following diagram that illustrates a cluster with an active service:

The image shows an AWS ECS console displaying details of a cluster named "cluster1," with an active service called "project1-service" running on Fargate.

After applying the force deployment, ECS pulls the updated image and redeploys the tasks. Review the ECS Service Configuration screen as shown in the following diagram:

The image shows an AWS ECS configuration screen for setting up a service with details like task definition, launch type, operating system, and cluster information.

If you also modify the task definition file, a new revision of the task definition is created. For example, after updating "ecs-project1-taskdef", the ECS console may display multiple revisions:

The image shows a web interface displaying task definitions for "ecs-project1-taskdef" with two active revisions listed.

Then, review the updated service configuration before finalizing the update:

The image shows an AWS ECS service configuration review screen, detailing settings like cluster, launch type, task definition, and network configuration. There is an option to update the service at the bottom.

After updating the service, ECS starts new tasks using the latest image while gradually shutting down the older tasks once the new ones pass all health checks. You can monitor this transition in the ECS Tasks view.

Verifying the Deployment

After the deployment, refresh the ECS service page and inspect one of the running tasks. Keep in mind that the task's IP address changes with each new deployment. The diagram below illustrates the network details and container status of a running task:

The image shows an Amazon ECS console displaying details of a running task, including network information and container status.

Once you confirm the update on port 3000, you'll notice the additional exclamation points in the H1 tag. However, be aware that changes in IP addresses could affect clients accessing the application directly.

The Role of a Load Balancer

Note

A load balancer addresses the potential issue of changing IP addresses by providing a consistent endpoint. It automatically routes traffic to the updated task IP addresses after each deployment, ensuring uninterrupted client connectivity.

By integrating a load balancer, you simplify deployment updates while maintaining reliable access to your application, regardless of changes in the infrastructure.


Demo: Deleting the application

In this guide, we will walk you through the process of deleting an application that is no longer needed. Later, we will cover deploying a more advanced application that integrates a database, a volume, and a load balancer.

Steps to Delete an Application

  1. Navigate to the service within your cluster and initiate its deletion.

  2. Proceed to the tasks section to verify that the deletion was successful.

  3. Confirm that the application has been completely removed from the system.
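
From the command line, the equivalent cleanup is a single call (a sketch, assuming the cluster and service names from the earlier demo):

# --force removes the service without first scaling its desired count to zero
aws ecs delete-service --cluster cluster1 --service project1-service --force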

Note

The cluster remains intact after the application deletion. This means you can reuse the same cluster to deploy other applications with additional features.

By following these steps, you can efficiently manage your resources and prepare your environment for future deployments.

Understanding the multi-container application

In this lesson, we explore an application architecture that leverages multiple containers to separate concerns effectively. The architecture consists of two containers:

  • An API container running a Node.js/Express application.

  • A MongoDB container using the default Mongo image.

This setup demonstrates how to configure inter-container communication using Docker Compose and how the API leverages environment variables to configure its connection to MongoDB.

Overview

The Node.js application integrates with MongoDB through Mongoose and implements a basic CRUD interface to manage notes.

Docker Compose Configuration

Below is an example Docker Compose configuration outlining the multi-container deployment. Two variants for the MongoDB volume configuration are provided based on your persistence needs.

API Container Setup

The API container is built from the project source and exposes port 3000. It uses environment variables for connecting to the MongoDB container, as illustrated by the following excerpt:

const express = require("express");
const mongoose = require("mongoose");
const cors = require("cors");
const Note = require("./models/noteModel");


const app = express();
app.use(cors());
app.use(express.json());


const mongoURL = `mongodb://${process.env.MONGO_USER}:${process.env.MONGO_PASSWORD}@${process.env.MONGO_IP}:${process.env.MONGO_PORT}/?authSource=admin`;
// Alternative connection string for local testing:
// const mongoURL = `mongodb://localhost:27017/?authSource=admin`;

Console Output Example

A sample console output upon running the application might appear as follows:

user1 on  user1 in ecs-project2 [!] is 📦 v1.0.0 via 🐳

Docker Compose Variants

Variant 1 – Using a Bind Mount

This variant uses a bind mount for the MongoDB data directory to link the container's /data/db to a local folder.

version: "3"
services:
  api:
    build: .
    image: kodekloud/ecs-project2
    environment:
      - MONGO_USER=mongo
      - MONGO_PASSWORD=password
      - MONGO_IP=mongo
      - MONGO_PORT=27017
    ports:
      - "3000:3000"
  mongo:
    image: mongo
    environment:
      - MONGO_INITDB_ROOT_USERNAME=mongo
      - MONGO_INITDB_ROOT_PASSWORD=password
    volumes:
      - ./data:/data/db

Variant 2 – Using a Named Volume

This variant uses a named volume (mongo_data) to manage the persistence of MongoDB data.

version: "3"
services:
  api:
    build: .
    image: kodekloud/ecs-project2
    environment:
      - MONGO_USER=mongo
      - MONGO_PASSWORD=password
      - MONGO_IP=mongo
      - MONGO_PORT=27017
    ports:
      - "3000:3000"
  mongo:
    image: mongo
    environment:
      - MONGO_INITDB_ROOT_USERNAME=mongo
      - MONGO_INITDB_ROOT_PASSWORD=password
    volumes:
      - mongo_data:/data/db


volumes:
  mongo_data:
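
With either variant, you can bring the stack up locally and exercise the API. This assumes Docker Compose v2 is installed:

docker compose up -d --build

# The notes collection starts out empty
curl http://localhost:3000/notes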

Important

Ensure that the environment variables for MongoDB (MONGO_USER, MONGO_PASSWORD, MONGO_IP, and MONGO_PORT) are correctly passed to the API container. This is crucial for establishing a successful connection.

API Endpoints

The API provides a set of endpoints for performing CRUD operations on a "notes" collection. Below are details of each endpoint with corresponding code examples.

HTTP Method | Endpoint    | Description
GET         | /notes      | Retrieve all notes from the database
GET         | /notes/:id  | Retrieve a single note by ID
POST        | /notes      | Create a new note entry
PATCH       | /notes/:id  | Update an existing note
DELETE      | /notes/:id  | Delete a specific note

GET /notes

Retrieves all notes stored in the database.

app.get("/notes", async (req, res) => {
  try {
    const notes = await Note.find();
    res.status(200).json({ notes });
  } catch (error) {
    res.status(500).json({ error: error.message });
  }
});

GET /notes/:id

Retrieves the details of a single note based on the provided ID. (The relevant code for this endpoint is implied by the overall CRUD structure.)

POST /notes

Creates a new note entry in the database.

app.post("/notes", async (req, res) => {
  try {
    const note = await Note.create(req.body);
    res.status(201).json({ note });
  } catch (e) {
    console.log(e);
    res.status(400).json({ status: "fail" });
  }
});

PATCH /notes/:id

Updates an existing note and returns the updated note if successful.

app.patch("/notes/:id", async (req, res) => {
  try {
    const note = await Note.findByIdAndUpdate(req.params.id, req.body, {
      new: true,
      runValidators: true,
    });
    if (!note) {
      return res.status(404).json({ message: "Note not found" });
    }
    res.status(200).json({ note });
  } catch (e) {
    res.status(400).json({ error: e.message });
  }
});

DELETE /notes/:id

Deletes a note corresponding to a given ID.

app.delete("/notes/:id", async (req, res) => {
  try {
    const note = await Note.findByIdAndDelete(req.params.id);
    if (!note) {
      return res.status(404).json({ message: "Note not found" });
    }
    res.status(200).json({ status: "success" });
  } catch (e) {
    console.log(e);
    res.status(400).json({ error: e.message });
  }
});

A sample console output after executing these operations might be:

user1 on  user1 in ecs-project2 [!] is 📦 v1.0.0 via 🐳

Connecting to MongoDB

The application uses Mongoose to establish a connection with the MongoDB container at startup. The connection relies on the environment variables defined earlier. Once the connection is successful, the server begins listening on port 3000.

mongoose
  .connect(mongoURL, {
    useNewUrlParser: true,
    useUnifiedTopology: true,
  })
  .then(() => {
    console.log("Successfully connected to DB");
    app.listen(3000, () =>
      console.log("Server is listening on PORT 3000")
    );
  })
  .catch((e) => {
    console.log(e);
    process.exit(1);
  });

Reminder

Make sure your environment variables for the MongoDB connection are properly set in your deployment configuration to avoid connectivity issues.

Demo: Creating a security group

In this guide, we will walk you through creating a security group for your Amazon Elastic Container Service (AWS ECS) application. A security group acts as a virtual firewall that controls inbound and outbound traffic for your resources. While this example uses a very permissive rule, remember to allow only the necessary traffic in production environments.

Steps to Create a Security Group

  1. Navigate to the EC2 Dashboard and locate the Security Groups section.

  2. Open the Security Groups page in a new browser tab to simplify navigation.

  3. Click on Create Security Group to begin the configuration.

  4. Set the security group name to "ECS-SG" and add a description, for example, "ECS security group".

  5. Add an inbound rule that allows all traffic from any IP.

    Warning

    Allowing all traffic is not recommended for production. It is best to define explicit rules that permit only the required traffic.

  6. Ensure that the security group is associated with the correct VPC—the one hosting your ECS cluster.

  7. Review your settings and click Create Security Group to complete the process.

By following these steps, you have successfully created a basic security group for your AWS ECS application. You can later update these rules to restrict access further as your application requirements evolve.
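
For repeatable setups, a rough CLI equivalent is sketched below. The VPC ID is a placeholder, the group ID passed to the second command is the one returned by the first, and protocol -1 means all traffic, which, as noted, is too permissive for production.

aws ec2 create-security-group --group-name ECS-SG \
  --description "ECS security group" --vpc-id vpc-0123456789abcdef0

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol -1 --cidr 0.0.0.0/0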

Note

For more in-depth information on AWS ECS and best practices for security configurations, consider exploring the AWS ECS Documentation.

Demo: Creating the multi-container application

In this guide, we demonstrate how to build and deploy a multi-container application on Amazon ECS using Fargate. We will create a task definition, configure containers for MongoDB and an Express-based web API, mount an Elastic File System (EFS) volume for persistent MongoDB data, and launch a service behind an Application Load Balancer (ALB) for efficient traffic routing.


1. Creating the Task Definition

Begin by navigating to your ECS dashboard and selecting Task Definitions. Create a new task definition by choosing Fargate as the launch type. Give it a meaningful name (for example, "ECS-Project2"), assign the ECS task execution role, and select the smallest available memory option. After these steps, proceed to add your containers.

The image shows an AWS interface for creating a new task definition, where users can select launch type compatibility options: Fargate, EC2, or External.

The image shows an AWS management console screen for configuring an ECS task, including settings for task role, network mode, and task size. Options for task execution IAM role and memory/CPU allocation are also visible.


2. Configuring the MongoDB Container

The first container will run the MongoDB database. Configure it with the following steps:

  1. Name the container "Mongo" and select the default Mongo image from Docker Hub.

  2. Set the port mapping to 27017 (MongoDB’s default port).

  3. Define the necessary environment variables for MongoDB initialization.

The image shows a configuration screen for adding a container, with fields for container name, image, memory limits, and port mappings. It also includes advanced container configuration options like health checks.

Health Check

You can add an optional health check for MongoDB. For example:

CMD-SHELL, curl -f http://localhost/ || exit 1

Next, include the MongoDB initialization environment variables, taken from the mongo service in your docker-compose configuration:

mongo:
  image: mongo
  environment:
    - MONGO_INITDB_ROOT_USERNAME=mongo
    - MONGO_INITDB_ROOT_PASSWORD=password
  volumes:
    - mongo-db:/data/db

volumes:
  mongo-db:

Security Notice

Using "password" for MongoDB credentials is not secure and is used here solely for demonstration purposes.

The image shows a user interface for adding a container, with fields for entry point, command, working directory, and environment variables, including MongoDB credentials.

After confirming the settings, close the MongoDB container configuration.


3. Configuring the Web API Container

Next, configure the second container to host your Express-based web API:

  1. Add a new container (for example, named "web API").

  2. Use the "kodekloud/ecs-project2" image from Docker Hub.

  3. Set the container to listen on port 3000.

Ensure the environment variables match those from the MongoDB container for seamless connectivity:

version: "3"
services:
  api:
    build: .
    image: kodekloud/ecs-project2
    environment:
      - MONGO_USER=mongo
      - MONGO_PASSWORD=password
      - MONGO_IP=mongo
      - MONGO_PORT=27017
    ports:
      - "3000:3000"
  mongo:
    image: mongo
    environment:
      - MONGO_INITDB_ROOT_USERNAME=mongo
      - MONGO_INITDB_ROOT_PASSWORD=password
    volumes:
      - mongo-db:/data/db

Although Docker Compose provides built-in DNS resolution between services (which is why the compose file can use the hostname mongo), containers in a Fargate task do not get that service discovery. They do, however, share the task's network namespace, so set MONGO_IP to localhost so the web API can reach MongoDB within the same task.

A typical health check command for the web API may look like this:

HEALTHCHECK
  Command: CMD-SHELL, curl -f http://localhost/ || exit 1
  Interval: second(s)

The image shows a configuration screen for adding a container, with fields for CPU units, entry point, command, and environment variables related to a MongoDB setup.

While ECS does offer options to define container dependencies (ensuring MongoDB starts before the web API), this example does not include that configuration.

The image shows a configuration interface for adding a container, with fields for setting environment variables, startup dependency ordering, container timeouts, and network settings. It includes specific entries for a MongoDB container setup.


4. Defining a Volume for MongoDB

To ensure data persistence for MongoDB, attach an EFS volume to the MongoDB container:

  1. In the ECS task definition, locate the volume section and add a new volume.

  2. Name the volume "mongo-db" and choose AWS Elastic File System (EFS) as the volume type.

  3. If no EFS is available, click the link to create a new one.

The image shows a dialog box for adding a volume in AWS, with fields for configuring options like volume type, file system ID, and access point ID.

When creating the EFS:

  • Name it "mongo-db".

  • Use the same VPC as your ECS cluster.

  • Select the proper subnets and update the security group to permit ECS container communication.

The image shows an AWS management console screen for configuring file system settings, including options for automatic backups, lifecycle management, performance mode, throughput mode, encryption, and tagging.

Create a dedicated security group for EFS (e.g., "EFS security group") and modify its inbound rules to allow NFS traffic (TCP port 2049) only from your ECS security group.

The image shows an AWS EC2 security group details page, displaying information about inbound rules, including a rule allowing all traffic from any source.

After updating the settings and refreshing the EFS dialog, remove any default security groups and proceed with volume creation.

The image shows a dialog box for adding a volume in an AWS interface, with options for configuring volume type, file system ID, access point ID, and other settings.
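
A CLI sketch of the same EFS setup is shown below; all IDs are placeholders, and create-mount-target should be repeated for each subnet used by the cluster.

# Dedicated security group for EFS, allowing NFS (TCP 2049) only from the ECS security group
aws ec2 create-security-group --group-name EFS-SG \
  --description "EFS security group" --vpc-id vpc-0123456789abcdef0

aws ec2 authorize-security-group-ingress --group-id sg-0eeeeeeeeeeeeeeee \
  --protocol tcp --port 2049 --source-group sg-0123456789abcdef0

# Create the file system and one mount target per subnet
aws efs create-file-system --tags Key=Name,Value=mongo-db

aws efs create-mount-target --file-system-id fs-0123456789abcdef0 \
  --subnet-id subnet-aaaa1111 --security-groups sg-0eeeeeeeeeeeeeeee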


5. Mounting the EFS Volume in the MongoDB Container

Associate the EFS volume with your MongoDB container by performing the following:

  1. Edit the MongoDB container’s settings.

  2. In the "Storage and Logging" section, select Mount Points.

  3. Choose the "mongo-db" volume and set the container path to /data/db as recommended by MongoDB.

Example container settings snippet:

Command: echo,hello world
Environment variables:
  MONGO_INITDB_ROOT_PASSWORD: password
  MONGO_INITDB_ROOT_USERNAME: mongo

Click Update and then create your task definition. You may notice the task definition revision incrementing (e.g., revision 4).

The image shows an AWS interface for configuring log router integration and volume settings, including options for FireLens integration and EFS volume details.
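
Pulling steps 2 through 5 together, the registered task definition is roughly equivalent to the JSON below. The account ID, role ARN, and EFS file system ID are placeholders; MONGO_IP is set to localhost because, as noted earlier, containers in the same Fargate task share a network namespace. EFS volumes on Fargate require platform version 1.4.0 or later.

cat > taskdef.json <<'EOF'
{
  "family": "ECS-Project2",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "volumes": [
    {
      "name": "mongo-db",
      "efsVolumeConfiguration": { "fileSystemId": "fs-0123456789abcdef0" }
    }
  ],
  "containerDefinitions": [
    {
      "name": "mongo",
      "image": "mongo",
      "essential": true,
      "portMappings": [{ "containerPort": 27017 }],
      "environment": [
        { "name": "MONGO_INITDB_ROOT_USERNAME", "value": "mongo" },
        { "name": "MONGO_INITDB_ROOT_PASSWORD", "value": "password" }
      ],
      "mountPoints": [{ "sourceVolume": "mongo-db", "containerPath": "/data/db" }]
    },
    {
      "name": "notes-api",
      "image": "kodekloud/ecs-project2",
      "essential": true,
      "portMappings": [{ "containerPort": 3000 }],
      "environment": [
        { "name": "MONGO_USER", "value": "mongo" },
        { "name": "MONGO_PASSWORD", "value": "password" },
        { "name": "MONGO_IP", "value": "localhost" },
        { "name": "MONGO_PORT", "value": "27017" }
      ]
    }
  ]
}
EOF

aws ecs register-task-definition --cli-input-json file://taskdef.json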


6. Creating the ECS Service and Configuring the Load Balancer

Now that the task definition is complete, create a new ECS service:

  1. Navigate to your ECS cluster (e.g., "cluster1") and create a service.

  2. Select Fargate as the launch type and Linux as the operating system.

  3. Choose the newly created task definition (e.g., "ECS-Project2") and assign a service name (for instance, "notes-app-service").

  4. Set the number of tasks to 1.

  5. Select the appropriate VPC, subnets, and ECS security group (e.g., "ECS-SG").

The image shows a configuration screen for setting up security groups in AWS, displaying options for selecting existing security groups and their inbound rules.

Configuring the Application Load Balancer

To distribute traffic and enhance scalability, set up an ALB:

  1. Click the link to open the ALB configuration in a new tab.

  2. Choose the Application Load Balancer type, set it as internet-facing, and select IPv4.

  3. Map the ALB to the appropriate VPC.

  4. Create a new security group for the ALB (e.g., "LB-SG").

The image shows a webpage from AWS describing three types of load balancers: Application Load Balancer, Network Load Balancer, and Gateway Load Balancer, each with brief descriptions and icons.

Configure the ALB with the following details:

  • Name it appropriately (e.g., "notes-lb").

  • Set it as internet-facing with IPv4.

  • Configure inbound rules to allow HTTP traffic on port 80. Although the container listens on port 3000, the ALB will forward requests to port 3000 using listener rules.

The image shows a configuration page for creating an Application Load Balancer on AWS, with fields for load balancer name, scheme, IP address type, and network mapping.

If any default security groups cause issues, create a custom ALB security group with the proper rules.

The image shows an AWS EC2 security group details page, displaying information about inbound rules, including an HTTP rule allowing traffic on port 80 from any IP address.

After confirming your ALB settings, create a target group by following these steps:

  • Select IP addresses as the target type.

  • Name the target group (e.g., "notes-tg1") and choose the correct VPC.

  • Modify the health check configuration: update the health check path to /notes and override the health check port to 3000 if required.

The image shows an AWS console interface for specifying group details in a target group setup, with options for choosing a target type such as instances, IP addresses, Lambda functions, or an application load balancer.

The image shows an AWS console interface for configuring a target group, including settings for IP address type, VPC, protocol version, and health checks. The health check path is set to "/healthcheck".

Return to the load balancer configuration and associate the new target group (notes-tg1) to the listener on port 80, ensuring traffic is forwarded to port 3000.

The image shows a configuration screen for setting up a listener and routing rules in an AWS service, with options for security groups, protocols, and add-on services.
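
A rough CLI equivalent of the load balancer, target group, and listener setup is sketched below; the subnet, security group, and VPC IDs, as well as the ARNs, are placeholders.

aws elbv2 create-load-balancer --name notes-lb --type application \
  --scheme internet-facing --ip-address-type ipv4 \
  --subnets subnet-aaaa1111 subnet-bbbb2222 --security-groups sg-0bbbbbbbbbbbbbbbb

# Target type "ip" is required for Fargate tasks; health checks hit /notes on port 3000
aws elbv2 create-target-group --name notes-tg1 --protocol HTTP --port 3000 \
  --target-type ip --vpc-id vpc-0123456789abcdef0 \
  --health-check-path /notes --health-check-port 3000

# Listener on port 80 forwards traffic to the target group
aws elbv2 create-listener --load-balancer-arn <notes-lb-arn> --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<notes-tg1-arn>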

Review all final settings, disable auto scaling if not necessary, and create the ECS service.

The image shows a configuration screen for setting up a load balancer in AWS, detailing settings like load balancer name, production listener port, and health check path.
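
Creating the service from the CLI with the load balancer attached might look like the following; IDs and ARNs are again placeholders.

aws ecs create-service --cluster cluster1 --service-name notes-app-service \
  --task-definition ECS-Project2 --desired-count 1 --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-aaaa1111,subnet-bbbb2222],securityGroups=[sg-0123456789abcdef0],assignPublicIp=ENABLED}" \
  --load-balancers targetGroupArn=<notes-tg1-arn>,containerName=notes-api,containerPort=3000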

After the service is created, verify that at least one task transitions from provisioning to running. You can check the status by clicking the running task and confirming that both the "mongo" and "notes-api" containers operate as expected.

The image shows an AWS ECS service launch status page indicating that a service named "notes-app-service" has been successfully created. It also mentions additional integrations like Code Pipeline for setting up a CI/CD process.

The image shows an Amazon ECS (Elastic Container Service) dashboard displaying details of a running task, including cluster information, network settings, and container statuses. Two containers, "mongo" and "notes-api," are listed as running.


7. Testing the Application

Once your service is running, test your application using tools like Postman:

  1. Send a GET request to the task's public IP on port 3000 at the /notes endpoint. The initial response should be:

     {
       "notes": []
     }
    
  2. Send a POST request with JSON data, such as:

     {
       "title": "second_note",
       "body": "remember to do dishes!!!!"
     }
    
  3. A subsequent GET request to /notes should display the newly created note:

     {
       "notes": [
         {
           "_id": "63211a3c034fdd55dec212834",
           "title": "second note",
           "body": "remember to do dishes!!!!",
           "__v": 0
         }
       ]
     }
    

Since you configured a load balancer, test the application using the ALB’s DNS name. For example, navigate to:

The image shows an AWS EC2 dashboard displaying details of a load balancer named "notes-lb," which is active and configured as an internet-facing application.

Ensure you do not include port 3000 in the URL because the ALB listens on port 80 and forwards the requests to port 3000 through the target group. For example:

GET http://notes-lb-1123119306.us-east-1.elb.amazonaws.com/notes


{
  "notes": []
}

After a POST request adds a note, a subsequent GET should display the updated results.
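
For example, with curl against the ALB endpoint:

curl http://notes-lb-1123119306.us-east-1.elb.amazonaws.com/notes

curl -X POST http://notes-lb-1123119306.us-east-1.elb.amazonaws.com/notes \
  -H "Content-Type: application/json" \
  -d '{"title":"second_note","body":"remember to do dishes!!!!"}'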


8. Enhancing Security Group Rules

For improved security, update your ECS security group rules:

  1. Instead of allowing all IP addresses on any port, remove the overly permissive rule.

  2. Add a custom TCP rule on port 3000 with the source set to your load balancer's security group. This ensures that only traffic routed through the ALB reaches your ECS containers.

The image shows an AWS Security Group configuration page, detailing inbound rules for a specific security group with all traffic allowed from any source.
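
A CLI sketch of this tightening step (the group IDs are placeholders for the ECS and load balancer security groups):

# Remove the allow-all rule, then allow port 3000 only from the ALB's security group
aws ec2 revoke-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol -1 --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 3000 --source-group sg-0bbbbbbbbbbbbbbbb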


In this article, we successfully deployed a multi-container application on Amazon ECS using Fargate. This deployment includes a MongoDB database with persistent EFS storage, an Express-based web API, an Application Load Balancer for efficient traffic distribution, and tightened security through proper security group configurations.

For more detailed information on ECS and related services, refer to the official AWS ECS documentation.

Happy deploying!

Conclusion

In this lesson, we covered the essential steps to deploy an application on Amazon Web Services (AWS) using Elastic Container Service (AWS ECS). This guide offers a comprehensive starting point for deploying containerized applications on AWS, ensuring you have a clear path to success.

Insight

If AWS ECS isn't your preferred method for container deployment, consider exploring AWS EKS for a managed Kubernetes solution that might better suit your needs.

We value your feedback! Please share your thoughts in the comments below and let us know if ECS is your go-to solution or if you prefer other alternatives.

Thank you for joining this lesson. We look forward to seeing you in the next session!
