How to use ECR and ECS to deploy your application

Brief Intro
Elastic Container Registry (ECR): ECR is a fully managed Docker container registry service provided by Amazon Web Services (AWS). ECR allows you to store, manage, and deploy Docker container images, making it easier for developers to work with containerized applications.
Key features of ECR include:
Private repositories: You can create private repositories to store your container images securely.
Integration with ECS and EKS: ECR is fully integrated with Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS), making it seamless to deploy containers from ECR to these services.
Scalability: As a managed service, ECR scales automatically to accommodate the growth of your container image repository.
Elastic Container Service (ECS): ECS is a fully managed container orchestration service provided by Amazon Web Services (AWS) that allows you to easily run, manage, and scale Docker containers on AWS infrastructure.
ECS is a service that runs and manages Docker containers for you on AWS.
You give it: a Docker image (from ECR or Docker Hub) + instructions (how many copies, what ports, etc.).
It gives you: running containers, load balancing, scaling, and health checks — without you having to manually start containers on servers.
Two ways ECS can run your containers:
ECS with EC2 – You manage the EC2 instances (servers) where containers run.
ECS with Fargate – AWS runs containers without you touching any servers (serverless for containers).
In this blog we are going to use Fargate to start our containers using ECS.
Step 1 : Containerize the app
Start by creating a Dockerfile in the root folder of your app, so that you can build an image from it and push it to ECR.
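If your app is a Node.js server listening on port 3000 (like the sample project below), a minimal Dockerfile might look like this. The base image, entry file (index.js), and port here are assumptions; adjust them to match your project:

```dockerfile
# Minimal Dockerfile for a Node.js app (assumed entry point: index.js, port 3000)
FROM node:18-alpine
WORKDIR /app
# Install dependencies first so Docker can cache this layer
COPY package*.json ./
RUN npm install --production
# Copy the rest of the source code
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
```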
For reference, you can clone the sample project from the GitHub repo that I created to get started:
Command to build a Docker image (optional):
docker build -t node-app .
You can also start a container from this image locally (optional):
docker run -p 3000:3000 node-app
Step 2 : Create an ECR Repository
Go to AWS ECR console.
Create a new private repository.
Give your repository a name and select whether you want it to be mutable or immutable.
Click on create to create the repository.
Step 3 : Create an IAM User for CLI Access
IAM stands for Identity and Access Management. It is an AWS service that helps you manage who can access your AWS resources and what they can do.
Think of IAM Users as "accounts" for people or machines. You can give them specific permissions (like read-only access to S3, or full admin). IAM encourages security through least privilege: grant only the access needed, nothing extra.
Go to the IAM console on AWS.
In the sidebar, click on Users.
Click on create user.
Specify the user details.
You only want your local machine's CLI to talk to AWS, so skip the checkbox that grants access to the AWS Management Console.
Click next.
In the Set permissions section, select the Attach policies directly option to attach a permission policy to the user you are creating.
In the permission policies list, check AmazonEC2ContainerRegistryFullAccess. This gives the user access only to push and pull images from ECR.
Click next and create user.
Step 4 : Create Authentication Credentials for the User
In this step you will create an Access Key for the user.
This access key lets your AWS CLI or SDK authenticate as your IAM user.
Select the IAM user that you created earlier.
Click on the Security credentials tab.
Click create access key.
AWS will ask for the use case:
Choose Command Line Interface (CLI).
Acknowledge the warning (never share keys, rotate if compromised).
After creation, AWS will show you the Access Key ID and Secret Access Key.
Download the .csv file or copy the values securely.
The Secret Access Key is shown only once. If you lose it, you’ll need to create a new one.
Step 5 : Installation of the AWS CLI on Your Local Machine
The AWS CLI (Command Line Interface) is a tool that lets us interact with AWS services directly from our terminal, without always going to the AWS Console.
In this case it is useful because it lets you:
Authenticate with AWS
- Using the IAM user’s Access Key + Secret Key, our local machine can talk securely to AWS.
Push Docker images to ECR
Before ECS can run our container, we need to build the Docker image locally and push it to ECR (Elastic Container Registry).
The AWS CLI is what authenticates Docker with ECR so the push works.
Installation Guide : https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html
If you’ve reached this point, a fair question may be on your mind: why are we even doing all of this? Why create IAM users, access keys, and install the AWS CLI on our local machine?
Because ECS and ECR are AWS services, they need a secure way to trust our local machine. Instead of using the Root user (which has unlimited access and is unsafe), we create a dedicated IAM user.
IAM gives us a dedicated user identity with just the right permissions. By assigning permissions to the IAM user, we control exactly what it can do (for example: only work with ECR and ECS).
Access keys are like a bridge between your local machine and AWS.
The AWS CLI is the tool that uses these keys to actually talk to AWS services (like pushing images to ECR).
Step 6 : Configuring the AWS CLI
Prerequisite: You have installed the AWS CLI on your local machine.
Run the command:
aws configure
This will prompt you for the Access Key ID and Secret Access Key that you created in the IAM user's security credentials. Enter the correct values.
For the default region, enter the region where you created your ECR repository; the output format can be left empty.
Now the AWS CLI on your local machine is authenticated as your AWS IAM user.
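To confirm the credentials work, you can ask AWS who you are. This is a standard AWS CLI command and needs the credentials you just configured:

```shell
# Should print the account ID and ARN of your IAM user
aws sts get-caller-identity
```

If this prints your IAM user's ARN rather than an error, the CLI is set up correctly.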
Step 7 : Building and Pushing the Image to Your Private ECR Repository
Authenticate Docker with ECR
Go to the ECR console of AWS. Select the repository that you created.
In the top right corner, click the View push commands button.
A dialog box will appear with all the commands you need to run in your terminal. These commands include logging in to ECR, tagging your Docker image, and pushing it to the repository. Just copy those commands and run them from the root of your project.
What will these commands do?
Login → Let Docker talk to ECR.
Tag → Point your image at the right ECR repo.
Push → Upload it so ECS can use it.
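For reference, the push commands typically look like the following. Here `<account_id>`, `<region>`, and the repository name my-app-repo are placeholders, so use the exact commands the console shows you:

```shell
# 1. Login: authenticate Docker with your ECR registry
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account_id>.dkr.ecr.<region>.amazonaws.com

# 2. Build the image (if you haven't already)
docker build -t my-app-repo .

# 3. Tag: point the image at your ECR repository
docker tag my-app-repo:latest <account_id>.dkr.ecr.<region>.amazonaws.com/my-app-repo:latest

# 4. Push: upload the image so ECS can use it
docker push <account_id>.dkr.ecr.<region>.amazonaws.com/my-app-repo:latest
```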
At this point you have successfully created an ECR repository and, using the IAM user's access, pushed your application's image to it. Now, before we deploy this container image on Fargate using ECS, let’s take a step back and understand the architecture and some important definitions. This will help us see the bigger picture: how ECR, ECS, and other AWS components connect together, instead of just running commands blindly.
Elastic Container Service (ECS) Architecture
A Cluster is a logical grouping of resources used by ECS to manage and run containerized applications. A cluster is essentially a pool of computing resources (e.g., EC2 instances or Fargate) where your tasks (containers) run.
EC2 Cluster: When you use EC2 launch type, the cluster contains EC2 instances that are registered to ECS.
Fargate Cluster: When you use Fargate, the cluster is virtualized, and ECS manages the infrastructure for you (you don’t manage the underlying EC2 instances).
A Task is a running instance of a containerized application in ECS. It’s the basic unit of work in ECS. A task is defined by a Task Definition, which specifies the Docker image to use, resource requirements (like CPU and memory), networking configurations, environment variables, and more.
A Service is a higher-level abstraction on top of tasks in ECS. A service allows you to maintain and scale a specified number of task instances running and ensures that the desired number of tasks are continuously running.
ECS Architecture Made Easy
1. Cluster = The “kitchen”
It’s the place where all your containers will run.
In EC2 mode → you own the kitchen (servers).
In Fargate mode → AWS rents you the kitchen when needed.
Example:
A restaurant has 3 kitchens (cluster) to cook different dishes.
2. Task Definition = The “recipe card”
A written plan for how to run your container.
Includes: which Docker image to use, how much CPU/memory, and any settings.
Example:
Recipe card says: “Take my-pizza-image from ECR, use 1 chef, 500g of dough, cook at 250°C.”
3. Task = The “chef cooking a dish”
A running copy of your task definition.
One task = one chef using the recipe to make food (container).
Example:
If you want 3 pizzas at once, you run 3 tasks using the same recipe.
4. Service = The “restaurant manager”
Makes sure the correct number of tasks are always running.
If one chef quits (task stops), the manager hires another one instantly.
Example:
Manager says: “We must always have 3 pizzas cooking — no matter what!”
Flow Example
You store image in ECR (e.g., my-app:latest).
Create a Task Definition in ECS that uses my-app:latest.
Create a Service that says “run 3 copies of this task.”
Cluster launches these tasks (in EC2 or Fargate).
Load balancer (optional) routes traffic to the tasks.
Step 8 : Create a Cluster
Go to the ECS console of AWS
Select Clusters from the dashboard and click Create cluster.
Give your cluster a name.
Select AWS Fargate as the infrastructure if you don’t want to manage servers.
Keep the other settings as they are and click Create cluster.
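The same cluster can also be created from the AWS CLI you configured earlier; the cluster name below is just an example:

```shell
# Creates an empty ECS cluster (Fargate capacity is used when you launch tasks in it)
aws ecs create-cluster --cluster-name my-app-cluster
```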
Step 9 : Create a Task Definition
In the left menu, click Task definitions → Create new task definition.
Give the task definition a family name.
Select AWS Fargate as the launch type; for most modern use cases, Fargate is the right choice.
You can keep the other settings as they are.
Scroll down to the Container - 1 section.
Give the container a name.
Image URI: Enter the ECR image URI you pushed earlier, e.g.:
<account_id>.dkr.ecr.<region>.amazonaws.com/my-app-repo:latest
Port mappings: map container port 3000
You can configure the resource allocation as per your project needs.
Click on create.
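Under the hood, a task definition is just a JSON document. A minimal Fargate sketch looks roughly like the following (the family name, container name, and role ARN are illustrative placeholders), and an equivalent file could be registered with aws ecs register-task-definition --cli-input-json file://task-def.json:

```json
{
  "family": "my-app-task",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "executionRoleArn": "arn:aws:iam::<account_id>:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "my-app-container",
      "image": "<account_id>.dkr.ecr.<region>.amazonaws.com/my-app-repo:latest",
      "essential": true,
      "portMappings": [{ "containerPort": 3000, "protocol": "tcp" }]
    }
  ]
}
```

The executionRoleArn is what allows ECS to pull your image from ECR on your behalf.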
Step 10 : Create a Service
Go to the Clusters in ECS sidebar.
Select your cluster.
Inside the cluster, click Create → Create service.
Configure Service:
Task Definition: Select the one you created earlier.
Revision: Choose latest.
Service name: e.g., my-app-service.
Configure Environment:
Select Capacity Provider Strategy.
Capacity Provider: Choose FARGATE if you are only using Fargate to run your tasks.
Configure Deployment configuration:
Scheduling strategy: Replica.
Desired tasks: Enter the number of tasks you want this service to keep running, e.g., 5.
Configure other settings as per your need.
Creating a Load Balancer and target group:
If it’s a web app for production, attach an Application Load Balancer (ALB).
If just testing, you can skip load balancer and use public IP.
Review settings → Click Create Service.
ECS will launch tasks in your cluster.
You can configure other settings if required, but what we have done is enough for running a simple app.
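As with the other steps, the service can also be created from the CLI. The cluster, service, and task-definition names match the earlier examples, and the subnet and security group IDs are placeholders you would take from your own VPC:

```shell
# Run 3 copies of the task definition on Fargate, with public IPs for testing
aws ecs create-service \
  --cluster my-app-cluster \
  --service-name my-app-service \
  --task-definition my-app-task \
  --desired-count 3 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-xxxx],securityGroups=[sg-xxxx],assignPublicIp=ENABLED}"
```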
Conclusion
We’ve now gone through the complete journey of deploying a containerized application on AWS:
ECR to securely store our Docker images.
IAM + AWS CLI to give our local machine the right permissions to interact with AWS.
ECS Cluster to provide the environment where containers will run.
Task Definition to define what container(s) to run and their configuration.
Service to tell ECS how to run and manage those containers at scale.
By connecting these pieces, we’ve moved from a local Docker image → to ECR repository → to a running ECS service.
I’ll be creating more blogs on topics around DevOps and Backend Technologies. If you found this guide helpful, make sure to follow and like so you don’t miss the next one!
Written by

Airaad