How to deploy Next.js to EC2 Ubuntu Linux

Mahad Ahmed

How to Set Up EC2 for a Dockerized Next.js Deployment

Introduction

Next.js is a great server-side-oriented frontend framework that makes building SEO-friendly websites easier.

Deploying it is trickier than it needs to be; I understand Vercel needs to make money off of their creation. If you need an alternative way to deploy your website, dockerizing it will remove a few of the headaches.

In this example, I'm using AWS EC2 for deployment, but most of the steps will work on any other Linux VPS (like Linode).

Prerequisites

  • An AWS EC2 instance or any Linux server accessible over SSH

  • Basic understanding of:

    • Docker

    • Next.js

    • AWS EC2 (make sure you have the .pem key file for authentication over SSH)

    • Command-line interface

    • SSH client

Step 1: Prepare Your Next.js Application

Create a Dockerfile for your Next.js application with the following content:

FROM node:18-alpine AS base

# install dependencies only when needed
FROM base AS deps
# Check https://github.com/nodejs/docker-node/tree/b4117f9333da4138b03a546ec926ef50a31506c3#nodealpine to understand why libc6-compat might be needed.
RUN apk add --no-cache libc6-compat
WORKDIR /app

# install dependencies based on the preferred package manager
COPY package.json yarn.lock* package-lock.json* pnpm-lock.yaml* ./
RUN \
    if [ -f yarn.lock ]; then yarn --frozen-lockfile; \
    elif [ -f package-lock.json ]; then npm ci; \
    elif [ -f pnpm-lock.yaml ]; then corepack enable pnpm && pnpm i --frozen-lockfile; \
    else echo "Lockfile not found." && exit 1; \
    fi


# rebuild the source code only when needed
FROM base AS builder
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY . .

RUN \
    if [ -f yarn.lock ]; then yarn run build; \
    elif [ -f package-lock.json ]; then npm run build; \
    elif [ -f pnpm-lock.yaml ]; then corepack enable pnpm && pnpm run build; \
    else echo "Lockfile not found." && exit 1; \
    fi

# Production image, copy all the files and run next
FROM base AS runner
WORKDIR /app

ENV NODE_ENV=production

RUN addgroup --system --gid 1001 nodejs
RUN adduser --system --uid 1001 nextjs

COPY --from=builder /app/public ./public

# Set the correct permissions for the prerender cache
RUN mkdir .next
RUN chown nextjs:nodejs .next

# Automatically leverage output traces to reduce image size
# https://nextjs.org/docs/advanced-features/output-file-tracing
COPY --from=builder --chown=nextjs:nodejs /app/.next/standalone ./
COPY --from=builder --chown=nextjs:nodejs /app/.next/static ./.next/static

USER nextjs

EXPOSE 3000

ENV PORT=3000

CMD ["node", "server.js"]

Step 2: Configure and Install Docker

Let's connect to the EC2 instance using SSH.

# connect through ssh (the default user on Ubuntu AMIs is "ubuntu")
ssh -i your-key-file.pem ubuntu@ec2-YOUR-PUBLIC-IP.us-east-1.compute.amazonaws.com

Install tools

# update the packages before installing
sudo apt update

# install prerequisite packages
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common

# add the GPG key for the official Docker repo
# (apt-key is deprecated, so store the key in /etc/apt/keyrings instead)
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# add repo to APT sources
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

# update packages again
sudo apt update

# confirm that docker-ce will be installed from the Docker repo
# rather than from the default Ubuntu sources
apt-cache policy docker-ce

# start the actual installation
sudo apt install -y docker-ce

# to verify the installation was successful, try
docker -v # prints the installed docker's version details

# check if the docker service is running
sudo systemctl status docker.service

# if not running, start it and enable it on boot
sudo systemctl start docker.service
sudo systemctl enable docker.service

If you'd rather not prefix every docker command with sudo, you can add your user to the docker group:

sudo usermod -aG docker $USER

# start a new login shell so the group change takes effect
su - $USER

Step 3: Push the Docker Image to Docker Hub or the GitHub Container Registry

To deploy the image of your dockerized project, you need to push it to a container registry like Docker Hub or the GitHub Container Registry. In this example, I'm using the GitHub Container Registry (ghcr.io).

First, we need to authenticate our local Docker with the GitHub Container Registry. We need a personal access token to use as the password with docker login.

  • Go to GitHub.com and click on your profile picture in the top right

  • Click Settings

  • Click Developer settings at the bottom of the left sidebar

  • Click Personal access tokens to expand

  • Click Tokens (classic)

  • Click Generate new token dropdown

  • Click Generate new token (classic) option

  • Type the name of your project in the Note input

  • Check the write:packages and delete:packages checkboxes (write:packages also selects read:packages, which you'll need to pull the image later)

  • Finally click Generate token button near the bottom to complete this

sudo docker login ghcr.io # then enter your GitHub credentials
Username: your-github-username
Password: api-token-generated-above

Note: the generated token is only shown once, so keep it in a safe place; you'll need it later to log in to the GitHub Container Registry from the server.

Let's build the image to push to the container registry we set up above.

# the --platform linux/amd64 flag is needed if your machine is ARM-based, like Apple M-series or Snapdragon laptops
docker build --platform linux/amd64 -t ghcr.io/your-github-username/project-name:v0.0.1 .
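The build above sends your entire project directory to Docker as the build context. A .dockerignore file keeps that context small and stops your local node_modules and build output from being copied into the image by `COPY . .`. A sketch — tune the entries to your project (excluding .env files assumes you don't need them at build time):

```
# .dockerignore (sketch)
node_modules
.next
.git
Dockerfile
*.md
.env*
```

Place it next to the Dockerfile in the project root; Docker picks it up automatically on the next build.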

To test the image locally, let's first stop and remove any previously running container:

docker stop my-container-name # skip if it's the first time
docker rm my-container-name # skip the first time

# to run
docker run --rm --name my-container-name -d -p 3000:3000 ghcr.io/your-github-username/project-name:v0.0.1

After testing and making sure the container works as expected, let's push it to the registry so we can pull it from our EC2 server.

docker push ghcr.io/your-github-username/project-name:v0.0.1

Let's connect to the EC2 instance through SSH again, if not still connected:

# connect through ssh
ssh -i your-key-file.pem ubuntu@ec2-YOUR-PUBLIC-IP.us-east-1.compute.amazonaws.com

Let's pull the docker image from the registry. NOTE: on the server you need to log in again using the docker login step above, with the same username and API token as the password.

sudo docker pull ghcr.io/your-github-username/project-name:v0.0.1

Then let's run a container from the docker image we downloaded.

sudo docker run --rm --name my-container-name -d -p 3000:3000 ghcr.io/your-github-username/project-name:v0.0.1
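One thing to keep in mind: the `--rm` flag removes the container when it stops, so this container won't come back after a reboot or a crash. As an alternative sketch (assuming Docker Compose is available on the server, and using the same placeholder image and container names as above), a minimal docker-compose.yml with a restart policy:

```yaml
# docker-compose.yml -- minimal sketch; image and names are placeholders
services:
  web:
    image: ghcr.io/your-github-username/project-name:v0.0.1
    container_name: my-container-name
    restart: unless-stopped   # restart on crash and on reboot
    ports:
      - "3000:3000"
```

Start it with `sudo docker compose up -d`, then verify the app is serving with `curl http://localhost:3000`.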

Step 4: Implement Production Optimizations

Setup NGINX

If you don't already have nginx installed, let’s quickly do that

sudo apt update
sudo apt install -y nginx

Let's create an nginx config file for our site to handle HTTP requests:

sudo touch /etc/nginx/sites-available/my-site-domain.com
sudo nano /etc/nginx/sites-available/my-site-domain.com

Then copy the following content in the file you just opened.

server {
    listen 80 default_server;
    listen [::]:80 default_server;

    server_name _ www.my-site-domain.com my-site-domain.com;
    error_log /var/log/nginx/error.log warn;
    access_log /var/log/nginx/access.log;

    root /var/www/html;

    location / {
          proxy_pass      http://127.0.0.1:3000;
          proxy_set_header Host $http_host;
          proxy_redirect off;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          # try_files $uri $uri/ =404;
    }
}

We need to link the config file into the sites-enabled folder. The symlink lets you enable or disable the site by creating or removing the link, while the actual config file stays in sites-available.

sudo ln -s /etc/nginx/sites-available/my-site-domain.com /etc/nginx/sites-enabled/my-site-domain.com

# test the config for syntax errors, then reload nginx to apply it
sudo nginx -t
sudo systemctl reload nginx

Generate SSL Certificates with Certbot

There are other ways to set up SSL on AWS, but I like a portable solution that can be used on any Linux VPS. So, we're going to use Let's Encrypt's Certbot tool to generate SSL certificates.

# install certbot
sudo snap install --classic certbot

# create a link for the certbot binary
sudo ln -s /snap/bin/certbot /usr/bin/certbot

# let's run the certbot command to obtain and install the certificates
sudo certbot --nginx

# certbot sets up automatic renewal; verify it with a dry run
sudo certbot renew --dry-run

Common Troubleshooting

The issue I always run into is forgetting to prefix docker commands with sudo. If you didn't add your user to the docker group, you might get an error like:

permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock:

The error message doesn't suggest why this is happening, so run the command again with sudo, or follow the last part of the Docker installation step above to add your user to the docker group.

Conclusion

Deploying Next.js apps isn't as hard as it might seem, but there are few clear guides available. This was the approach that worked for me, and I thought I had to share it with the community as it might save you some time. If you have any suggestions on how to improve this guide, please feel free to share them.
