Docker for Node.js Developers: A DevOps Guide to Efficient Deployment

Introduction

In "Docker for Node.js Developers: A DevOps Guide to Efficient Deployment", we'll explore how to master containerization with Docker and transform our Node.js application deployment. We'll learn to optimize workflows, manage dependencies, and deploy to cloud platforms, reducing errors and increasing efficiency, so we can get our application to market faster.

Creating a custom image and Dockerfile


1. First we initialize the Node project: npm init -y

2. Install Express: npm i express

3. Create an app.js file for the server:


const express = require("express")

const app = express()

app.get("/", (req, res) => {
    res.send("<h2>Hi there, I am Tahzib!!! Wait there, I am coming</h2>")
})


const port = process.env.PORT || 3000;

app.listen(port, ()=> console.log(`Listening on port ${port}`));

4. Create a Dockerfile:

FROM node:22.3-alpine3.19

WORKDIR /app

COPY package.json .  

# Here the dot (.) means the current working directory, which is the same as /app. The package.json file is copied into the /app directory.

RUN npm install


# Why do we copy the whole directory later, and what is the purpose of copying package.json separately first?

# Because a Docker image is built in layers: FROM is a layer, WORKDIR is a layer, RUN is a layer,
# and every layer is cached. So if package.json has not changed, Docker skips this COPY layer, and the same goes for the RUN command.

# The final COPY copies all the files of the directory, because the application code is what we are most likely to change.

# In summary, copying in two steps gives us optimization: when package.json is unchanged, Docker reuses the cached COPY and RUN layers and only re-copies the rest of the files.


#---------------------      .dockerignore -------------------

# COPY . ./ copies every file in the build context, but we don't need node_modules, and there may be many other files and folders we don't need either. A .dockerignore file lets us exclude those unnecessary files from the copy.

COPY . ./

# The port number where the server will run

ENV PORT=3000
EXPOSE $PORT


# Startup command for the container
CMD ["npm", "run", "dev"]

# The dev script in package.json runs nodemon -L app.js, which starts nodemon in legacy watching mode, watching app.js for changes and restarting the application when changes are detected.

#CMD [ "nodemon", "app"]

# This should also work if nodemon is installed globally: CMD [ "nodemon", "app.js"]



#NOTE: docker run -v <path-to-host-folder>:<path-in-container> -d --name node-app -p 3000:3000  node-app-image

#docker run -v D:\BackendDevelopment\NodeDocker:/app -d --name node-app -p 3000:3000  node-app-image


5. Now we create a .dockerignore file. It lists the files that we do not need to copy while we build the image.


Here we list files like node_modules, Dockerfile, .dockerignore, .git and .gitignore. We might add more entries later for other files we don't want copied into our container.
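A minimal .dockerignore covering exactly the entries named above (extend it as the project grows):

node_modules
Dockerfile
.dockerignore
.git
.gitignore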

6. We add nodemon for automatic restarts during development.


In package.json, we need to modify the scripts: we add start and dev entries to match our image; a sketch is below.
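A sketch of the scripts section, assuming start runs plain node (matching the production command used later) and dev runs nodemon in legacy watch mode as described in the Dockerfile comment above:

{
  "scripts": {
    "start": "node app.js",
    "dev": "nodemon -L app.js"
  }
}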

7. Next we build the image by giving:

  • docker build -t node-app-image .

  • Let's break down the docker build command:

  1. docker build: This is the Docker command to build a Docker image from a Dockerfile.

  2. -t node-app-image: This option specifies the tag or name for the resulting Docker image. In this case, the image will be named node-app-image. The -t option is short for --tag.

  3. .: This is the build context, which is the directory that contains the Dockerfile and any other files required for the build process. The dot (.) represents the current working directory, which is D:\BackendDevelopment\NodeDocker in this case.

So, when you run the command docker build -t node-app-image ., Docker will:

  1. Look for a file named Dockerfile in the current directory (D:\BackendDevelopment\NodeDocker).

  2. Read the instructions in the Dockerfile to build the Docker image.

  3. Create a new Docker image with the name node-app-image.

  4. Use the files in the current directory as the build context, which means Docker will copy the files into the image during the build process.

In summary, this command tells Docker to build a new image named node-app-image using the instructions in the Dockerfile located in the current directory (D:\BackendDevelopment\NodeDocker).

8. We check the image by giving docker images.

Create Container

  1. docker run -v ${pwd}:/app:ro -v /app/node_modules --env PORT=4000 -d --name node-app -p 3000:4000 node-app-image

docker run: This command is used to run a Docker container from a Docker image.

-v ${pwd}:/app:ro: This option mounts a volume from the host machine to the container. Here's what it does:

${pwd} is a variable that represents the current working directory (in this case, D:\BackendDevelopment\NodeDocker). :/app specifies the directory in the container where the volume will be mounted, here /app. :ro means the volume is mounted read-only: the container can read files from the host machine, but cannot modify them. So this option mounts the current working directory on the host into /app in the container, letting the container see the files without being able to change them.

-v /app/node_modules: This option mounts an anonymous volume over the container's /app/node_modules directory. Its main job is to shield node_modules from the bind mount above: without it, the host directory (which has no node_modules) would hide the dependencies installed inside the image. It also persists those dependencies across container restarts, so they don't need to be reinstalled every time.

--env PORT=4000: This option sets an environment variable inside the container; here it overrides the PORT the app listens on. Alternatively, --env-file ./.env tells Docker to load environment variables from a file named .env in the current directory. A .env file typically contains configuration such as ports, database credentials, or API keys; loading these variables from a file keeps them separate from your code and avoids hardcoding them.

-d: This option tells Docker to run the container in detached mode, which means the container will run in the background, and you won't see its output in the terminal.

--name node-app: This option gives the container a name, node-app, which can be used to reference the container in other Docker commands.

-p 3000:4000: This option maps a port from the host machine to a port in the container. In this case, it maps port 3000 on the host machine to port 4000 in the container. This allows you to access the application running inside the container by visiting http://localhost:3000 in your browser.

node-app-image: This is the name of the Docker image that the container will be created from.

So, when you run this command, Docker will:

  1. Create a new container from the node-app-image image.

  2. Mount the current working directory on the host machine to the /app directory in the container, read-only.

  3. Mount an anonymous volume at /app/node_modules so the dependencies installed in the image are not hidden by the bind mount.

  4. Set the PORT environment variable to 4000.

  5. Run the container in detached mode.

  6. Give the container the name node-app.

  7. Map port 3000 on the host machine to port 4000 in the container.

This sets up the Node.js application to run in a Docker container, with the application code mounted from the host machine and the application reachable from the host at http://localhost:3000.


Deleting Stale Volumes

  1. docker volume ls will show all the volumes that we have created.


2. We can delete them using:

  • docker volume rm <id>

  • docker volume prune

  • Or we can delete the volumes while removing the container. For that we can use: docker rm node-app -fv

Docker Compose

So, as we can see, the command for running even a single container gets long. When we move to multiple containers it becomes very difficult to type such big commands, and it is easy to make typos. To solve that we have docker compose.

Dockerfile vs Docker compose

We need a Dockerfile to create a Docker image, which is a packaged version of our application. The Dockerfile defines how to build the image, including the dependencies and configuration required to run our application.

We need Docker Compose to define and run a multi-container application, which consists of multiple services that work together. Docker Compose makes it easy to manage the dependencies between services, scale individual services, and restart services when they fail.

First we need a file named docker-compose.yaml. The file must have a .yaml (or .yml) extension.

    services:
      node-app:
        build: . 
        ports:
          - "3000:3000"
        volumes:
          - ./:/app
          - /app/node_modules
        environment:
          - PORT=3000

In docker compose files the indentation is very important; use a consistent number of spaces at every level.


And now the moment of truth; give the command:

  • docker compose up


  • We can see the image and running container,


  • To remove the container we can simply give docker compose down, but it will keep the volume data. To remove the volumes as well, we need to give docker compose down -v.


1. There is one problem here: we can bring the stack down, but when we give docker compose up -d (-d for detached mode) a second time, it reuses the existing image and does not pick up our changes, because docker compose does not rebuild on its own. To apply the changes and create a new image we have to give:


  • We change the port to 4000: docker compose up -d --build


  • Then we set the port back to 3000 and rebuild the image.


The bind-mounted volume keeps the files on the host in sync with the files in the container.


Development vs Production configs

Till now we have used Docker only for development, but production does not work that way. We can't simply deploy whatever we have been editing; users should only see the clean, finished version. For that we use multiple docker compose files: different files for different uses.


3 new files are created,

1. docker-compose.dev.yml,


services:
  node-app:
    volumes:
      - ./:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
    command: npm run dev

2. docker-compose.prod.yml,

services:
  node-app:
    environment:
      - NODE_ENV=production
    command: node app.js

3. docker-compose.yml,

services:
  node-app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - PORT=3000

and we kept the previous docker compose file as docker-compose.backup.yml.

  • Most important: we have to use separate commands for development and production.

Dev: docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d --build

Production: docker compose -f docker-compose.yml -f docker-compose.prod.yml up -d --build


In the container we can see lots of docker-compose files that are not needed there. We can easily exclude them by adding them to the .dockerignore file.


docker-compose* [ the * wildcard matches any file whose name starts with docker-compose ]
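For example, appended to the .dockerignore from earlier:

node_modules
Dockerfile
.dockerignore
.git
.gitignore
docker-compose*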


But we have to solve another issue: the RUN command in the Dockerfile runs for both development and production builds, and in both cases it runs a plain npm install, which also installs dev dependencies that production does not need. To solve this, we make some changes to our Dockerfile.


The new RUN command uses $NODE_ENV, which is received as a build argument, so we declare it with ARG and remove the previous RUN command. The new RUN command is a small shell script:

ARG NODE_ENV

RUN if [ "$NODE_ENV" = "development" ]; \
        then npm install; \
        else npm install --only=production; \
        fi
# (on newer npm versions, --only=production is deprecated in favour of --omit=dev)

2. Next we make changes in the docker-compose.dev.yml file.


We added build, context and args; a sketch of the result is below.
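A sketch of what docker-compose.dev.yml might look like after this change; the build section with context and args is what this step adds, and the rest carries over from the override shown above:

services:
  node-app:
    build:
      context: .
      args:
        NODE_ENV: development
    volumes:
      - ./:/app
      - /app/node_modules
    environment:
      - NODE_ENV=development
    command: npm run dev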

3. Same for docker-compose.prod.yml; see the sketch below.
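Again a sketch, assuming the production override passes NODE_ENV=production as the build argument:

services:
  node-app:
    build:
      context: .
      args:
        NODE_ENV: production
    environment:
      - NODE_ENV=production
    command: node app.js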


The terminal commands stay the same for production and development.


But we don't need --build any more, because the build configuration is now in the compose files.

Adding a mongo container


  mongo:
    image: mongo
    restart: always
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password

  • We added this mongo service under services: in the compose file, then ran this command in the terminal: docker compose -f docker-compose.yml -f docker-compose.dev.yml up -d

  • Then open a shell inside the container by giving docker exec -it devopswithnodejs-mongo-1 sh

  • Now give mongosh -u "admin" -p "password"

And you are successfully logged into the MongoDB server.


You can check the database using the MongoDB shell commands.

Keep in mind from now on: do not use the -v flag with docker compose down, because that would delete the volume holding our persistent mongo data.
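The mongo image stores its data in /data/db; mapping that to a named volume is the usual way to make the data survive container removal. A sketch (the volume name mongo-db is an assumption, not from the original):

services:
  mongo:
    image: mongo
    restart: always
    environment:
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=password
    volumes:
      - mongo-db:/data/db   # named volume keeps the data across container removals

volumes:
  mongo-db: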

Communication between containers

  1. Install mongoose: npm i mongoose

  2. Bring all the containers down and build again, because we added a new package.


3. Now we build the image again.


Next we connect to the database.

4. In the connection string, in place of admin we use the username we gave for the db. After admin comes a colon and then the password we gave for mongo, followed by the address of the mongo container. Here the network part comes in: because both mongo and the Node.js app run on the same network, we don't need the IP address; we can use the service name instead.

  • To check the network list, give docker network ls


  • Here our network name is devopswithnodejs_default, and to inspect it we can give docker network inspect devopswithnodejs_default


Adding config.js

This part is not mandatory.

  1. Create config directory and config.js file.


Inside it, put:

module.exports = {
    MONGO_IP: process.env.MONGO_IP || "mongo",
    MONGO_PORT: process.env.MONGO_PORT || 27017,
    MONGO_USER: process.env.MONGO_USER,
    MONGO_PASSWORD: process.env.MONGO_PASSWORD
}

2. Use them in app.js:


const express = require("express")
const mongoose = require("mongoose");
const { MONGO_USER, MONGO_PASSWORD, MONGO_IP, MONGO_PORT } = require("./config/config");

const app = express()

mongoose
    .connect(`mongodb://${MONGO_USER}:${MONGO_PASSWORD}@${MONGO_IP}:${MONGO_PORT}/?authSource=admin`)
    .then(() => console.log("Successfully connected to db"))
    .catch((e) => console.log("Error is: ", e));

app.get("/", (req, res) => {
    res.send("<h2>Hi....., I am Tahzib, wait there, I am coming</h2>")
})

const port = process.env.PORT || 3000;

app.listen(port, () => console.log(`Listening on port ${port}`));

And keep in mind to run docker compose down and then docker compose up again.


To check and verify the connection, look at the node container's logs.


You may see some warnings on the mongo connection; these come from using an older version of mongo. If you face a problem, look up (or ask an AI) how to update the mongo dependencies. And you are done.

Container bootup order

We have two containers: one node and the other mongo. If the node container starts first, it will not find the mongo container, which will cause an error. So we add a dependency in the docker-compose file, which gives us the startup order.


    depends_on: # the node-app service depends on mongo, so the node container starts after the mongo container
      - mongo

And for safety we can use a function that retries the connection every 5 seconds:


const connectWithRetry = () => {
    mongoose
        .connect(`mongodb://${MONGO_USER}:${MONGO_PASSWORD}@${MONGO_IP}:${MONGO_PORT}/?authSource=admin`)
        .then(() => console.log("Successfully connected to db"))
        .catch((e) => {
            console.log("Error is: ", e);
            // try again in 5 seconds
            setTimeout(connectWithRetry, 5000);
        })
}

connectWithRetry();

And to check the connection we can give docker logs devopswithnodejs-node-app-1 -f


Building a CRUD application

  1. Added these 2 lines to app.js (plus mounting the router; a sketch follows below):

  • const postRouter = require("./routes/postRoutes")

  • app.use(express.json())
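The router also has to be mounted on a path. The original screenshot is gone, so this is a sketch; the /api/v1/posts path is an assumption, chosen to mirror the /api/v1/users mount used later for the user routes:

const express = require("express")
const postRouter = require("./routes/postRoutes")

const app = express()

app.use(express.json())              // parse JSON request bodies
app.use("/api/v1/posts", postRouter) // assumed mount path for the CRUD routes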


2. Now we create 3 folders, each with a file inside, like folder -> file:

  • controllers -> postController.js

  • models -> postModel.js

  • routes -> postRoutes.js


Here are the file contents.

postController.js

const Post = require("../models/postModel")

exports.getAllPosts = async (req, res, next) => {
    try {
        const posts = await Post.find();

        res.status(200).json({
            status: "success",
            results: posts.length,
            data: {
                posts,
            },
        })
    } catch(e) {
        res.status(400).json({
            status: "fail",
        })
    }
}

//localhost:3000/posts/:id

exports.getOnePost = async(req, res, next) => {
    try {
        const post = await Post.findById(req.params.id);

        res.status(200).json({
            status: "success",
            data: {
                post,
            },
        })
    } catch(e) {
        res.status(400).json({
            status: "fail",
        })
    }
}


exports.createPost = async(req, res, next) => {
    try {
        const post = await Post.create(req.body);

        res.status(200).json({
            status: "success",
            data: {
                post,
            },
        })
    } catch(e) {
        res.status(400).json({
            status: "fail",
        })
    }
}


exports.updatePost = async(req, res, next) => {
    try {
        const post = await Post.findByIdAndUpdate(req.params.id, req.body, { new: true, runValidators: true,});

        res.status(200).json({
            status: "success",
            data: {
                post,
            },
        })
    } catch(e) {
        res.status(400).json({
            status: "fail",
        })
    }
}


exports.deletePost = async(req, res, next) => {
    try {
        const post = await Post.findByIdAndDelete(req.params.id);

        res.status(200).json({
            status: "success",
        })
    } catch(e) {
        res.status(400).json({
            status: "fail",
        })
    }
}

postModel.js

const mongoose = require("mongoose");

const postSchema = new mongoose.Schema({
    title: {
        type: String,
        required: [true, "Post must have title"],
    },
    body: {
        type: String,
        required: [true, "post must have body"],
    }
})

const Post = mongoose.model("Post", postSchema)

module.exports = Post;

postRoutes.js

const express = require("express")

const postController = require("../controllers/postController");

const router = express.Router();

router
    .route("/")
    .get(postController.getAllPosts)
    .post(postController.createPost)

router
    .route("/:id")
    .get(postController.getOnePost)
    .patch(postController.updatePost)
    .delete(postController.deletePost)

module.exports = router;

Inserting data to database

1. We use Postman to insert data with a POST request (a curl sketch follows after these steps).


2. Use the GET method on the same API to see all the data.

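Since the screenshots are gone, here is a sketch of the two requests with curl; the /api/v1/posts path assumes the mount shown earlier in app.js:

curl -X POST http://localhost:3000/api/v1/posts \
  -H "Content-Type: application/json" \
  -d '{"title": "First post", "body": "Hello from Docker"}'

curl http://localhost:3000/api/v1/posts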

Sign up and Login

  1. First we create a user model in the models directory, like models -> userModel.js


const  mongoose = require("mongoose")

const userSchema = new mongoose.Schema({
    username: {
        type: String,
        required: [true, 'User must have a user name'],
        unique: true,
    },
    password: {
        type: String,
        required: [true, 'User must have a password'],
    },
})

const User = mongoose.model("User", userSchema)

module.exports = User

2. Now we create a controller for it: authController.js in the controllers directory.


const User = require("../models/userModel")

exports.signUp = async(req, res) => {
    try{
        const newUser = await User.create(req.body)
        res.status(201).json({
            status: "success",
            data: {
                user: newUser,
            },
        })
    }catch(e){
        res.status(400).json({
            status: "fail"
        })
    }
}

3. Now we need the routes, so we create userRoutes.js in the routes directory.


const express = require("express")

const authController = require("../controllers/authController")

const router = express.Router()

router.post("/signup", authController.signUp)

module.exports = router;

4. Last, we have to add the middleware in app.js:


  • const userRouter = require("./routes/userRoutes")

  • app.use("/api/v1/users", userRouter)

And we are done; we can check it in Postman.

API: POST http://localhost:3000/api/v1/users/signup

  • Body:
{
    "username": "Tahzib",
    "password": "password"
}

Till now our password is saved as plain text. To hash it we are going to use the bcrypt package.

  • npm i bcrypt


NOTE: use bcrypt to hash the password in the signup controller before saving the user; a sketch is below.
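The original screenshot of the updated controller is gone, so here is a sketch of what signUp might look like with bcrypt.hash (the cost factor 12 is an assumption):

const User = require("../models/userModel")
const bcrypt = require("bcrypt")

exports.signUp = async (req, res) => {
    try {
        const { username, password } = req.body
        // hash the plain-text password before it ever touches the database
        const hashedPassword = await bcrypt.hash(password, 12)
        const newUser = await User.create({ username, password: hashedPassword })
        res.status(201).json({
            status: "success",
            data: { user: newUser },
        })
    } catch (e) {
        res.status(400).json({ status: "fail" })
    }
}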

Now bring docker compose down and up again with --build.


I officially end my backend journey here; time to explore other tech stacks. Goodbye, and sorry for the sudden death of this post.
