Dockerizing a Full Stack Application
Docker has revolutionized the way we develop, deploy, and manage applications. It provides a platform-agnostic way to package and distribute applications, making it easier to move them between development, testing, and production environments. In this blog, we'll explore how to Dockerize a React application with Node.js, Postgres, and Nginx.
React is a popular JavaScript library for building user interfaces, while Node.js is a powerful runtime environment for building server-side applications. Postgres is a robust open-source relational database management system, and Nginx is a high-performance web server that can also function as a reverse proxy and load balancer.
By Dockerizing our application, we can ensure that it runs consistently across different environments, making it easier to deploy and scale. We'll use Docker Compose to define and manage our application's services, which will include a Node.js server, a Postgres database, and Nginx as a reverse proxy.
Throughout this blog, we'll cover Docker and Docker Compose from the basics through to more advanced usage, as well as how to configure and run our application in a Docker environment. We'll also explore some best practices for Dockerizing applications, such as using environment variables and managing dependencies.
So, whether you're new to Docker or an experienced developer looking to Dockerize your full-stack application, this blog will provide you with the knowledge and tools you need to get started. Let's dive in!
Prerequisites
Before proceeding, make sure that you have installed Node.js and Docker on your computer. The versions I utilized were Node.js 20.6.1 and Docker 24.0.5.
Building The Backend
First, we will create a backend server which exposes some routes and talks to the database.
Let’s start by creating a new folder called Project (feel free to name it whatever you want). Inside the Project folder, create a folder called node. Initialize a Node.js application inside the node folder using the following command:
npm init -y
This will create a package.json file where we can add dependencies. The -y flag skips all the interactive questions.
Back-End dependencies:
Express: A Node.js web application framework used to handle client requests to specific endpoints. For more information, refer to the Express Documentation.
Node-Postgres: A client for Node.js used to establish a connection with the PostgreSQL database. For more information, refer to the Node Postgres Documentation.
Nodemon: A tool that automatically restarts the Node.js application when file changes are detected. For more information, refer to the Nodemon Documentation.
To install these dependencies, navigate to the node folder and run the following command:
npm install express pg nodemon
This command will install all the required dependencies. The package manager downloads the packages and their dependencies and stores them in the node_modules folder. This folder contains all the dependencies and their sub-dependencies that our project requires to run.
Writing the back-end code
Create an index.js file; it will be our main file. Add the following code to it.
Start by importing the required packages inside the file. We only need pg and express; a separate body-parser package is unnecessary (and was not installed above), since Express ships with its own body-parsing middleware:
import pg from 'pg';
import express from 'express';
The database is not up and running yet, but we can already write the code that establishes the connection to PostgreSQL using node-postgres:
const { Client } = pg;
const client = new Client({
user: 'postgres',
host: 'db',
database: 'postgres',
password: '1234',
port: 5432,
});
client.connect();
Create the users table:
const createTable = async () => {
await client.query(`CREATE TABLE IF NOT EXISTS users
(id serial PRIMARY KEY, name VARCHAR (255) UNIQUE NOT NULL,
email VARCHAR (255) UNIQUE NOT NULL, age INT NOT NULL);`)
};
createTable();
Create the Express app and add the middleware that parses JSON and URL-encoded request bodies:
const app = express();
app.use(express.json());
app.use(express.urlencoded({ extended: true }));
Add a Hello World route:
app.get('/api', (req, res) => res.send('Hello World!'));
Create a GET method to retrieve all users from the users table:
app.get('/api/all', async (req, res) => {
try {
const response = await client.query(`SELECT * FROM users`);
if(response){
res.status(200).send(response.rows);
}
} catch (error) {
res.status(500).send('Error');
console.log(error);
}
});
Create a POST method to insert users into the users table:
app.post('/api/form', async (req, res) => {
try {
const {name, email, age} = req.body;
// Use a parameterized query so user input cannot inject SQL
const response = await client.query(
'INSERT INTO users(name, email, age) VALUES ($1, $2, $3);',
[name, email, age]
);
if(response){
res.status(200).send(req.body);
}
} catch (error) {
res.status(500).send('Error');
console.log(error);
}
});
Finally, make the server listen on a port so the API is exposed. Here, we use port 3000.
app.listen(3000, () => console.log(`Server running on port 3000.`));
Now we have our index.js file ready. There are many ways this code could be improved: we could handle errors better, improve the architecture with controllers, services, and repositories, and remove secret values from the code. However, since these aspects are not our focus, we'll keep the back-end as simple as possible.
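One of those improvements, moving secrets into environment variables, is cheap to sketch. The variable names below (DB_USER, DB_HOST, and so on) are my own choice for illustration, not something the project defines:

// A minimal sketch: read connection settings from the environment,
// falling back to the values used in this tutorial.
import pg from 'pg';

const { Client } = pg;
const client = new Client({
user: process.env.DB_USER || 'postgres',
host: process.env.DB_HOST || 'db',
database: process.env.DB_NAME || 'postgres',
password: process.env.DB_PASSWORD || '1234',
port: Number(process.env.DB_PORT) || 5432,
});

export default client;

The matching variables could then be supplied from the environment section of docker-compose.yml instead of being hard-coded.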
Testing the routes
In the package.json file, inside the scripts section, add:
"start": "nodemon index.js"
In the package.json, before the scripts section, add the following field (required because we use ES module import syntax):
"type": "module"
The package.json file should look like this:
{
"name": "node",
"version": "1.0.0",
"description": "",
"main": "index.js",
"type": "module",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"start": "nodemon index.js"
},
"keywords": [],
"author": "",
"license": "ISC",
"dependencies": {
"express": "^4.18.2",
"nodemon": "^2.0.22",
"pg": "^8.11.0"
}
}
Now, to start the application, you can run npm start inside the node folder. However, it will crash because we still don’t have the database running.
[nodemon] starting `node index.js`
Server running on port 3000.
node:internal/process/promises:288
triggerUncaughtException(err, true /* fromPromise */);
^
Error: getaddrinfo EAI_AGAIN db
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:107:26) {
errno: -3001,
code: 'EAI_AGAIN',
syscall: 'getaddrinfo',
hostname: 'db'
}
Node.js v18.13.0
[nodemon] app crashed - waiting for file changes before starting...
To avoid this error for now, comment out the following lines:
//client.connect();
...
//createTable();
Now run npm start inside the node folder and access the hello world route (http://localhost:3000/api) in the browser to see if it is working.
After testing, remember to uncomment those lines!
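As an alternative to commenting and uncommenting, you could make the back-end retry the connection until the database is reachable. This is only a sketch of the idea, using the same connection settings as above; it is not part of the original code:

import pg from 'pg';
const { Client } = pg;

// Retry the connection a few times instead of crashing when the
// database container is not up yet. A fresh Client is created per
// attempt, since a pg Client cannot be reused after a failed connect().
const connectWithRetry = async (retries = 5, delayMs = 2000) => {
for (let attempt = 1; attempt <= retries; attempt += 1) {
const candidate = new Client({
user: 'postgres',
host: 'db',
database: 'postgres',
password: '1234',
port: 5432,
});
try {
await candidate.connect();
console.log('Connected to Postgres');
return candidate;
} catch (err) {
console.log(`Attempt ${attempt} failed, retrying in ${delayMs} ms...`);
await new Promise((resolve) => setTimeout(resolve, delayMs));
}
}
throw new Error('Could not connect to Postgres');
};

// Top-level await works here because package.json sets "type": "module".
const client = await connectWithRetry();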
Building the Front-End
Let’s write the front-end logic to consume the API endpoints defined above. Inside the Project directory, run the following command.
npm create vite react -- --template react
This command will automatically create a new react folder containing a fresh React project.
I chose Vite to create the project, as it is a lightweight tool (around 31 MB of dependencies), which saves time when starting a new project. Read more in the Vite Documentation.
Front-End dependencies:
Axios: A promise-based HTTP client for the browser and Node.js. For more information, refer to the Axios Documentation.
React Router: A routing library for React applications. It provides a set of components and utilities that allow you to define and manage the routing functionality in your React application. For more information, refer to the React Router Documentation.
To install all the dependencies, go inside the react folder and run the following code:
npm install axios react-router-dom
To run the application, you can execute:
npm run dev
Accessing the browser at http://localhost:5173, you will see Vite's default landing page.
Writing the front-end code
First, let’s replace the contents of the App.jsx file with the following code:
import { BrowserRouter, Routes, Route } from "react-router-dom";
import Layout from "./components/Layout";
import Home from "./components/Home";
import PostUser from "./components/PostUser";
import GetAllUser from "./components/GetAllUser";
export default function App() {
return (
<BrowserRouter>
<Routes>
<Route path="/" element={<Layout />}>
<Route index element={<Home />} />
<Route path="post" element={<PostUser />} />
<Route path="get" element={<GetAllUser />} />
</Route>
</Routes>
</BrowserRouter>
);
}
(Note: with Vite, the src/main.jsx file generated by the template already renders <App /> into the root element, so we don't need to call ReactDOM.createRoot ourselves here.)
The App file manages the routing and renders the specific component at the specific endpoint. For example, at the path "/get" it will render the GetAllUser component, which is responsible for retrieving all the users from our database.
Now let’s create the application components. Create a folder named components inside the src folder and create the following four files:
GetAllUser.jsx
Home.jsx
Layout.jsx
PostUser.jsx
GetAllUser.jsx
import axios from "axios";
import { useEffect, useState } from "react";
const GetAllUser = () => {
const [users, setAllUser] = useState();
useEffect(() => {
axios
.get("http://localhost:8000/api/all")
.then((response) => setAllUser(response.data))
.catch((err) => {
console.error(err);
});
}, []);
return (
<>
<h1>All Users</h1>
<ul>
{users && users.map(user =>
<li key={user.id}>
<h3>ID: {user.id} </h3>
name: {user.name} <br></br>
age: {user.age} <br></br>
email: {user.email} <br></br>
</li>
)}
</ul>
</>
);
};
export default GetAllUser;
Home.jsx
const Home = () => {
return <h1>Home</h1>;
};
export default Home;
Layout.jsx
import { Outlet, Link } from "react-router-dom";
const Layout = () => {
return (
<>
<nav>
<ul>
<li>
<Link to="/">Home</Link>
</li>
<li>
<Link to="/post">Post User</Link>
</li>
<li>
<Link to="/get">Get All User</Link>
</li>
</ul>
</nav>
<Outlet />
</>
)
};
export default Layout;
PostUser.jsx
import axios from "axios";
import { useState } from "react";
const PostUser = () => {
const [user, setUser] = useState({
name: '',
age: '',
email: '',
})
const createUser = async () => {
await axios
.post("http://localhost:8000/api/form",
user,
{
headers: {
'Content-Type': 'application/x-www-form-urlencoded'
}
})
.then((response) => {
setUser({
name: '',
age: '',
email: '',
})
console.log(response)
return alert("User Created: " + `${JSON.stringify(response.data, null,4)}`);
})
.catch((err) => {
return alert(err);
});
}
const onChangeForm = (e) => {
setUser({
...user,
[e.target.name]: e.target.value
})
}
return (
<div>
<div>
<div>
<h1>Create User</h1>
<form>
<div>
<div>
<label>Name</label>
<input
type="text"
value={user.name}
onChange={(e) => onChangeForm(e)}
name="name"
id="name"
placeholder="Name"
/>
</div>
<div>
<label>Age</label>
<input
type="text"
value={user.age}
onChange={(e) => onChangeForm(e)}
name="age"
id="age"
placeholder="Age"
/>
</div>
</div>
<div>
<div>
<label htmlFor="exampleInputEmail1">Email</label>
<input
type="text"
value={user.email}
onChange={(e) => onChangeForm(e)}
name="email"
id="email"
placeholder="Email"
/>
</div>
</div>
<button type="button" onClick= {()=>createUser()}>Create</button>
</form>
</div>
</div>
</div>
);
};
export default PostUser;
Finally, run the application using the command npm run dev, then open your browser and go to http://localhost:5173; you will see the pages we just built. If your content appears centered, just edit the index.css file (this has no relation to the logic; it is only styling).
One last front-end step: we have to change a server option in Vite so that it listens on all addresses (needed when running inside a container). Go to vite.config.js and change the file to the following:
import { defineConfig } from 'vite'
import react from '@vitejs/plugin-react'
// https://vitejs.dev/config/
export default defineConfig({
plugins: [react()],
server: {
host: true,
//port: 5173, Port used by docker when not running docker compose.
}
})
Configuring the Nginx server
Nginx can be used as a reverse proxy server to handle requests from clients and forward them to the appropriate back-end server.
To configure Nginx as a reverse proxy, navigate to the project’s root directory and create an nginx folder. Inside this folder, create a file named default.conf and add the following configurations:
upstream front-end {
server front-end:5173;
}
upstream back-end {
server back-end:3000;
}
server {
listen 80;
location / {
proxy_pass http://front-end;
}
location /sockjs-node {
proxy_pass http://front-end;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
}
location /api {
rewrite /back-end/(.*) /$1 break;
proxy_pass http://back-end;
}
}
The upstream directive defines groups of servers that can be referenced by the proxy_pass directive. In this case, we have defined two upstreams: front-end for the React front-end server and back-end for the Node.js back-end server.
The server block listens on port 80 and contains the configuration for handling requests.
The location / block proxies requests to the front-end server using proxy_pass http://front-end; .
The location /sockjs-node block upgrades WebSocket connections on that path (used by some dev servers for hot reload) and passes them to the front-end server.
The location /api block forwards API requests to the back-end server using proxy_pass http://back-end;. Note that the rewrite rule only matches paths beginning with /back-end/, so for our /api requests it never fires; the URL is passed through unchanged, which works because the back-end routes already begin with /api.
Creating the Dockerfiles
Front-End Dockerfile
In the react folder, create a new file Dockerfile. Add the following code to the file:
FROM node:alpine
WORKDIR /usr/src/app
COPY . .
RUN npm install
EXPOSE 5173
The FROM keyword is used in a Dockerfile to specify the base image that will be used to build a new Docker image. In this case, we are using the node:alpine image as the base.
The WORKDIR instruction sets the working directory for any subsequent RUN, CMD, ENTRYPOINT, COPY, and ADD instructions.
The COPY . . instruction copies all the files from the local computer to the /usr/src/app directory in the Docker image.
The RUN npm install command installs the required dependencies for the React application.
The EXPOSE instruction documents that the containerized application listens on port 5173 for incoming connections; it does not publish the port by itself (the ports mapping in docker-compose.yml will do that).
Back-End Dockerfile
Create a file named Dockerfile in the node folder of the project and add the following code:
FROM node:alpine
WORKDIR /usr/src/app
COPY . .
RUN npm install
EXPOSE 3000
This Dockerfile is similar to the front-end Dockerfile. It sets the working directory, copies the files, installs the dependencies, and exposes port 3000 for the back-end application.
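One optional improvement worth knowing about: because both Dockerfiles run COPY . . before npm install, every source change invalidates Docker's layer cache and re-installs all dependencies. A common pattern (a sketch, not required for this tutorial) copies the package manifests first, shown here for the back-end:

FROM node:alpine
WORKDIR /usr/src/app
# Copy only the manifests first, so the npm install layer stays cached
# until package.json or package-lock.json actually changes.
COPY package*.json ./
RUN npm install
# Now copy the rest of the source code.
COPY . .
EXPOSE 3000

Adding a .dockerignore file that lists node_modules also keeps the host's installed modules out of the image.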
Nginx Dockerfile
Create a Dockerfile inside the nginx folder and add the following code to pull the Nginx image and copy the default.conf file:
FROM nginx
COPY ./default.conf /etc/nginx/conf.d/default.conf
Configuring the docker-compose.yml
Now that we have configured the client and the server API, linked them to the Nginx server, and verified that everything is in order, it's time to consolidate everything using the docker-compose.yml file. This file will not only bring all the components together but also handle the setup for our database.
To begin, go to the root directory, specifically the Project folder, and create a file named docker-compose.yml.
Note: This is a YAML (.yaml or .yml) file, and in YAML indentation is significant. When you write this code, make sure the indentation is correct; otherwise, it will not work.
version: '3'
services:
back-end:
build:
context: node
container_name: back-end
working_dir: /usr/src/app
networks:
- node-network
volumes:
- ./node:/usr/src/app
- /usr/src/app/node_modules
tty: true
ports:
- "3000:3000"
command: npm run start
depends_on:
- db
front-end:
build:
context: react
container_name: front-end
working_dir: /usr/src/app
networks:
- node-network
volumes:
- ./react:/usr/src/app
- /usr/src/app/node_modules
tty: true
ports:
- "5173:5173"
command: npm run dev
db:
image: postgres
container_name: db
restart: always
tty: true
volumes:
- ./data:/var/lib/postgresql/data
environment:
- POSTGRES_PASSWORD=1234
ports:
- "5432:5432"
networks:
- node-network
nginx:
build:
context: nginx
container_name: nginx
restart: always
tty: true
ports:
- "8000:80"
networks:
- node-network
depends_on:
- back-end
- front-end
networks:
node-network:
driver: bridge
Now let's understand each and every line of this docker-compose.yml file.
version: "3"
This specifies the version of the Docker Compose file format being used (this field is optional in recent versions of Compose).
services:
This is where you define the different services that make up your application; each service describes a container you want Compose to create.
back-end:
This is the name of the first service, which is called "back-end".
build:
context: node
This specifies that the service should be built using the Dockerfile in the "node" directory. It tells Docker to build the image for this container from the Dockerfile in ./node (a Dockerfile is used to create your own images).
container_name: back-end
This sets the name of the container to "back-end".
working_dir: /usr/src/app
This sets the working directory inside the container to "/usr/src/app"; commands such as npm run start are executed from this directory.
networks:
- node-network
This specifies that the service should be connected to the "node-network" network.(More on networks at the end of this blog)
tty: true
This allocates a pseudo-TTY for the container. A pseudo-terminal (also known as a tty or pts) connects a user's terminal with the stdin and stdout streams, commonly (but not necessarily) through a shell such as bash. In the case of Docker, you'll often use -t and -i together when you run processes in interactive mode, such as when starting a bash shell.
volumes:
- ./node:/usr/src/app
- /usr/src/app/node_modules
This mounts the local "./node" directory to the "/usr/src/app" directory inside the container, and declares "/usr/src/app/node_modules" as an anonymous volume.
When you mount the local "./node" directory to the "/usr/src/app" directory inside the container, all the files in the "./node" directory on the host machine become available inside the container at "/usr/src/app".
This means that any changes you make to the files in the "./node" directory on the host machine are reflected inside the container, and vice versa. The "/usr/src/app/node_modules" entry is the exception: it tells Docker to keep that directory in an anonymous volume of its own, so the node_modules installed inside the image are not shadowed by the (possibly empty, or OS-incompatible) node_modules folder on the host.
ports:
- "3000:3000"
This maps port 3000 on the host to port 3000 inside the container. Port mapping matters because containers created from the same image all listen on the same internal port by default, which would conflict if exposed directly. For example, if you run two containers from the nginx image on Docker Hub, both will listen on port 80 (nginx's default port); to avoid the conflict, you can map container 1's port 80 to, say, host port 4000 and container 2's port 80 to host port 5000, as shown below.
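To make that nginx example concrete, here is what it would look like with plain docker run (the container names are hypothetical):

# Both containers listen on port 80 internally; mapping them to
# different host ports avoids the conflict.
docker run -d --name nginx-one -p 4000:80 nginx
docker run -d --name nginx-two -p 5000:80 nginx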
command: npm run start
This specifies the command to run when the container starts.
depends_on:
- db
This specifies that the "back-end" service depends on the "db" service, so the db service is started first and then the back-end service. (Note that depends_on only orders container startup; it does not wait for Postgres to be ready to accept connections, which is why a connection retry like the one sketched earlier can be useful.)
front-end:
This is the name of the second service, which is called "front-end".
build:
context: react
This specifies that the service should be built using the Dockerfile in the "react" directory. As mentioned above.
container_name: front-end
This sets the name of the container to "front-end".
working_dir: /usr/src/app
This sets the working directory inside the container to "/usr/src/app".
networks:
- node-network
This specifies that the service should be connected to the "node-network" network.
tty: true
This allocates a pseudo-TTY for the container.
volumes:
- ./react:/usr/src/app
- /usr/src/app/node_modules
This mounts the local "./react" directory to the "/usr/src/app" directory inside the container, and also mounts the "/usr/src/app/node_modules" directory as a volume.
ports:
- "5173:5173"
This maps port 5173 on the host to port 5173 inside the container.
command: npm run dev
This specifies the command to run when the container starts.
db:
This is the name of the third service, which is called "db".
image: postgres
This specifies that the service should use the "postgres" image from Docker Hub.
container_name: db
This sets the name of the container to "db".
restart: always
This specifies that the container should always be restarted if it stops.
volumes:
- ./data:/var/lib/postgresql/data
This mounts the local "./data" directory to the "/var/lib/postgresql/data" directory inside the container.
tty: true
This allocates a pseudo-TTY for the container.
environment:
- POSTGRES_PASSWORD=1234
This sets the environment variable "POSTGRES_PASSWORD" to "1234".
ports:
- "5432:5432"
This maps port 5432 on the host to port 5432 inside the container.
networks:
- node-network
This specifies that the service should be connected to the "node-network" network.
nginx:
This is the name of the fourth service, which is called "nginx".
build:
context: nginx
This specifies that the service should be built using the Dockerfile in the "./nginx" directory.
container_name: nginx
This sets the name of the container to "nginx".
tty: true
This allocates a pseudo-TTY for the container.
restart: always
This specifies that the container should always be restarted if it stops.
ports:
- "8000:80"
This maps port 8000 on the host to port 80 inside the container.
networks:
- node-network
This specifies that the service should be connected to the "node-network" network.
depends_on:
- back-end
- front-end
This specifies that the "nginx" service depends on the "back-end" and "front-end" services, so back-end and front-end are started first, and then the nginx service.
networks:
node-network:
driver: bridge
This defines the "node-network" network and sets its driver to "bridge".
In terms of networking, a bridge network is a Link Layer device which forwards traffic between network segments. A bridge can be a hardware device or a software device running within a host machine's kernel.
In terms of Docker, a bridge network uses a software bridge which allows containers connected to the same bridge network to communicate, while providing isolation from containers which are not connected to that bridge network. The Docker bridge driver automatically installs rules in the host machine so that containers on different bridge networks cannot communicate directly with each other.
Read more about the different network drivers in the Docker documentation.
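This user-defined bridge network is also what lets our containers reach each other by service name; it is the reason the back-end can connect to the database with host: 'db'. Once the stack is up, you can check this yourself (a quick sanity check; note that Compose prefixes the network name with your project folder name):

# Service names resolve through Docker's embedded DNS on the bridge network.
docker exec -it back-end ping -c 2 db
# List networks to find the prefixed name, then inspect it.
docker network ls
docker network inspect project_node-network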
Running the Fully Containerized Application
To run the fully containerized application, follow these steps:
Open the root directory in your terminal or command prompt.
Execute the following command to run the docker-compose.yml file:
docker-compose up --build
Once the containers are up and running, you can access the application by visiting http://localhost:8000/ in your web browser. Nginx will proxy the request to the React application.
If you choose the “Post User” option, you will be directed to http://localhost:8000/post. Here, you can create a new user, which will be recorded in our running Postgres database.
In summary, the front-end sends a request to Nginx. Because the path starts with /api, Nginx acts as a proxy and forwards the request to the back-end. The back-end processes the request, saves the data into the database, and generates a response, which is then sent back to the front-end through Nginx.
Let’s create an example. Please note that if you attempt to create a user using string characters in the age field, an error will be returned, because we defined the table with the constraint age INT NOT NULL.
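If you prefer the command line to the form, the same flow can be exercised with curl (the field values here are arbitrary examples):

# Create a user: Nginx proxies /api to the back-end, which writes to Postgres.
curl -X POST http://localhost:8000/api/form \
  -H "Content-Type: application/json" \
  -d '{"name": "Ada", "email": "ada@example.com", "age": 36}'
# Retrieve all users.
curl http://localhost:8000/api/all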
If everything goes well, after creating the user, you can navigate to the “Get all user” option, which will direct you to http://localhost:8000/get. Here, you will see the user you just created:
After running docker-compose up --build, you may notice that a new folder named data is created in the root directory. This folder is where the Postgres data is stored, ensuring that you don’t lose the data you created when the containers are removed.
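If you would rather not have a data folder inside the project, a Docker-managed named volume is a common alternative. A sketch of the change (pgdata is an arbitrary volume name):

services:
  db:
    image: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:

Docker then stores the data in its own volume area instead of your project directory.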
Congratulations! You have successfully run the fully containerized application and interacted with the user creation and retrieval functionalities.
Written by Bhavesh Yadav
Passionate full stack developer with expertise in both front-end and back-end technologies. Creating seamless digital experiences through innovative solutions.