Scaling Node-RED for HTTP-based flows

At the time of writing, there is no clear, step-by-step guide available online for horizontally scaling Node-RED. In this article, we’ll walk through how to scale Node-RED specifically for flows triggered via the HTTP-IN node.
Note that this is not full horizontal scaling of Node-RED—only HTTP-IN-triggered flows will work out of the box. If your flows begin differently, you may need to make some adjustments accordingly.
All code and configuration files mentioned here are available in this GitHub repository.
The Problem
Load balancing requests between multiple instances is fairly easy today. There are multiple open source load balancers available. With little configuration, you can have a minimal load balancer up and running in no time.
But before you start load balancing requests between multiple instances of a component, you first need to make sure that component is stateless. In our case, that component is Node-RED, which is not stateless at all.
Node-RED stores flows on disk, but it does not pick up external changes to that file until a restart or a manual reload. So, even if you run multiple Node-RED instances pointing to a single file on disk, changes made by one instance will not be automatically picked up by the other instances.
Similarly, the Node-RED context is stored in memory by default. So if we simply run multiple instances of Node-RED, each instance will be working with its own separate context.
Solving the Problem
Now that we know the problem, let's look at the architecture we will implement to solve it. Later in the article, we will discuss each part in detail while doing the implementation.
Architecture
We’ll run a dedicated Node-RED instance for managing and editing the flows. This is the instance that the client will be accessing to manage the flows through the editor UI. In parallel, we’ll run multiple, completely separate Node-RED instances—each responsible solely for processing incoming requests. These are the instances that will actually execute the flows.
All instances (both editor and processing) will share a common MongoDB database, where the flows will be stored. These flows will be stored using a custom Node-RED storage plugin: node-red-mongo-storage-plugin-with-sync. This plugin not only implements the storage API to save flows in MongoDB, but also watches the MongoDB collection for any changes. So, whenever one instance changes the flows in MongoDB, all other Node-RED instances are immediately notified and reload the updated flows automatically. This ensures all instances are always running the latest version of the flows, without needing a restart or manual intervention.
To route incoming requests correctly, we’ll place an NGINX reverse proxy in front of everything. NGINX will inspect the request path:
Requests starting with /red-nodes (which we will set as the base path for HTTP-IN nodes) will be forwarded to the processing Node-RED instances. All other requests will go to the editor instance.
Behind the scenes, NGINX will rely on Docker's internal DNS to load balance incoming requests to the processing instances using a round-robin strategy.
Here’s a quick breakdown of what each part will do:
Node-RED (for frontend requests): One instance, used only to edit flows.
Node-RED (for processing): Multiple instances, used to run flows (you can scale these up or down as needed).
MongoDB replica set: Stores the flows and notifies instances of any changes.
NGINX: Acts as the reverse proxy and load balancer.
This setup will allow us to scale out Node-RED horizontally for HTTP-triggered flows. Each incoming request will be distributed across the available processing Node-RED instances, while all of them stay in sync through MongoDB. Simple, effective, and without needing to patch Node-RED itself.
Auto Syncing the Flow Changes
As mentioned previously, Node-RED does not automatically pick up changes made externally to the flows. For that, I developed the node-red-mongo-storage-plugin-with-sync plugin, which implements Node-RED's storage API to store flows in MongoDB. Additionally, it watches the collection where the flows are stored for any changes. Whenever a change event is received from MongoDB, the plugin notifies Node-RED to reload the flows, ensuring that Node-RED always has the latest flows loaded.
I did not write this plugin from scratch; it is built on top of node-red-mongo-storage-plugin, with multiple other enhancements. You can check out the repository for all the details.
Now that we have the plugin to automatically load the latest flows, let's start by creating a custom Node-RED image with the node-red-mongo-storage-plugin-with-sync plugin installed and configured. First, create a directory for the custom Node-RED image in this project's directory and cd into it.
mkdir customNodeRed && cd "$_"
In the customNodeRed directory, create a Node-RED settings file named nodered-settings.js. We will later use it in our custom Node-RED image. Configure the node-red-mongo-storage-plugin-with-sync plugin in this file by adding the following key-value pairs to the exported object. This tells Node-RED to use this module for all flow-related operations instead of its default implementation of storing flows on disk.
{
    storageModule: require("node-red-mongo-storage-plugin-with-sync"),
    storageModuleOptions: {
        mongoUrl: process.env.MONGO_CONNECTION_STRING,
        database: 'nodered',
        // set the collection names that the module will be using
        collectionNames: {
            flows: "nodered-flows",
            credentials: "nodered-credentials",
            settings: "nodered-settings",
            sessions: "nodered-sessions"
        },
        adminApiUrl: "http://localhost:1880"
    },
    // ...... other settings
}
Also, instead of hard-coding the Mongo connection string, we will inject it into the container through an environment variable when creating the container; hence the process.env.MONGO_CONNECTION_STRING reference. The adminApiUrl is the base URL that the plugin uses to call Node-RED's deploy API with the reload header, telling Node-RED to reload the flows whenever a change event is received. Since this plugin runs inside Node-RED's process, it can reach Node-RED through the localhost hostname, so in most cases this URL will not need to be changed.
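To make the reload mechanism concrete, here is a rough sketch of the equivalent manual call the plugin makes on each change event, using Node-RED's Admin API. The address assumes the default adminApiUrl shown earlier; if you have Admin API authentication enabled, an access token would also be required.

```shell
# Manually trigger a flow reload through Node-RED's Admin API.
# With the "reload" deployment type, the request body is ignored and the
# runtime simply re-reads the flows from storage.
curl -X POST "http://localhost:1880/flows" \
  -H "Content-Type: application/json" \
  -H "Node-RED-Deployment-Type: reload" \
  -d '{}'
```

This is the same endpoint the editor uses when you hit the Deploy button; the reload deployment type just skips replacing the flow configuration and re-reads it from storage instead.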
Also, since we want the HTTP-IN nodes to listen for requests relative to /red-nodes, add the following key-value pair to the same settings file. With this setting, an HTTP-IN node configured with the path /hello, for example, becomes reachable at /red-nodes/hello. You can find the complete settings file here.
{
    // ......
    httpNodeRoot: '/red-nodes',
    // ......
}
Creating the Dockerfile
Now that our settings file is ready, let's create the Dockerfile for our custom Node-RED image. Create a file named Dockerfile in the customNodeRed directory and paste the following in it.
FROM nodered/node-red
USER root
WORKDIR /usr/src/node-red
COPY nodered-settings.js /data/settings.js
RUN npm install --prefix /data node-red-mongo-storage-plugin-with-sync
This Dockerfile uses Node-RED's official image as the base and copies the nodered-settings.js file we created earlier into place as Node-RED's settings file. Additionally, it installs the node-red-mongo-storage-plugin-with-sync package that we previously configured in nodered-settings.js.
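If you want to sanity-check the image before wiring up Docker Compose, you can build it directly from the project root (the tag name here is just an example, not something the rest of the setup depends on):

```shell
# Build the custom Node-RED image from the customNodeRed directory.
docker build -t custom-node-red ./customNodeRed
```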
Configuring NGINX
Now that our custom Node-RED image is ready, let's set up NGINX to route incoming requests to the correct Node-RED instance. Create a new directory named nginx in your project's root, and inside it, create a file named nginx.conf. Paste the following configuration into it.
server {
    listen 80;

    location /red-nodes {
        proxy_pass http://nodered-api:1880;
    }

    location / {
        proxy_pass http://nodered-frontend:1880;
    }

    location /comms {
        proxy_pass http://nodered-frontend:1880;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
In this setup, NGINX will act as a reverse proxy. It will:
Route requests to /red-nodes to the nodered-api service. This is the base path we configured earlier for all HTTP-IN nodes, so these requests are meant to hit the flow execution instances. Since this service is backed by multiple containers (using Docker's internal load balancing), the requests will be distributed across them in a round-robin fashion.
Route all other requests (like when you open the editor UI in your browser) to the nodered-frontend service, which is the single instance responsible for editing and deploying flows.
Proxy WebSocket connections at /comms to the nodered-frontend service as well, with some additional headers to ensure proper WebSocket handling. Node-RED uses this endpoint internally for communication between the editor and runtime, so we need to explicitly support upgrading the connection here.
Docker Compose Setup
Now that we've configured our custom Node-RED image and set up NGINX as a reverse proxy, let's bring everything together using Docker Compose. Create a file named docker-compose.yaml in your project's root directory and paste the following into it.
services:
  nginx:
    image: nginx
    ports:
      - "8080:80"
    depends_on:
      - nodered-frontend
      - nodered-api
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf
  mongo:
    image: mongo
    command: ["--replSet", "rs0", "--bind_ip_all", "--port", "27017"]
    ports:
      - 27017:27017
    healthcheck:
      test: echo "try { rs.status() } catch (err) { rs.initiate({_id:'rs0',members:[{_id:0,host:'mongo:27017'}]}) }" | mongosh --port 27017 --quiet
      interval: 5s
      timeout: 30s
      start_period: 0s
      start_interval: 1s
      retries: 30
  nodered-frontend:
    build: ./customNodeRed
    environment:
      - MONGO_CONNECTION_STRING=mongodb://mongo:27017
    depends_on:
      - mongo
    ports:
      - "1880:1880"
  nodered-api:
    build: ./customNodeRed
    environment:
      - MONGO_CONNECTION_STRING=mongodb://mongo:27017
    depends_on:
      - mongo
    deploy:
      replicas: 3
    ports:
      - "1880"
Let’s walk through what each service does:
nginx: This is our reverse proxy that listens on port 8080. It uses the configuration file we created earlier and forwards requests to either the editor instance or the API instances depending on the request path. We mount our nginx.conf file into the container so that NGINX picks up our custom routing rules.
mongo: We're running a single-node MongoDB replica set here using the official image. The command field sets up the replica set, while the healthcheck ensures it's properly initiated. Without this replica set setup, the storage plugin won't work as expected, since it relies on MongoDB change streams, which require replication.
nodered-frontend: This is the single Node-RED instance used to edit flows. It builds from our custom image located in ./customNodeRed and connects to MongoDB using the connection string we provide via an environment variable.
nodered-api: These are the Node-RED instances responsible for running flows. We can scale this service horizontally by adjusting the number of replicas under the deploy section. Just like the frontend, these instances also use the custom image and connect to the same MongoDB database.
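Since the processing containers are identical, scaling them is a one-liner. As a sketch (the service name matches the compose file above; five is an arbitrary target, and with recent Docker Compose versions the --scale flag overrides the replicas value for that run):

```shell
# Scale the processing instances to five containers. NGINX picks up the
# new containers automatically through Docker's DNS-based round robin.
docker compose up -d --scale nodered-api=5
```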
The processing Node-RED instances don't need to be directly accessible from outside; they should only be reached via NGINX. In our current docker-compose file, we are exposing all of them through random host ports purely for testing purposes.
Once this file is ready, spin up the entire stack by running:
docker compose up -d
This will build the custom Node-RED image (if it hasn’t been built already), start MongoDB, and boot up the NGINX proxy along with all the Node-RED containers.
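Once the containers are up, a quick smoke test is to hit the proxy a few times. This sketch assumes you have already deployed a flow containing an HTTP-IN node at the path /hello (served at /red-nodes/hello through the proxy, given the httpNodeRoot we configured); the /hello path itself is just an example.

```shell
# The editor UI, served by the single nodered-frontend instance:
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8080/

# Requests under /red-nodes are spread across the nodered-api replicas
# in round-robin fashion (assumes an HTTP-IN node at /hello is deployed):
for i in $(seq 1 6); do
  curl -s http://localhost:8080/red-nodes/hello
  echo
done
```

If the flow includes something instance-specific in its response (the hostname, for example), you can watch the responses rotate between the replicas.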
Final Words
Thanks for taking the time to read through this.
Before wrapping up completely, there’s one more thing worth mentioning: I haven’t covered how to store Node-RED context in a central location. If your flows rely on context
storage, you’ll need to externalize that as well to make the setup truly scalable.
One way to do that is by using node-red-context-redis, which allows you to store context in Redis—accessible by all Node-RED instances. Configuring it is pretty similar to how we set up the node-red-mongo-storage-plugin-with-sync
earlier in the article.