Node.js Concurrency Model Explained for Java Developers


Last week, I ran into a production issue in a Node.js application while the service owner was on leave.
As a lifelong Java developer, I was nervous — I’d never touched Node.js before, and the fix was urgent.
Thanks to Windsurf AI Agent, I managed to push a fix quickly. But this experience made me realize:
if I ever have to work on Node.js again, I can’t just wing it — I need to understand it.
So, I started digging into how Node.js handles concurrency. Coming from Java, the first thing that shocked me was that Node.js is single-threaded.
My immediate questions were:
- How does Node.js handle high concurrency if it’s single-threaded?
- How can it manage 1000–5000 requests per second?
- Is it even built for such workloads?
Here’s what I learned.
Key Takeaways
- Node.js is single-threaded but uses the event loop and libuv to handle I/O efficiently.
- Network I/O is non-blocking — no extra threads are created.
- Blocking I/O (like file reads) uses a libuv thread pool (default 4 threads).
- CPU-heavy tasks still block the main thread — use Worker Threads or offload to other services.
- Ideal for I/O-heavy apps, not CPU-bound workloads.
How Node.js Handles Concurrency
Node.js runs a single-threaded event loop.
Here’s the basic flow:
- Incoming requests hit the main thread.
- CPU-bound work is executed immediately in that thread.
- I/O-bound work is delegated to libuv.
- libuv either:
  - registers non-blocking I/O with the OS (e.g., DB queries, HTTP requests), or
  - assigns blocking I/O (e.g., file reads) to its thread pool.
- When results are ready, the event loop picks them up and executes callbacks.
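To make the flow concrete, here is a minimal sketch of a single-threaded server that stays responsive. A setTimeout stands in for a real I/O call such as a DB query: the main thread registers the deferred work and immediately moves on to the next request.

// Minimal sketch: one thread, many concurrent requests.
// setTimeout is only a stand-in for non-blocking I/O (e.g., a DB query).
const http = require('http');

const server = http.createServer((req, res) => {
  // The main thread registers the timer and is free to accept other requests
  // while this "I/O" is in flight.
  setTimeout(() => {
    res.writeHead(200, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ ok: true }));
  }, 100);
});

server.listen(3000, () => console.log('Listening on http://localhost:3000'));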
Blocking vs Non-blocking I/O
| I/O Type | Example | How libuv handles it |
| --- | --- | --- |
| Network I/O | HTTP, TCP, UDP, sockets | Uses non-blocking kernel APIs (epoll/kqueue/IOCP). No thread per request. |
| Blocking I/O | fs.readFile (and other fs calls), dns.lookup | Uses the libuv thread pool (default 4 threads). A pool thread runs the task and hands the result back to the event loop. |
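To see the thread-pool row of the table in action, a small sketch: fire more fs.readFile calls than there are pool threads. With the default of 4 (configurable via the UV_THREADPOOL_SIZE environment variable, set before the process starts), at most four reads run in parallel.

// Sketch: several fs.readFile calls compete for the libuv thread pool.
// With the default UV_THREADPOOL_SIZE=4, at most four reads run concurrently.
const fs = require('fs');

for (let i = 0; i < 8; i++) {
  const start = Date.now();
  fs.readFile(__filename, () => {
    // Reading this small file is fast; with a large file you would see the
    // 5th-8th callbacks complete noticeably later than the first four.
    console.log(`read #${i} finished after ${Date.now() - start} ms`);
  });
}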
Example: HR Management System
Imagine an HR system built with Node.js.
When an employee clicks Download Salary Slip:
Case 1: DB fetch (Network I/O)
Thousands of Requests
↓
[Node.js Main Thread]
↓
Runs JS + event loop
↓
If CPU-bound → compute (basic validations)
If I/O-bound (e.g., DB)
↓
Register with OS via libuv (no thread)
↓
OS monitors socket readiness
↓
On response → notify libuv
↓
libuv queues callback
↓
Node.js event loop picks up
↓
Executes callback
↓
Sends response to client
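A hedged sketch of what Case 1 might look like in code, assuming a PostgreSQL database and the pg client; the salary_slips table and its columns are made up for illustration. The await hands the socket to libuv and the OS, and the main thread keeps serving other requests until the rows arrive.

// Sketch (assumes the 'pg' client and a hypothetical salary_slips table).
// The query is non-blocking network I/O: libuv registers the socket with the
// OS and the event loop stays free until the result comes back.
const { Pool } = require('pg');
const pool = new Pool(); // connection settings come from the standard PG* env vars

async function getSalarySlip(employeeId, month) {
  const { rows } = await pool.query(
    'SELECT * FROM salary_slips WHERE employee_id = $1 AND month = $2',
    [employeeId, month]
  );
  return rows[0];
}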
Case 2: Disk read (Blocking I/O)
Node.js Event Loop
↓
Blocking task (e.g., fs)
↓
Delegated to libuv thread pool
↓
libuv uses a thread
↓
Thread completes task
↓
Notifies libuv queue
↓
Node.js executes callback
This consumes one of the thread pool slots.
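A matching sketch for Case 2, assuming the slip already exists as a PDF on disk (the path is hypothetical): fs.readFile is dispatched to the libuv thread pool, so it occupies one of the (default) four worker slots while the read runs.

// Sketch: reading a (hypothetical) pre-generated PDF from disk.
// fs.readFile is blocking I/O under the hood, so libuv runs it on a
// thread-pool worker and queues the callback once the bytes are ready.
const fs = require('fs');

function sendSalarySlip(res, employeeId) {
  fs.readFile(`/data/slips/${employeeId}.pdf`, (err, buffer) => {
    if (err) {
      res.statusCode = 404;
      res.end('Slip not found');
      return;
    }
    res.writeHead(200, { 'Content-Type': 'application/pdf' });
    res.end(buffer);
  });
}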
Visual Diagram
┌────────────────────────────┐
│    Thousands of Clients    │
└────────────┬───────────────┘
             │  HTTP Requests
             ↓
┌────────────────────────────┐
│    Node.js Main Thread     │
│  (Event Loop + JS Engine)  │
└────────────┬───────────────┘
             │
     ┌───────┴──────────────────┐
     │                          │
 CPU-Bound Work           I/O-Bound Work
 (e.g., JSON parsing)     (e.g., DB, HTTP, FS)
     │                          │
 Handled inline           ┌─────┴──────┐
 (can block the loop!)    │   libuv    │
                          │ (C library)│
                          └─────┬──────┘
              ┌─────────────────┴────────────────┐
              │                                   │
┌─────────────────────────────┐       ┌──────────────────────┐
│      Non-Blocking I/O       │       │  libuv Thread Pool   │
│ (via OS: epoll/kqueue/IOCP) │       │     (default 4)      │
│     → no threads needed     │       │   for blocking I/O,  │
└─────────────┬───────────────┘       │   e.g. fs.readFile   │
              │                       └──────────┬───────────┘
              └─────────────────┬────────────────┘
                                ↓
   Result ready → queued by libuv → main thread executes the callback

(Worker Threads created explicitly in code run on separate OS-level threads.)
libuv and OS Interaction
- Network I/O → OS event notification (no thread)
- Blocking I/O → libuv thread pool → Callback queue
Think of it as a message queue between the OS, libuv, and the main thread.
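A tiny sketch of that queue in action (assumes outbound network access for the HTTPS request): one task completes via the thread pool, the other via the OS, but both callbacks come back through libuv's queue and run on the main thread.

// Sketch: the work completes off the main thread (pool or OS), but every
// callback is queued and executed back on the main thread by the event loop.
const fs = require('fs');
const https = require('https');

fs.readFile(__filename, () => console.log('fs callback ran on the main thread'));

https.get('https://example.com', (res) => {
  res.resume(); // drain the response; we only care about when the callback runs
  console.log('network callback ran on the main thread');
});

console.log('main thread keeps running while both operations are in flight');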
Can we make every function async?
Just marking a function async doesn't make it non-blocking if the code inside is CPU-heavy.
- The examples below still execute on the main Node.js thread and block it.
- Promises and async APIs only help when the work is I/O-bound.
- For CPU-heavy tasks, use Worker Threads (see the sketch further down).
async function cpuHeavy() {
  for (let i = 0; i < 1e9; i++) {} // This still blocks the main thread
}

const p = new Promise(resolve => {
  for (let i = 0; i < 1e9; i++) {} // The executor runs synchronously and blocks too
  resolve('done');
});
CPU-heavy tasks:
- Freeze the event loop
- Delay all other requests behind them
- Need Worker Threads or offloading to other services
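Here is a minimal Worker Threads sketch using the built-in worker_threads module. The single-file pattern below spawns a worker copy of the same script, and the 1e9 loop is just a stand-in for real CPU work.

// Sketch: moving the CPU-heavy loop onto a Worker Thread so the event loop stays free.
const { Worker, isMainThread, parentPort } = require('worker_threads');

if (isMainThread) {
  const worker = new Worker(__filename); // run this same file as a worker
  worker.on('message', (result) => console.log('worker finished:', result));
  console.log('main thread is still free to handle requests');
} else {
  // This loop runs on a separate thread and does not block the event loop.
  let sum = 0;
  for (let i = 0; i < 1e9; i++) sum += i;
  parentPort.postMessage(sum);
}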
What About Deferred Execution Using setImmediate / setTimeout?
setImmediate() and setTimeout() only defer when the work starts; the work itself still runs on the main thread and blocks it while it executes.
function cpuHeavyNonBlocking() {
  setImmediate(() => {
    for (let i = 0; i < 1e9; i++) {} // deferred, but still blocks once it runs
    console.log('done');
  });
}
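If the work must stay on the main thread, one partial mitigation (a sketch, not a fix) is to split it into chunks and yield back to the event loop between chunks with setImmediate, so pending I/O callbacks get a chance to run:

// Sketch: chunked CPU work. Each setImmediate tick processes one slice and then
// yields, so queued I/O callbacks can run between slices. The total work is the
// same; only responsiveness improves.
function cpuHeavyChunked(total = 1e9, chunk = 1e7, done = () => console.log('done')) {
  let i = 0;
  function runChunk() {
    const end = Math.min(i + chunk, total);
    for (; i < end; i++) {} // stand-in for real per-item work
    if (i < total) {
      setImmediate(runChunk); // yield to the event loop, then continue
    } else {
      done();
    }
  }
  runChunk();
}

cpuHeavyChunked();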
Best Practices for Performance
- Use async APIs for all I/O-bound work.
- Avoid synchronous file or network calls in production.
- Offload CPU-heavy tasks to worker threads or separate microservices.
- Monitor event loop delay to detect blocking code (see the sketch below).
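For the last point, Node ships an event-loop-delay histogram in perf_hooks; here is a minimal sketch (the logging interval is an arbitrary choice).

// Sketch: watching event-loop delay with the built-in perf_hooks histogram.
// Values are in nanoseconds; a growing mean/max usually means something is blocking the loop.
const { monitorEventLoopDelay } = require('perf_hooks');

const h = monitorEventLoopDelay({ resolution: 20 });
h.enable();

setInterval(() => {
  console.log(
    `event loop delay: mean=${(h.mean / 1e6).toFixed(1)}ms ` +
    `max=${(h.max / 1e6).toFixed(1)}ms p99=${(h.percentile(99) / 1e6).toFixed(1)}ms`
  );
  h.reset();
}, 5000);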
Conclusion
Node.js is extremely performant for I/O-bound workloads — think APIs, chat apps, streaming, and real-time dashboards. But for CPU-bound tasks, Java or Go often handle load better due to mature multi-threading models.
You can use worker threads or split CPU-heavy workloads into microservices, but Node.js shines most when non-blocking I/O is the primary workload.