Thread Wars: Episode 2 – A New Hope

Table of contents
- Last time, on Thread Wars…
- 1> Enter Virtual Threads – What Are They?
- 2> How Virtual Threads Work Internally (Light Touch)
- 3> Why Virtual Threads Work – Key Benefits for Backend Engineers
- 4> What Can Still Go Wrong
- 5> Before vs After – Service Logic Across Three Models
- 6> Wrap-Up: We Can Block Again

Last time, on Thread Wars…
We fought thread leaks. We tuned pools.
We dove into reactive programming hoping to escape blocking — and came out with stackless nightmares and unreadable code.
The problem was never your logic.
It was the cost of concurrency itself.
Platform threads were just too heavy.
So we rewrote our apps to dance around them.
But what if the problem wasn’t you?
What if the Java platform finally said, “You can write blocking code — and it won’t burn your system down”?
1> Enter Virtual Threads – What Are They?
Java 21 didn’t just ship a feature — it flipped the table on everything we believed about concurrency.
Virtual threads look like threads.
Behave like threads.
But under the hood, they’re nothing like the platform threads we’ve been juggling for decades.
So… What is a Virtual Thread?
A virtual thread is a lightweight thread managed entirely by the JVM, not the operating system. It behaves just like a regular Java thread — you can block, wait, and use the same APIs — but it’s cheap to create, suspendable, and doesn’t hog system resources when idle.
Behind the scenes, it runs on a carrier thread (a real OS thread), but it can be unmounted and remounted transparently by the JVM. You write synchronous code, but get concurrency closer to async scale.
Still Thread, but Different
You still write:
Thread.startVirtualThread(() -> handleRequest());
or even:
try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
    executor.submit(() -> handleRequest());
}
But here’s what’s changed:
Virtual threads are scheduled by the JVM, not the OS.
Their stack is stored on the heap, not pre-allocated.
They can be suspended and resumed like coroutines.
You can spin up millions of them without tuning a single pool.
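As a minimal, self-contained sketch of that claim (JDK 21+, standard APIs only), the snippet below starts ten thousand virtual threads, each of which blocks briefly. A fixed platform-thread pool would need careful sizing for this; here there is nothing to tune.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadSpawn {
    // Spawn n virtual threads; each blocks briefly, then counts itself done.
    static int runTasks(int n) throws InterruptedException {
        var done = new AtomicInteger();
        var latch = new CountDownLatch(n);
        for (int i = 0; i < n; i++) {
            Thread.startVirtualThread(() -> {
                try {
                    Thread.sleep(10); // blocking is cheap: the carrier is released
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                done.incrementAndGet();
                latch.countDown();
            });
        }
        latch.await(); // wait until every task has finished
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks(10_000)); // all 10,000 complete, no pool tuning
    }
}
```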
Under the Hood (Simplified)
Virtual threads are built on continuations — a JVM-level mechanism that allows pausing and resuming execution.
When a virtual thread blocks on I/O (e.g., socket.read()), the JVM:
Unmounts it from the carrier thread (a real OS thread)
Frees up the carrier for other virtual threads
Remounts the virtual thread when I/O is ready
That’s why they're so lightweight — blocking doesn’t mean hogging.
Managed by a Tiny ForkJoinPool
All virtual threads run on a small, JVM-managed carrier thread pool (usually one thread per CPU core). You don’t configure it. You don’t scale it. You don’t care.
And yet, somehow, your code scales.
The Result
You can write classic, blocking, readable code
You don’t need to use @Async, CompletableFuture, or flatMap()
You don’t even need to think about tuning — unless you're doing something extreme
Virtual threads reclaim the thread-per-request model — and finally make it viable at modern scale.
2> How Virtual Threads Work Internally (Light Touch)
Virtual threads may feel like magic — but they’re built on a very real, very elegant foundation: continuations and user-mode scheduling.
Let’s demystify that without going down a JVM rabbit hole.
The Carrier Thread Model
A virtual thread isn’t tied to an OS thread 1:1.
Instead:
It runs on top of a carrier thread (a real platform thread)
That carrier comes from a small ForkJoin pool, managed by the JVM
When your virtual thread blocks on I/O or sleep() — the JVM unmounts it from the carrier
Result?
The carrier thread is now free to run something else — no wasted thread, no context-switching nightmare.
Continuations: The Magic Trick
Under the hood, virtual threads use continuations — a mechanism that lets the JVM pause and resume execution at method boundaries.
When you call something like socket.read(), the JVM pauses the virtual thread
Its stack is saved on the heap
When I/O is ready, the stack is restored and the thread resumes exactly where it left off
No callback hell. No event loop juggling.
Just straight-line code that quietly suspends and resumes.
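A small sketch of that suspend-and-resume behavior (JDK 21+, standard APIs only): the sleep below is a suspension point, and the very next line runs when the thread is remounted, still on a virtual thread, with no callback in sight.

```java
public class SuspendResume {
    // Run a blocking call on a virtual thread; the JVM suspends at the sleep
    // and resumes on the next line when the wait is over.
    static String runOnVirtual() throws InterruptedException {
        var result = new String[1];
        Thread t = Thread.ofVirtual().start(() -> {
            try {
                Thread.sleep(50); // suspension point: stack is saved to the heap
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            // Execution resumes exactly here, still on a virtual thread
            result[0] = Thread.currentThread().isVirtual() ? "virtual" : "platform";
        });
        t.join();
        return result[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOnVirtual()); // prints "virtual"
    }
}
```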
Heap-Allocated Stack
Old threads pre-allocated ~1MB of memory per thread stack.
Virtual threads store their stack on the heap, and only grow when needed.
That’s why you can create millions of them — the memory footprint is fractional unless they’re doing real work.
Scheduling Model
Cooperative: virtual threads yield only at safe points (e.g., blocking I/O, sleep)
Preemptive: not supported (JVM won’t forcefully suspend a running virtual thread mid-method)
Pinned state: if your virtual thread enters native code or synchronized blocks, it can’t be unmounted — and starts behaving like a regular thread
More on that in the gotchas section.
What You Get as a Developer
JVM handles all scheduling
You don’t tune thread pools
You write readable, blocking code — and it behaves like async under the hood
3> Why Virtual Threads Work – Key Benefits for Backend Engineers
Virtual threads don’t just scale — they bring back clarity without compromise.
Here’s what makes them a game-changer for real-world backend code:
1. Cheap to Spawn — No Pool Tuning
You can spin up millions of virtual threads.
There’s no need to:
pre-size a pool
worry about maxQueueSize
handle RejectedExecutionException
Every incoming request can get its own thread. No rationing. No mental math. Just submit the task and move on.
2. Easy to Read — Linear Code Stays Linear
Remember when blocking code was readable?
Virtual threads let you write plain, top-down logic:
String user = jdbc.fetchUser(id);
emailService.sendConfirmation(user);
No .thenCompose(), no .subscribe(), no call chains wrapped in lambdas.
It feels like the code you used to write — except now it scales.
3. Debuggable — Real Stack Traces, Real Breakpoints
No more hunting bugs across async callbacks.
With virtual threads, stack traces are intact. Breakpoints work. Exceptions show the actual call path.
Your tools finally match your execution flow again.
4. Compatible with Existing Blocking APIs
No need to rewrite everything.
Virtual threads work seamlessly with:
JDBC drivers
Traditional file I/O
Blocking HTTP clients
Legacy libraries that don’t know what async is
You can modernize your thread model without refactoring your entire codebase.
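A hedged sketch of that compatibility (JDK 21+, standard APIs only): fetchUser below is a hypothetical stand-in for a real blocking call such as a JDBC query, simulated with a sleep. The fan-out uses one virtual thread per task with no pool sizing at all.

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.Executors;

public class BlockingCompat {
    // Hypothetical stand-in for a blocking call (e.g. a JDBC query).
    static String fetchUser(String id) throws InterruptedException {
        Thread.sleep(20); // simulated I/O latency
        return "user-" + id;
    }

    // Fan out blocking calls, one virtual thread per task.
    static List<String> fetchAll(List<String> ids) throws Exception {
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            var futures = executor.invokeAll(
                ids.stream().map(id -> (Callable<String>) () -> fetchUser(id)).toList());
            return futures.stream().map(f -> {
                try { return f.get(); } catch (Exception e) { throw new RuntimeException(e); }
            }).toList();
        } // close() waits for submitted tasks to finish
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchAll(List.of("1", "2", "3")));
    }
}
```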
4> What Can Still Go Wrong
Virtual threads aren’t magic. They solve the thread scalability problem — not the everything problem.
Here’s what can still burn you if you’re careless:
1. Pinned Threads = Silent Downgrade
If a virtual thread enters native code or holds a monitor lock (e.g., via synchronized), it gets pinned to a carrier thread.
While pinned:
It can’t be unmounted
It blocks the carrier thread like a traditional platform thread
You lose the scalability benefits
Do this enough times and you’re back to thread pool hell — just without the configuration knobs.
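A minimal sketch of the trap (JDK 21, standard APIs only): blocking inside a synchronized block holds the carrier for the entire wait. The code runs fine, which is exactly the problem; on Java 21 you can surface pinning at runtime with the -Djdk.tracePinnedThreads=full JVM flag.

```java
public class PinningDemo {
    static final Object lock = new Object();

    // Blocking while holding a monitor pins the virtual thread to its carrier
    // (Java 21 behavior): the carrier is held for the whole sleep.
    static String blockWhilePinned() throws InterruptedException {
        synchronized (lock) {
            Thread.sleep(20); // carrier thread is stuck here, scalability lost
        }
        return "done";
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t = Thread.ofVirtual().start(() -> {
            try { blockWhilePinned(); } catch (InterruptedException ignored) { }
        });
        t.join();
        // Run with: java -Djdk.tracePinnedThreads=full PinningDemo
        // to print a stack trace each time a virtual thread pins its carrier.
    }
}
```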
2. synchronized Is Still a Trap
Virtual threads don’t magically fix coarse locking.
If multiple virtual threads contend for a synchronized block or method, only one runs at a time — and all others are pinned while waiting.
Prefer:
ReentrantLock with tryLock() (non-blocking)
Fine-grained locking or lockless designs
Avoid shared mutable state where possible
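One way to sketch that preference (JDK 21+, standard APIs only): tryLock() never blocks at all, so a virtual thread either gets the lock immediately or backs off, and even a plain ReentrantLock.lock() parks the thread in a way that lets the JVM unmount it instead of pinning.

```java
import java.util.concurrent.locks.ReentrantLock;

public class LockFriendly {
    private static final ReentrantLock lock = new ReentrantLock();
    private static int counter = 0;

    // tryLock() is non-blocking: the caller either acquires the lock
    // immediately or returns false and can retry, queue, or fail fast.
    static boolean tryIncrement() {
        if (lock.tryLock()) {
            try {
                counter++;
                return true;
            } finally {
                lock.unlock();
            }
        }
        return false; // lock busy: no carrier thread was ever held waiting
    }

    public static void main(String[] args) {
        System.out.println(tryIncrement()); // true when uncontended
    }
}
```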
3. Misusing ThreadLocals Can Still Bite
Virtual threads do support ThreadLocal, but be mindful:
ThreadLocal values don’t magically clean up — same memory leak risks
Use ThreadLocal.withInitial() or try-with-resources patterns
Consider using Scoped Values (newer, safer alternative)
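A small sketch of the cleanup discipline (JDK 21+, standard APIs only, hypothetical render helper): withInitial() lazily gives each thread its own value, and an explicit remove() in a finally block prevents the value from lingering after the task.

```java
public class ThreadLocalHygiene {
    // Each thread lazily gets its own builder on first access.
    static final ThreadLocal<StringBuilder> BUF =
        ThreadLocal.withInitial(StringBuilder::new);

    static String render(String name) {
        StringBuilder sb = BUF.get();
        try {
            sb.append("hello, ").append(name);
            return sb.toString();
        } finally {
            BUF.remove(); // explicit cleanup: no value outlives the task
        }
    }

    public static void main(String[] args) {
        System.out.println(render("vt"));
    }
}
```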
4. Blocking Inside Virtual Threads Is Fine — Until It Isn’t
Blocking I/O? ✅
Waiting on a socket or database? ✅
Calling third-party code that blocks and synchronizes internally? ❌
You need to understand what you’re blocking on.
Otherwise, you may end up bottlenecking on something you don’t control.
5. Still Not Suited for CPU-Bound Massive Parallelism
If your workload is CPU-heavy, throwing a million virtual threads at it doesn’t help. You’ll just saturate the cores and get thread contention.
Virtual threads shine when your system is I/O-bound — where traditional threads would sit idle, wasting memory.
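For CPU-heavy work, a plain fixed pool sized to the cores is usually the better sketch (JDK 21+, standard APIs only; the workload below is an arbitrary example): more threads than cores just adds contention, whether the threads are virtual or not.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CpuBound {
    // Purely CPU-bound work: no blocking, so virtual threads add nothing here.
    static long sumSquares(long n) {
        long s = 0;
        for (long i = 1; i <= n; i++) s += i * i;
        return s;
    }

    public static void main(String[] args) throws Exception {
        // Size the pool to the hardware, not to the request count.
        int cores = Runtime.getRuntime().availableProcessors();
        try (ExecutorService pool = Executors.newFixedThreadPool(cores)) {
            var future = pool.submit(() -> sumSquares(1_000));
            System.out.println(future.get());
        } // ExecutorService is AutoCloseable since Java 19
    }
}
```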
Bottom line: virtual threads let you block — but that doesn’t mean you should block blindly.
You now have a powerful tool — just don’t treat it like a magic wand.
5> Before vs After – Service Logic Across Three Models
Let’s compare a common backend pattern:
Fetch user details from DB → Send confirmation email.
1. Traditional — ExecutorService + Blocking
@Service
public class NotificationService {
    private final ExecutorService pool = Executors.newFixedThreadPool(100);

    public void notifyUser(String id) {
        pool.submit(() -> {
            String user = jdbcService.fetchUser(id);
            emailService.sendConfirmation(user);
        });
    }
}
Downsides:
You manage thread limits manually
Risk of saturation and queue backlog
Performance tuning becomes a job in itself
2. Reactive — Chained Asynchronous Flow
@Service
public class NotificationService {
    public Mono<Void> notifyUser(String id) {
        return jdbcClient.findUser(id)
            .flatMap(user -> emailClient.sendConfirmation(user))
            .then();
    }
}
Gains:
Non-blocking throughout
Handles high concurrency well
Tradeoffs:
Control flow becomes fragmented
Stack traces vanish
Higher learning curve across the team
3. Virtual Threads — Simple, Scalable, Blocking
@Service
public class NotificationService {
    private final ExecutorService executor = Executors.newVirtualThreadPerTaskExecutor();

    public void notifyUser(String id) {
        executor.submit(() -> {
            String user = jdbcService.fetchUser(id);
            emailService.sendConfirmation(user);
        });
    }
}
Benefits:
Looks like plain Java
No thread tuning required
Blocking JDBC + email clients work out of the box
Debugging and tracing remain intact
Bottom line?
Virtual threads don’t change how you write business logic — they change how much it costs to run it.
Readable, blocking code. Reactive-scale concurrency. No thread acrobatics.
6> Wrap-Up: We Can Block Again
For years, we danced around blocking.
Not because it was wrong — but because threads were too expensive to afford it.
Virtual threads don’t introduce a new paradigm.
They remove the burden that made old paradigms unscalable.
No more:
pool tuning
async chaining
wrapping everything in .submit() or .flatMap()
You can write clean, predictable, synchronous logic — and still serve massive concurrency.
This isn’t just a language-level improvement.
It’s a shift in how we design and reason about backend systems.
Coming Soon in Episode 3 – Rise of the Virtual Threads
Real-world benchmarks: how virtual threads actually perform
Structured concurrency: scoping, cancellation, lifecycle management
Where virtual threads don’t fit — and what patterns to avoid
Tuning tips, monitoring, and what changes in production observability
The thread wars aren’t over — they’ve just moved to a higher level.