The Hidden Cost of Context


How much does request tracing cost your Node.js application? We ran the numbers.
Modern Node.js applications increasingly rely on context propagation for distributed tracing, request correlation, and observability. Features like AsyncLocalStorage and OpenTelemetry have become essential tools for understanding application behavior in production. But what's the real performance cost?
We conducted comprehensive benchmarks comparing the overhead of AsyncLocalStorage and OpenTelemetry instrumentation across Node.js v22.17.1 and v24.4.1, testing five different server configurations to isolate and measure the specific performance costs of different instrumentation approaches.
The Great Context Performance Investigation
Context propagation in Node.js has always been challenging. Before AsyncLocalStorage, developers resorted to complex continuation-local-storage libraries or manual context passing through callback chains. Introduced as a stable feature, AsyncLocalStorage promised a cleaner solution, but at what cost? (I worked hard on this problem a few years ago, and seeing how far we got is fantastic!)
Our investigation started with a simple question: "How much performance do we sacrifice for observability?"
The Test Laboratory
We created a controlled environment to answer this question scientifically:
Hardware Specs:
Intel(R) Core(TM) i7-7700 CPU @ 3.60GHz (4 cores, 8 threads)
62 GB RAM
Linux 5.15.0-140-generic
Load Testing Configuration:
100 concurrent connections
10 requests per connection (pipelining)
10-second test duration with 5-second warmup
Consistent payloads across all scenarios
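The article does not name its load-testing tool, but this exact configuration maps directly onto autocannon's flags, so the runs may have looked something like this (a hypothetical invocation, with the target URL assumed):

```shell
# Assumed reconstruction: 100 connections, 10 pipelined requests each.
# A separate 5-second run primes the server before the measured run.
npx autocannon -c 100 -p 10 -d 5 http://localhost:3000 > /dev/null   # warmup
npx autocannon -c 100 -p 10 -d 10 http://localhost:3000              # measured
```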
The Contestants
We benchmarked five different server implementations:
Baseline (base.js): Pure Node.js HTTP server without AsyncLocalStorage
AsyncLocalStorage (simple.js): HTTP server using AsyncLocalStorage for request context
Fastify Base (fastify-base.js): Fastify server without instrumentation
Fastify + Full OpenTelemetry (fastify-otel.js): Complete observability stack with auto-instrumentation
Fastify + Selective OpenTelemetry (fastify-otel-only.js): Only Fastify's OpenTelemetry plugin
The Shocking Results
Node.js v24.4.1 Performance Showdown
Scenario | Requests/sec (Avg) | Latency (P99) | Performance vs Baseline |
--- | --- | --- | --- |
Baseline | 57,301 | 26ms | 100% |
AsyncLocalStorage | 53,450 | 30ms | 93.3% |
Fastify Base | 56,167 | 25ms | 98.0% |
Fastify + Full OTel | 10,640 | 104ms | 18.6% 😱 |
Fastify + Selective OTel | 20,568 | 87ms | 35.9% |
Node.js v22.17.1 Performance Comparison
Scenario | Requests/sec (Avg) | Latency (P99) | Performance vs Baseline |
--- | --- | --- | --- |
Baseline | 56,446 | 30ms | 100% |
AsyncLocalStorage | 50,913 | 30ms | 90.2% |
Fastify Base | 56,737 | 26ms | 100.5% |
Fastify + Full OTel | 9,931 | 136ms | 17.6% 😱 |
Fastify + Selective OTel | 20,051 | 92ms | 35.5% |
The AsyncLocalStorage Tax
The numbers tell a clear story:
AsyncLocalStorage overhead: ~7% performance reduction
Framework efficiency: Fastify matches raw Node.js performance
Full observability cost: 81% performance reduction
Selective instrumentation: A middle ground at ~36% of baseline performance
The Node.js v24 Advantage
Here's where it gets interesting. Node.js v24 shows measurable improvements over v22, thanks to specific optimizations:
AsyncLocalStorage Optimizations
Node.js v24 includes critical upstream AsyncLocalStorage performance improvements.
Detailed Performance Gains by Scenario
Here's how Node.js v24.4.1 performs compared to v22.17.1 across all test cases:
Scenario | Node.js v22 | Node.js v24 | Performance Gain |
--- | --- | --- | --- |
Baseline | 56,446 req/sec | 57,301 req/sec | +1.5% |
AsyncLocalStorage | 50,913 req/sec | 53,450 req/sec | +5.0% 🎯 |
Fastify Base | 56,737 req/sec | 56,167 req/sec | -1.0% |
Fastify + Full OTel | 9,931 req/sec | 10,640 req/sec | +7.1% |
Fastify + Selective OTel | 20,051 req/sec | 20,568 req/sec | +2.6% |
Key Insights:
AsyncLocalStorage sees the biggest improvement: 5% better throughput directly benefits context-heavy applications
OpenTelemetry scenarios also improve: Even with heavy instrumentation, v24 shows measurable gains
Baseline performance: Slight improvement in raw HTTP performance
Fastify baseline: Minor regression, likely within margin of error
The AsyncLocalStorage improvement is particularly significant because most real-world applications will benefit directly from upgrading to Node.js v24.4.1.
The OpenTelemetry Reality Check
The most striking finding? Full OpenTelemetry auto-instrumentation carries a massive performance penalty.
In our tests, enabling complete OpenTelemetry instrumentation reduced throughput by over 80%. This highlights a critical point: tracing is expensive and often unnecessary for most observability needs.
Before reaching for distributed tracing, consider lower-cost alternatives:
Metrics: Histograms and counters provide most performance insights at minimal cost
Profiles: CPU and memory profiling reveal bottlenecks more efficiently than traces
Structured logging: Request correlation through AsyncLocalStorage alone often suffices
Distributed tracing shines primarily when dealing with anomalies and complex failure scenarios – but histograms can get you surprisingly far in recognizing performance issues without the overhead.
The selective approach (Fastify OTel plugin only) offers a more balanced trade-off:
Still provides distributed tracing capabilities
Reduces the performance penalty to roughly 65%, compared with ~81% for full auto-instrumentation
But consider: do you need tracing, or would metrics serve you better?
Real-World Implications
For Production Applications
Start with cost-effective observability:
Metrics first: Implement histograms, counters, and gauges for performance monitoring
Profiles second: Use CPU and memory profiling to identify bottlenecks
AsyncLocalStorage for correlation: ~7% overhead for request correlation is often worthwhile
Consider tracing last: Only when you have specific anomaly investigation needs
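The "metrics first" advice can be sketched with a minimal in-process latency histogram — no tracing machinery, no external dependencies. The bucket bounds here are illustrative assumptions, not values from the benchmark:

```javascript
// Minimal sketch of a fixed-bucket latency histogram. Bucket bounds
// (in ms) are an illustrative assumption.
const BUCKETS = [5, 10, 25, 50, 100, 250, Infinity];

function makeHistogram() {
  return { counts: new Array(BUCKETS.length).fill(0), sum: 0, total: 0 };
}

function observe(h, ms) {
  // Increment the first bucket whose upper bound covers this sample
  h.counts[BUCKETS.findIndex((b) => ms <= b)]++;
  h.sum += ms;
  h.total++;
}

function quantile(h, q) {
  // Returns the upper bound of the bucket containing the q-th quantile
  const target = Math.ceil(h.total * q);
  let seen = 0;
  for (let i = 0; i < BUCKETS.length; i++) {
    seen += h.counts[i];
    if (seen >= target) return BUCKETS[i];
  }
  return Infinity;
}

const h = makeHistogram();
[3, 8, 12, 30, 90, 400].forEach((ms) => observe(h, ms));
console.log(quantile(h, 0.5)); // prints 25 (median falls in the ≤25 ms bucket)
```

In production this is what a metrics library's histogram does for you; the point is that answering "what is my p99?" costs a few array increments per request, not a full span pipeline.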
If you need AsyncLocalStorage:
Budget for ~7% performance overhead
Node.js v24 provides better performance than v22
The benefits often outweigh the costs for request correlation
If you're considering distributed tracing:
Do you need traces, or would metrics/profiles solve your problem?
Most performance insights come from histograms and profiling, not traces
Full auto-instrumentation: Expect 80%+ performance impact
Consider selective instrumentation only if you have a specific tracing use case
Implement sampling strategies to reduce overhead
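If you do adopt tracing, head sampling is the standard lever for the overhead described above. A configuration sketch, assuming the @opentelemetry/sdk-trace-node and @opentelemetry/sdk-trace-base packages (the 5% ratio is an illustrative choice, not a recommendation):

```javascript
// Sketch: reduce tracing overhead by sampling only a fraction of
// root traces. Assumes the OpenTelemetry JS SDK packages below.
const { NodeTracerProvider } = require('@opentelemetry/sdk-trace-node');
const {
  ParentBasedSampler,
  TraceIdRatioBasedSampler,
} = require('@opentelemetry/sdk-trace-base');

const provider = new NodeTracerProvider({
  // Sample 5% of root traces; otherwise honor the parent's decision,
  // so sampled traces stay complete across service boundaries.
  sampler: new ParentBasedSampler({
    root: new TraceIdRatioBasedSampler(0.05),
  }),
});

provider.register();
```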
If performance is critical:
Prioritize metrics and profiling over distributed tracing
Measure the impact in your particular environment
Remember: tracing is most valuable for anomaly investigation, not general monitoring
The Developer's Dilemma
This creates an interesting trade-off matrix for observability approaches:
Scenario | Performance | Observability Value | Cost-Effectiveness | Complexity |
--- | --- | --- | --- | --- |
No instrumentation | ⭐⭐⭐⭐⭐ | ⭐ | ⭐⭐⭐⭐⭐ | ⭐ |
Metrics + Profiles | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐ |
AsyncLocalStorage + Logging | ⭐⭐⭐⭐ | ⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐ |
Selective Tracing | ⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐ | ⭐⭐⭐ |
Full OpenTelemetry | ⭐ | ⭐⭐⭐⭐⭐ | ⭐ | ⭐⭐⭐⭐ |
Key insight: Metrics and profiling offer the best cost-effectiveness ratio, providing high observability value with minimal performance impact.
The Bottom Line
AsyncLocalStorage is remarkably efficient for what it provides. A ~7% performance cost for robust context propagation is reasonable for most applications.
OpenTelemetry's comprehensive instrumentation is expensive but provides unparalleled observability. The key is choosing the right level of instrumentation for your needs.
Node.js v24 delivers meaningful improvements for AsyncLocalStorage performance, making the upgrade worthwhile for context-heavy applications.
Moving Forward
These benchmarks highlight the importance of measuring observability costs in your specific environment. While our results provide valuable baselines, your application's performance characteristics may differ.
Recommendations:
Start with metrics: Implement histograms and counters before considering tracing
Add profiling: CPU and memory profiles reveal more bottlenecks than traces
Use AsyncLocalStorage for correlation: The ~7% overhead is reasonable for request context
Consider tracing last: Only implement when you have specific anomaly investigation needs
Measure continuously: Profile observability overhead in your production environment
Upgrade strategically: Node.js v24's AsyncLocalStorage improvements justify upgrading
The future of Node.js observability looks bright, with continued performance optimizations (thanks to contributors and sponsors like DataDog) making context propagation increasingly viable for high-performance applications.
Want to run these benchmarks yourself? The complete benchmark suite, which includes all test scenarios and detailed results, is available.
Need Help Optimizing Your Node.js Application?
These benchmarks highlight the complex performance considerations in modern Node.js applications. If you're struggling with:
Performance bottlenecks in your Node.js services
Observability overhead that's impacting your application
Context propagation implementation challenges
Node.js upgrade performance planning
Platformatic builds open-source and commercial libraries for high-performance Node.js applications and offers professional support to help you navigate these trade-offs. Our expertise spans Node.js performance optimization, observability implementation, and building production-ready applications that scale.
Contact us to discuss how our libraries and support services can help optimize your Node.js application's performance while maintaining your desired observability.
Written by Matteo Collina