Node.js's famous single-threaded event loop is both its strength and its weakness. A single blocking operation freezes the whole app. This article explains how Node.js works internally and how to detect and fix performance issues, with production-grade examples.
Event Loop Anatomy
The event loop has six phases: timers (setTimeout/setInterval), pending callbacks, idle/prepare, poll (I/O), check (setImmediate) and close callbacks. Every iteration runs through them in order. The process.nextTick() queue and the Promise microtask queue are drained between individual callbacks (since Node 11; older versions drained them only between phases), with nextTick callbacks always running before promise microtasks.
// Typical ordering example
setTimeout(() => console.log('1 timeout'), 0);
setImmediate(() => console.log('2 immediate'));
Promise.resolve().then(() => console.log('3 promise'));
process.nextTick(() => console.log('4 nextTick'));
console.log('5 sync');
// Output:
// 5 sync
// 4 nextTick
// 3 promise
// 1 timeout (or 2 immediate — the relative order is nondeterministic
//            when scheduled from the main module)
// 2 immediate
Blocking Operations
Common event-loop blockers: large JSON parsing (JSON.parse is synchronous), synchronous file I/O (the *Sync APIs), CPU-intensive computation and catastrophic regex backtracking. While any of them runs, no other request makes progress.
// Scenario: /api/report takes 30 seconds. ALL users are blocked.
// Bad: heavy CPU work on the event loop
app.get('/report', (req, res) => {
  const result = calculateHeavyReport(); // 30 seconds of CPU
  res.json(result);
});
// Good: offload to a worker thread
const { Worker } = require('worker_threads');
app.get('/report', (req, res) => {
  const worker = new Worker('./report-worker.js');
  worker.postMessage({ filters: req.query });
  worker.on('message', result => res.json(result));
  worker.on('error', err => res.status(500).json({ error: err.message }));
});
// Better: push to a job queue (Bull, BullMQ, Agenda) and process
// asynchronously in a separate worker process
app.get('/report', async (req, res) => {
  const job = await queue.add('generate-report', { filters: req.query });
  res.status(202).json({ status: 'queued', pollUrl: `/report/${job.id}` });
});
Measuring Event Loop Lag
// Simple event-loop lag tracker
setInterval(() => {
  const start = Date.now();
  setImmediate(() => {
    const lag = Date.now() - start;
    if (lag > 50) {
      console.warn(`Event loop lag: ${lag}ms`);
    }
  });
}, 1000);
// Or use a library
const lag = require('event-loop-lag')(1000);
setInterval(() => {
  console.log(`lag: ${lag()} ms`);
}, 5000);
Finding Memory Leaks
Common Node.js leak sources: growing global maps/caches, accumulated event listeners, large objects held in closures, timers that are never cleared.
// CLASSIC LEAK
const cache = new Map();
app.get('/api/:id', (req, res) => {
  cache.set(req.params.id, req.body); // no LRU, grows forever
  res.send('ok');
});
// FIX: LRU cache
const { LRUCache } = require('lru-cache'); // named export since v7
const cache = new LRUCache({ max: 10000, ttl: 1000 * 60 * 10 });
// LEAK 2: Event listeners
emitter.on('data', handler); // a new listener on every request!
// FIX
if (emitter.listenerCount('data') === 0) emitter.on('data', handler);
// or
emitter.once('data', handler);
Heap Snapshots
# Start Node with the inspector
node --inspect server.js
# In Chrome, open chrome://inspect, pick the remote target
# Memory tab, take a heap snapshot
# Take two snapshots five minutes apart and diff them
# Which class has a growing number of instances? That is the leak.
Profiling with Clinic.js
npm i -g clinic
# Overall health check (event loop delay, CPU, GC, I/O)
clinic doctor -- node server.js
# Send traffic for 30 seconds (with autocannon)
autocannon http://localhost:3000 -c 100 -d 30
# Ctrl+C — HTML report opens
# Flame graph (which function burns CPU)
clinic flame -- node server.js
# Bubbleprof (async flow)
clinic bubbleprof -- node server.js
Benchmarking with autocannon
npm i -g autocannon
# 100 concurrent connections for 30s
autocannon -c 100 -d 30 http://localhost:3000/api/users
# POST payload
autocannon -c 50 -d 10 -m POST -H 'Content-Type: application/json' \
-b '{"email":"test@x.com"}' http://localhost:3000/api/login
V8 Optimisations
- Hidden classes — keep object shapes stable; define every field in the constructor, always in the same order
- Monomorphic functions — call a function with the same argument types at each call site
- Inlining — small functions get inlined; break huge ones apart
- Deoptimisation — patterns like mutating an object's shape inside a hot loop force V8 to throw away optimised code (try/catch in a hot loop used to block optimisation in the old Crankshaft compiler; modern TurboFan handles it fine)
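The first two points can be shown in a few lines — a class whose constructor defines all fields gives every instance the same hidden class, and a call site that only ever sees that one shape stays monomorphic:

```javascript
// Stable shape: every Point gets the same hidden class because
// both fields are defined in the constructor, in the same order.
class Point {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
}

// Monomorphic call site: dist() only ever sees Point instances,
// so V8 can specialise the p.x / p.y property loads.
function dist(p) {
  return Math.sqrt(p.x * p.x + p.y * p.y);
}

console.log(dist(new Point(3, 4))); // → 5

// Anti-pattern: adding a field after construction transitions the
// object to a new hidden class and can make hot call sites polymorphic.
// const p = new Point(3, 4); p.z = 5;  // avoid in hot paths
```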
Memory and CPU Limits
# The default old-space heap limit depends on the Node version and
# available memory (historically ~1.5GB on 64-bit; newer versions scale with system RAM)
# Raise it for big apps
node --max-old-space-size=4096 server.js
# V8 GC trace (for debugging)
node --trace-gc server.js 2>&1 | grep 'Mark-Compact'
# CPU profiling flag
node --prof server.js
# then:
node --prof-process isolate-*.log > profile.txt
Quick Wins
- keep-alive — reuse HTTP agents for outbound calls
- simdjson — replaces JSON.parse, up to ~5x faster on large payloads
- Streaming — do not buffer big files in memory
- DB connection pool — avoid opening a new connection per request
- gzip — compression often trims response size by ~70%
- DNS cache — Node has no default DNS cache; add cacheable-lookup
Conclusion
Node.js performance is a "measure, find, fix" loop — random optimisation just makes code harder to read. Monitor event loop lag, snapshot memory regularly, use Clinic to pin down bottlenecks, then apply targeted fixes.