How V8 Takes Out the Trash: A Practical Look at JavaScript Garbage Collection
A hands-on walkthrough of how V8 manages memory behind the scenes — generational GC, scavenging, mark-sweep, and what it actually means for your code.
Most JavaScript developers never think about garbage collection — until something leaks memory in production at 3 AM. Then it's suddenly very interesting.
V8's garbage collector is one of those things that "just works" right up until it doesn't. Understanding how it operates won't just help you debug memory issues — it'll change how you write code. Let's walk through the machinery.
The Heap: Where Your Objects Live
Every time you create an object, array, closure, or string in JavaScript, V8 allocates it on the heap. The heap isn't one big blob — it's divided into distinct spaces, each with a specific purpose:
- New Space (Young Generation) — small, fast, where freshly allocated objects land
- Old Space (Old Generation) — larger, for objects that survived a few GC cycles
- Large Object Space — objects too big to move around efficiently
- Code Space — compiled machine code (JIT output)
- Map Space — hidden classes (V8's internal object shape descriptors)
The key insight: most objects die young. A temporary array you built to map over some data? Gone before the next GC cycle. V8 exploits this aggressively.
The Generational Hypothesis
V8's entire GC strategy rests on one observation: most allocations become garbage almost immediately. This is the generational hypothesis, and it holds true across almost every real-world JavaScript workload.
Because of this, V8 uses two different collectors optimized for different lifetimes:
- Scavenger (Minor GC) — handles the young generation, runs frequently, is very fast
- Mark-Sweep / Mark-Compact (Major GC) — handles the old generation, runs less often, is more expensive
The Scavenger: Fast and Ruthless
New Space is split into two equally-sized semi-spaces: from-space and to-space. Allocation happens in from-space using a simple bump pointer — just increment an offset. This makes allocation extremely cheap.
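The bump-pointer idea can be sketched with a toy model (all names here are illustrative, not V8 internals):

```javascript
// Toy model of bump-pointer allocation in a semi-space.
// Illustrative only -- not V8's actual data structures.
class SemiSpace {
  constructor(sizeInBytes) {
    this.size = sizeInBytes;
    this.top = 0; // the "bump pointer": next free offset
  }

  // Allocating is just an offset increment plus a bounds check.
  allocate(bytes) {
    if (this.top + bytes > this.size) return null; // full -> time to scavenge
    const address = this.top;
    this.top += bytes;
    return address;
  }

  reset() {
    this.top = 0; // "freeing" the whole space is a single assignment
  }
}

const fromSpace = new SemiSpace(1024);
const a = fromSpace.allocate(64); // returns offset 0
const b = fromSpace.allocate(32); // returns offset 64
```

Compare that to a malloc-style free-list search: there is no search at all, which is why allocation in New Space is so cheap.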
When from-space fills up, the scavenger kicks in:
- Walk all root references (stack, globals, handles)
- Copy live objects from from-space into to-space
- Update all pointers to reflect new locations
- Swap the labels — to-space becomes from-space, and vice versa
Everything that wasn't copied? It's dead. No need to "free" anything — the old from-space is just wiped clean.
Objects that survive two scavenge cycles get promoted to Old Space. They've proven they're not ephemeral.
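The four steps above can be sketched as a Cheney-style copying pass. This is a simplified model over plain objects with a `refs` array; the names are assumptions, not V8's implementation:

```javascript
// Toy scavenge: copy live objects from from-space into to-space.
function scavenge(roots) {
  const toSpace = [];
  const forwarded = new Map(); // old object -> its copy (a "forwarding pointer")

  function copy(obj) {
    if (forwarded.has(obj)) return forwarded.get(obj); // already evacuated
    const clone = { ...obj, refs: obj.refs.slice() };
    forwarded.set(obj, clone);
    toSpace.push(clone);
    return clone;
  }

  // 1. Evacuate everything directly reachable from the roots.
  const newRoots = roots.map(copy);

  // 2. Scan the copies, fixing up pointers and evacuating as we go.
  for (let i = 0; i < toSpace.length; i++) {
    toSpace[i].refs = toSpace[i].refs.map(copy);
  }

  // Anything never copied is garbage; the old from-space is simply reset.
  return { roots: newRoots, toSpace };
}

// A tiny graph: root -> a -> b, plus an unreachable object `dead`.
const b = { name: 'b', refs: [] };
const a = { name: 'a', refs: [b] };
const dead = { name: 'dead', refs: [] };
const { toSpace } = scavenge([a]);
// Only `a` and `b` are copied; `dead` is never visited.
```

Note that dead objects cost nothing: the collector's work is proportional to the number of *live* objects, which is exactly why a nursery full of short-lived garbage is cheap to collect.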
Here's a simple example that creates a lot of short-lived garbage:
```javascript
function processItems(items) {
  // Each .map() creates a temporary array — perfect scavenger fodder
  return items
    .map(item => ({ ...item, processed: true }))
    .filter(item => item.active)
    .map(item => item.id);
}
```

Those intermediate arrays? The scavenger eats them for breakfast. This pattern is totally fine from a GC perspective. The objects are born, used, and die within a single call — exactly what the young generation is built for.
Mark-Sweep: Handling the Survivors
Old Space uses a different strategy. Copying everything around would be too expensive — Old Space is much larger than New Space, and the objects in it tend to stick around.
Mark phase: Starting from GC roots, V8 traverses the entire object graph and marks every reachable object.
Sweep phase: V8 walks through Old Space and adds unmarked (dead) memory to free lists, making it available for future allocations.
Compaction (optional): Over time, sweeping creates fragmentation — little gaps of free memory between live objects. When fragmentation gets bad enough, V8 compacts by moving live objects together and updating pointers. This is expensive, so it's done selectively on the most fragmented pages.
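The mark and sweep phases can be sketched over the same toy object model (illustrative only; real V8 marks bitmaps on heap pages, not JS objects):

```javascript
// Toy mark-sweep over a heap of objects with `refs` arrays.
function markSweep(heap, roots) {
  const marked = new Set();

  // Mark phase: traverse the object graph from the GC roots.
  const worklist = [...roots];
  while (worklist.length > 0) {
    const obj = worklist.pop();
    if (marked.has(obj)) continue;
    marked.add(obj);
    worklist.push(...obj.refs);
  }

  // Sweep phase: unmarked slots go onto the free list for reuse.
  const freeList = [];
  for (let i = 0; i < heap.length; i++) {
    if (!marked.has(heap[i])) {
      freeList.push(i); // this slot can satisfy a future allocation
      heap[i] = null;
    }
  }
  return freeList;
}

const x = { refs: [] };
const y = { refs: [x] };
const orphan = { refs: [] };
const heap = [x, y, orphan];
const freeList = markSweep(heap, [y]);
// `orphan` (slot 2) is unreachable from the root `y` and gets swept.
```

Unlike the scavenger, nothing moves here: live objects stay where they are, and that is precisely what creates the fragmentation that compaction later has to clean up.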
Incremental Marking and Concurrent GC
A naive mark-sweep would stop the world — freeze your application while it traces the entire heap. For a server handling requests, that's a non-starter.
V8 uses several techniques to keep pauses short:
- Incremental marking — break the marking work into small chunks interleaved with application execution
- Concurrent marking — helper threads mark the heap while your code runs on the main thread
- Concurrent sweeping — sweeping also happens on background threads
- Lazy sweeping — don't sweep a page until you actually need the memory
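Incremental marking in particular can be illustrated by giving the marker a work budget per step. This is a deliberately simplified sketch (a real incremental marker also needs write barriers to catch references the application mutates between steps):

```javascript
// Toy incremental marker: does at most `budget` units of marking work
// per step, letting the application run in between steps.
function createIncrementalMarker(roots) {
  const marked = new Set();
  const worklist = [...roots];
  return function step(budget) {
    let work = 0;
    while (worklist.length > 0 && work < budget) {
      const obj = worklist.pop();
      if (!marked.has(obj)) {
        marked.add(obj);
        worklist.push(...obj.refs);
      }
      work++;
    }
    return { done: worklist.length === 0, marked };
  };
}

// Build a chain of 10 objects and mark it 3 objects at a time.
let head = { refs: [] };
for (let i = 0; i < 9; i++) head = { refs: [head] };
const step = createIncrementalMarker([head]);
let result = step(3); // partial progress; the app would run here
while (!result.done) result = step(3);
// The full graph gets marked, just spread across several short pauses.
```

The total marking work is the same; it is the pause distribution that changes, which is what matters for latency-sensitive workloads.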
The result: major GC pauses in modern V8 are typically in the low single-digit milliseconds, even for heaps in the hundreds of megabytes. That's genuinely impressive engineering.
Write Barriers: Keeping Track of Cross-Generation Pointers
There's a subtle problem with generational collection. What if an old object gets a new property pointing to a young object?
```javascript
const longLived = {}; // Promoted to Old Space
// ... later ...
longLived.cache = { temp: true }; // New object in New Space
```

When the scavenger runs, it only looks at New Space roots — it doesn't scan all of Old Space (that would defeat the purpose). So it would miss that `longLived.cache` is keeping `{ temp: true }` alive.
V8 solves this with write barriers. Every time your code writes a reference into an object, V8 checks: did we just store a young pointer in an old object? If so, it records that reference in a remembered set. During scavenging, the remembered set is treated as additional roots.
This is a bookkeeping cost, but it's worth it — scanning a small remembered set is vastly cheaper than scanning the entire old heap.
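The barrier logic can be sketched like this (function and field names are hypothetical; in real V8 the barrier is emitted as machine code around every pointer store):

```javascript
// Toy write barrier feeding a remembered set.
const rememberedSet = new Set();

function isOld(obj) { return obj.generation === 'old'; }
function isYoung(obj) { return obj.generation === 'young'; }

// Every reference store goes through the barrier.
function writeRef(holder, field, value) {
  holder[field] = value;
  // Did we just store a young pointer into an old object?
  if (isOld(holder) && value && isYoung(value)) {
    rememberedSet.add(holder); // the scavenger treats this as an extra root
  }
}

const longLived = { generation: 'old' };
const temp = { generation: 'young', temp: true };
writeRef(longLived, 'cache', temp);
// `longLived` is now in the remembered set, so the next scavenge
// will scan it without having to walk all of Old Space.
```

Old-to-old and young-to-anything stores skip the remembered set entirely, which keeps the common case fast.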
What Actually Causes Memory Leaks
Understanding V8's GC makes memory leaks easier to reason about. A "leak" in a garbage-collected language just means you're accidentally keeping references alive. Common culprits:
```javascript
// 1. Forgotten event listeners
class JsonStream {
  constructor(socket) {
    // This closure captures `this` — if you never remove
    // the listener, this JsonStream instance lives forever
    socket.on('data', (chunk) => {
      this.handleChunk(chunk);
    });
  }
}

// 2. Growing data structures with no bound
const requestLog = [];
app.use((req, res, next) => {
  requestLog.push({ url: req.url, time: Date.now() });
  // This array grows forever. Every request object is retained.
  next();
});

// 3. Closures capturing more than they need
function createHandler(hugeConfig) {
  // This closure keeps `hugeConfig` alive even though
  // it only needs one property
  return () => {
    console.log(hugeConfig.name);
  };
}
```

None of these are V8 bugs — they're all reachable objects, so GC correctly keeps them alive. The fix is always structural: remove listeners, bound your caches, narrow your closures.
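The structural fixes can be sketched as follows. Names mirror the examples above; the details (like the `destroy()` method and the log bound) are illustrative choices, not the only valid ones:

```javascript
// 1. Remove listeners when the object's lifetime ends.
class JsonStream {
  constructor(socket) {
    this.socket = socket;
    this.onData = (chunk) => this.handleChunk(chunk);
    socket.on('data', this.onData);
  }
  handleChunk(chunk) { /* ... */ }
  destroy() {
    // Dropping the listener lets this instance be collected.
    this.socket.off('data', this.onData);
  }
}

// 2. Bound the data structure.
const MAX_LOG_ENTRIES = 10000;
const requestLog = [];
function logRequest(entry) {
  requestLog.push(entry);
  if (requestLog.length > MAX_LOG_ENTRIES) requestLog.shift(); // evict oldest
}

// 3. Narrow the closure to capture only what it needs.
function createHandler(hugeConfig) {
  const name = hugeConfig.name; // hugeConfig itself can now be collected
  return () => console.log(name);
}
```

In each case the change is about reachability, not about "helping" the GC: once the reference chain is cut or bounded, collection happens on its own.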
Practical Tips for Working With (Not Against) V8's GC
Let short-lived objects be short-lived. Functional patterns with .map(), .filter(), spreading — these create temporary objects, and that's fine. The scavenger handles them with minimal cost. Don't contort your code to avoid allocations unless profiling tells you to.
Be cautious with long-lived caches. If you cache things in a Map or plain object that grows unbounded, you're fighting the GC. Use WeakMap when your cache keys are objects, or implement an eviction policy:
```javascript
class LRUCache {
  constructor(maxSize = 1000) {
    this.cache = new Map();
    this.maxSize = maxSize;
  }

  get(key) {
    if (!this.cache.has(key)) return undefined;
    const value = this.cache.get(key);
    // Refresh position by re-inserting
    this.cache.delete(key);
    this.cache.set(key, value);
    return value;
  }

  set(key, value) {
    if (this.cache.has(key)) this.cache.delete(key);
    this.cache.set(key, value);
    if (this.cache.size > this.maxSize) {
      // Delete the oldest entry (first inserted)
      const oldest = this.cache.keys().next().value;
      this.cache.delete(oldest);
    }
  }
}
```

Use --max-old-space-size intentionally. Node.js defaults to around 1.5–2 GB for Old Space on 64-bit systems. If your workload legitimately needs more, set it explicitly. If it's leaking, increasing the limit just delays the crash.
Profile before optimizing. Chrome DevTools and Node's --inspect flag give you heap snapshots, allocation timelines, and GC traces. Use them:

```shell
# Print a line for every GC event
node --trace-gc app.js

# Get detailed GC info
node --trace-gc --trace-gc-verbose app.js

# Attach DevTools for heap snapshots
node --inspect app.js
```

What's Overrated
Manual GC hints. You can call global.gc() if you start Node with --expose-gc, but in practice this almost never helps. V8's heuristics for when to collect are battle-tested across billions of Chrome tabs and Node processes. You're unlikely to outsmart them.
Object pooling in JavaScript. In languages with expensive allocation (C++, Rust), object pools make sense. In V8, allocation is a bump pointer increment — it's already near-free. Pooling adds complexity and can actually hurt GC performance by keeping objects alive longer than necessary, pushing them into Old Space where collection is more expensive.
WeakRef and FinalizationRegistry for resource management. These were added to the language, and they have their uses, but the spec explicitly says you shouldn't rely on them for critical cleanup. GC timing is non-deterministic — use explicit close() / dispose() patterns for resources like file handles and database connections.
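An explicit dispose pattern might look like this sketch (the `ManagedResource` class and its handle are placeholders for a real resource API such as a file or connection):

```javascript
// Sketch of deterministic cleanup, independent of GC timing.
class ManagedResource {
  constructor(handle) {
    this.handle = handle;
    this.disposed = false;
  }
  use() {
    if (this.disposed) throw new Error('resource already disposed');
    return this.handle;
  }
  dispose() {
    if (!this.disposed) {
      this.disposed = true;
      // release the underlying OS resource here, e.g. close a file descriptor
    }
  }
}

const res = new ManagedResource(42);
try {
  res.use();
} finally {
  res.dispose(); // runs exactly when you expect, every time
}
```

A FinalizationRegistry can still serve as a safety net behind a pattern like this, but the `try`/`finally` (or a framework's lifecycle hook) should be the primary mechanism.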
Wrapping Up
V8's garbage collector is a sophisticated piece of engineering, but the mental model is straightforward: objects are born in a small nursery, most die there cheaply, survivors get promoted to a larger space with a different collection strategy, and the whole thing runs mostly in the background.
Write clear code. Don't hold references longer than you need to. Bound your caches. Profile when things feel off. The GC will handle the rest.