Making .NET Objects Live Short Lives

Sagar HS

In .NET, the earliest departures are the happiest: short-lived objects keep your app nimble.

In high-throughput .NET applications, whether web servers, real-time trading platforms, or game engines, the garbage collector (GC) plays a pivotal role. Its generational design is optimised to reclaim Gen 0 (young) objects in a few milliseconds. By ensuring the vast majority of the allocations die in Gen 0, we minimise pause times, maximise CPU cache locality, and defer expensive Gen 2 collections.

1. Generational GC Anatomy

Small objects (those under 85,000 bytes) are allocated in Gen 0. When the Gen 0 budget fills, the runtime triggers a Gen 0 collection: live objects are marked, unreachable memory is reclaimed, and survivors are promoted to Gen 1. The same cycle applies when Gen 1 fills, promoting survivors to Gen 2. Full (Gen 2) collections traverse, and may compact, the entire heap and incur the highest pause times.

New small-object allocations always land in Gen 0. A high Gen 0 death rate means fewer promotions, smaller Gen 1/Gen 2 heaps, and less frequent full collections, resulting in smoother, more predictable performance.
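Promotion is easy to observe with GC.GetGeneration. A minimal sketch (the exact generation after a collection can vary by runtime and configuration, so the comments are indicative rather than guaranteed):

```csharp
using System;

class PromotionDemo
{
    static void Main()
    {
        object survivor = new object();
        Console.WriteLine(GC.GetGeneration(survivor)); // 0: fresh allocations start in Gen 0

        GC.Collect(0); // force a Gen 0 collection; the still-referenced object survives
        Console.WriteLine(GC.GetGeneration(survivor)); // typically 1 after surviving one collection

        GC.KeepAlive(survivor); // keep the reference rooted through the demo
    }
}
```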


2. Workstation vs. Server GC

Workstation GC, the default on desktop and client applications, strives for minimal pause times using either single-threaded or concurrent collections. Server GC, enabled with <gcServer enabled="true"/> in your configuration, creates one managed heap per logical CPU and uses multiple threads for both marking and compacting. Server GC typically achieves higher throughput in multicore server environments at the cost of slightly larger pause windows.

Enabling Server GC alongside background (concurrent) collections provides a powerful combination for backend services:

<configuration>
  <runtime>
    <gcServer enabled="true"/>
    <gcConcurrent enabled="true"/>
  </runtime>
</configuration>
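The XML above is the .NET Framework app.config form. On modern .NET (Core and .NET 5+), the same switches are usually set through MSBuild properties in the project file (or via the equivalent System.GC.* settings in runtimeconfig.json), for example:

```xml
<!-- .csproj -->
<PropertyGroup>
  <ServerGarbageCollection>true</ServerGarbageCollection>
  <ConcurrentGarbageCollection>true</ConcurrentGarbageCollection>
</PropertyGroup>
```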

3. Favouring Short-Lived Lifetime Patterns

Structs vs. Classes

Classes always allocate on the heap. For small, immutable data such as a 2D point or a colour value, define a struct instead. Structs live on the stack, inline within arrays, or inline within their containing object, avoiding Gen 0 allocations altogether (unless they are boxed).

struct Coordinate { public double X, Y; }
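The benefit disappears when a struct is boxed, for instance by casting it to object or to a non-generic interface, because boxing copies the value into a new heap object. A quick sketch using the Coordinate struct above:

```csharp
Coordinate c = new Coordinate { X = 1, Y = 2 }; // value on the stack: no GC allocation
object boxed = c;                               // boxing: copies the value into a Gen 0 heap object
```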

Collection Reuse

Repeatedly creating List<T>, Dictionary<K,V>, or arrays inside tight loops generates continuous Gen 0 pressure. Pre-allocate and reuse a single instance by calling Clear() between uses and specifying an initial capacity to avoid internal resizing.

var buffer = new List<int>(capacity: 4);
for (int i = 0; i < N; i++)
{
    buffer.Clear();
    buffer.Add(1); buffer.Add(2); buffer.Add(3);
    Process(buffer);
}

Streaming with yield return

Replacing methods that return full arrays with iterator methods eliminates large temporary allocations. Consumers can process elements one at a time without ever building the entire collection in memory.

IEnumerable<int> Generate(int count)
{
    for (int i = 0; i < count; i++)
        yield return i;
}

4. Zero-Allocation Slicing via Span<T> / Memory<T>

Span<T> is a stack-only (ref struct) type representing a contiguous region of memory. It can wrap arrays, strings, stackalloc buffers, or unmanaged memory without allocations. In asynchronous contexts, where a ref struct cannot live across an await, Memory<T> provides the same slicing capability and can be stored on the heap.
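For small, short-lived scratch buffers, stackalloc pairs naturally with Span<T>; a minimal sketch (Process is a hypothetical consumer):

```csharp
Span<byte> scratch = stackalloc byte[128]; // stack memory: no GC involvement
scratch.Fill(0xFF);
Process(scratch.Slice(0, 64));             // slices are views, not copies
```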

CSV Parsing
When reading a large CSV file line by line, avoid String.Split (which allocates a substring per field); parse with spans instead:

void ParseLine(ReadOnlySpan<char> line)
{
    int start = 0;
    while (true)
    {
        int comma = line.Slice(start).IndexOf(',');
        if (comma < 0)
        {
            ProcessField(line.Slice(start));
            break;
        }
        ProcessField(line.Slice(start, comma));
        start += comma + 1;
    }
}

using var reader = new StreamReader("data.csv");
string? text;
while ((text = reader.ReadLine()) != null)
    ParseLine(text.AsSpan());

Fields remain views over the original buffer. Call .ToString() on a span only when a string is genuinely required, deferring allocation to that point.


5. Managing the Large Object Heap

Objects of 85,000 bytes or more bypass the generational heap and are allocated on the Large Object Heap (LOH), which is collected only during full (Gen 2) GCs and is not compacted by default. To avoid LOH fragmentation:

Rent large buffers from ArrayPool<T> rather than allocating with new:

byte[] buffer = ArrayPool<byte>.Shared.Rent(256 * 1024);
try
{
    int bytesRead = await stream.ReadAsync(buffer, 0, buffer.Length);
    Process(buffer.AsSpan(0, bytesRead)); // rented arrays may be larger than requested
}
finally
{
    ArrayPool<byte>.Shared.Return(buffer);
}

On-demand LOH compaction has been available since .NET Framework 4.5.1 (and in all modern .NET versions); trigger a one-time compaction before a known lull:

GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect(); // the next full blocking GC compacts the LOH

6. Profiling and Benchmarking

Real-time observability with dotnet-counters helps track GC metrics:

dotnet-counters monitor --process-id <PID> \
  --counters System.Runtime[gen-0-gc-count,gen-2-gc-count,gc-heap-size]

For deeper investigation, capture ETW traces with dotnet-trace or take heap snapshots in Rider, Visual Studio, or PerfView. Use BenchmarkDotNet’s [MemoryDiagnoser] to compare allocation strategies in isolation:

[MemoryDiagnoser]
public class AllocationBenchmarks
{
    [Params(100_000)]
    public int N;

    [Benchmark]
    public void NewObjects()
    {
        for (int i = 0; i < N; i++)
            _ = new object();
    }

    [Benchmark]
    public void StackStructs()
    {
        for (int i = 0; i < N; i++)
            _ = new Coordinate { X = i, Y = i };
    }
}

7. Advanced GC Tuning

Sustained low-latency mode (GCLatencyMode.SustainedLowLatency) suppresses blocking full collections while it is in effect, which is useful during latency-critical windows. Pinning the process to specific CPU cores can yield more predictable GC pauses under Server GC. Background Gen 2 collections run concurrently with application threads, reducing pause peaks at the cost of marginal throughput.
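Latency modes are typically applied around a critical window and restored afterwards; a minimal sketch (RunLatencyCriticalWork is a hypothetical placeholder):

```csharp
using System.Runtime;

GCLatencyMode previous = GCSettings.LatencyMode;
try
{
    GCSettings.LatencyMode = GCLatencyMode.SustainedLowLatency;
    RunLatencyCriticalWork(); // your latency-sensitive section
}
finally
{
    GCSettings.LatencyMode = previous; // always restore the prior mode
}
```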


Conclusion

Creating objects that die young is the cornerstone of a high-performance .NET application. By choosing value types for small data, reusing collections, streaming data with iterators, slicing memory via Span<T>, and managing large allocations through pooling and compaction, you keep Gen 0 collections rapid and full GCs rare. Invest in rigorous profiling, bake these practices into your code reviews, and your application will scale in both memory and throughput under any workload.

“When objects die fast, your application lives long.”

Written by

Sagar HS

Software engineer with 4+ years delivering high-performance .NET APIs, polished React front-ends, and hands-off CI/CD pipelines. Hackathon quests include AgroCropProtocol, a crop-insurance DApp recognised with a World coin pool prize, and ZK Memory Organ, a zk-SNARK privacy prototype highlighted by Torus at ETH-Oxford. Recent experiments like Fracture Log keep me exploring AI observability.