Measures the full render pipeline cost of building a widget tree from scratch: state update → view function → tree diff → drawlist serialization → frame delivery. Parameterized by item count (10, 100, 500, 1000). Each iteration produces a complete frame through the backend.
Measures the full pipeline cost of a single state change on an already-mounted application. Uses a small counter app to isolate per-update overhead. Each iteration: app.update() → dirty flag → view rebuild → diff → drawlist → requestFrame().
Measures memory allocation patterns during sustained rendering (2000 iterations). Tracks peak RSS, heap usage, and memory growth via periodic sampling with linear regression for leak detection.
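The regression step can be sketched as an ordinary least-squares slope over (iteration, bytes) samples. `Sample` and `leakSlope` are illustrative names, not the benchmark's actual types: a roughly zero slope means memory is stable; a slope well above sampling noise suggests a leak.

```typescript
interface Sample { iteration: number; bytes: number }

// Ordinary least-squares slope: bytes of growth per iteration.
function leakSlope(samples: Sample[]): number {
  const n = samples.length;
  const meanX = samples.reduce((s, p) => s + p.iteration, 0) / n;
  const meanY = samples.reduce((s, p) => s + p.bytes, 0) / n;
  let num = 0, den = 0;
  for (const p of samples) {
    num += (p.iteration - meanX) * (p.bytes - meanY);
    den += (p.iteration - meanX) ** 2;
  }
  return num / den;
}

// Periodic RSS samples that oscillate but do not trend upward: slope ≈ 0.
const steady: Sample[] = [0, 500, 1000, 1500, 2000].map((i) => ({
  iteration: i,
  bytes: 50e6 + ((i / 500) % 2) * 1e5,
}));
console.log(Math.abs(leakSlope(steady)) < 100); // → true (no leak detected)
```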
Warmup: 50–100 iterations discarded before measurement
GC: Forced between frameworks via --expose-gc
Timing: performance.now() per iteration, with full statistical analysis (mean, median, p95, p99, stddev, CV)
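Under those settings, the measurement loop might look like the following sketch. `bench` and `stats` are illustrative names, not the benchmark's actual API; `gc` is only defined when Node is started with `--expose-gc`, hence the optional call.

```typescript
// Illustrative harness: discarded warmup, forced GC, per-iteration timing.
function bench(iterate: () => void, warmup = 50, runs = 1000): number[] {
  for (let i = 0; i < warmup; i++) iterate(); // warmup iterations, discarded
  (globalThis as any).gc?.();                 // no-op unless run with --expose-gc
  const times: number[] = [];
  for (let i = 0; i < runs; i++) {
    const t0 = performance.now();
    iterate();
    times.push(performance.now() - t0);
  }
  return times;
}

// Summary statistics over the recorded iteration times (nearest-rank percentiles).
function stats(samples: number[]) {
  const sorted = [...samples].sort((a, b) => a - b);
  const n = sorted.length;
  const mean = sorted.reduce((s, x) => s + x, 0) / n;
  const pct = (p: number) => sorted[Math.min(n - 1, Math.ceil((p / 100) * n) - 1)];
  const stddev = Math.sqrt(sorted.reduce((s, x) => s + (x - mean) ** 2, 0) / n);
  return { mean, median: pct(50), p95: pct(95), p99: pct(99), stddev, cv: stddev / mean };
}
```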
Backend stubs: Each framework uses a backend stub that captures frames without terminal I/O:
Rezi native and ink-compat: BenchBackend (in-memory RuntimeBackend that records frame count/bytes)
Ink: MeasuringStream (Writable stream that records write count/bytes)
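A sink in that spirit can be very small — a minimal sketch assuming `MeasuringStream` simply tallies writes and bytes as described above; the real class may record more:

```typescript
import { Writable } from "node:stream";

// Writable that counts writes and bytes instead of touching the terminal.
class MeasuringStream extends Writable {
  writeCount = 0;
  byteCount = 0;

  _write(chunk: Buffer | string, _enc: BufferEncoding, cb: (err?: Error | null) => void): void {
    this.writeCount += 1;
    this.byteCount += Buffer.byteLength(chunk);
    cb(); // signal completion immediately; nothing is actually flushed
  }
}

const sink = new MeasuringStream();
sink.write("frame 1\n");
sink.write("frame 2\n");
console.log(sink.writeCount, sink.byteCount); // → 2 16
```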
Pipeline equivalence: All three frameworks go through their full render pipeline. Rezi native uses createApp → app.view() → app.start() → app.update() → waitForFrame(), same as ink-compat. Ink uses render() → rerender() → waitForWrite().
Ink's ~16ms floor: Ink throttles renders to at most one flush per 32ms. An update that lands at a random point in that window waits half the interval — ~16ms — on average, so even trivially fast operations appear to take ~16ms. This is an architectural choice in Ink, not a benchmark artifact.
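The halving falls out of the arithmetic: an update arriving uniformly at random inside a 32ms throttle window waits 32/2 = 16ms on average for the next flush. A quick simulation confirms it:

```typescript
// Average wait until the next flush under a fixed 32ms throttle window.
const intervalMs = 32;
const trials = 100_000;
let totalWait = 0;
for (let i = 0; i < trials; i++) {
  const arrival = Math.random() * intervalMs; // update lands anywhere in the window
  totalWait += intervalMs - arrival;          // wait until the window's flush
}
const avgWait = totalWait / trials;
console.log(avgWait.toFixed(1)); // ≈ 16.0
```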
Rezi native vs Ink-on-Rezi: The gap between these two shows the overhead of the React reconciler. For applications that need React compatibility, ink-compat is still dramatically faster than stock Ink.
Stability (CV): Coefficient of variation (stddev ÷ mean) — lower means more consistent timing. Higher CV at small scales is expected: OS scheduling noise is roughly constant, so it looms proportionally larger when individual iterations take only microseconds.