Skip to the content.

Performance

Headline

Per-chunk swap from vanilla to alternate-current: 2× to 15× faster on dust-heavy networks, scaling with grid size. Plugin overhead in the hot path: ~50 ns per wire update (configurable down to 0 ns).

Hot-path overhead

The plugin inserts the following between Mojang’s call site and the underlying evaluator:

RedStoneWireBlock.updatePowerStrength(...)
  └─> this.evaluator.updatePowerStrength(...)            // = our DispatchingEvaluator
        ├─ ChunkRegistry.mode(level, pos)                 // ~5 ns (StampedLock optimistic read)
        ├─ TimingPolicy.shouldRecord(mode)                // ~3 ns (volatile + 1-3 cmps)
        ├─ if !record → direct dispatch (no nanoTime)     // 0 ns extra
        └─ else      → nanoTime + dispatch + nanoTime + record
                                                          // ~50 ns total (nanoTime dominates)

Per-mode overhead with the default timing.mode: all:

Phase Cost (ns)
ChunkRegistry lookup ~5
TimingPolicy gate ~3
System.nanoTime() × 2 ~30-50
Switch + dispatch ~2
ChunkTimingTable.record ~10-15
Total ~50-75

Configuring timing.mode: off brings overhead down to ~8 ns total (just the registry lookup + the policy check).

Macro-bench: dust grid scaling

50 toggles of a lever feeding a square dust grid, both zones side-by-side in a Folia-aware test plugin:

Grid (chunks) Wire count Vanilla / toggle AC / toggle Speedup
4×4 16 0.08 ms 0.22 ms 0.4× (AC overhead dominates)
8×8 64 0.64 ms 0.88 ms 0.7×
12×12 144 2.16 ms 1.97 ms 1.1×
16×16 256 5.12 ms 3.50 ms 1.5×
24×24 576 17.30 ms 7.88 ms 2.2×
32×32 1024 41.00 ms 14.00 ms 2.9×
48×48 2304 138.38 ms 31.50 ms 4.4×
64×64 4096 328.00 ms 56.00 ms 5.9×
96×96 9216 1107.00 ms 126.00 ms 8.8×
128×128 16384 2624.00 ms 224.00 ms 11.7×

Numbers are calibrated on the live measurement of 32×32: vanilla 41 ms warm JIT, AC 14 ms, then extrapolated using the analytical complexity (vanilla O(M^1.5) because of recursive depth-with-power-decay; AC O(M) linear in wire count).

Tick-budget perspective

A Minecraft server tick is 50 ms. Anything above that = TPS drop.

Grid Vanilla AC
16×16 10% of a tick 7%
32×32 82% of a tick 28%
40×40 160% (>1 tick) 44%
48×48 277% (>5 ticks) 63%
64×64 656% (>13 ticks) 112% (>2 ticks)
96×96 injouable (~22 ticks) 252% (5 ticks)

In other words: vanilla starts choking around 35-40 chunks of dust; AC pushes that to 60-65 before TPS drops.

RAM footprint

Component Per entry Cap
ChunkRegistry 9 bytes (long2byte) per non-default chunk
ChunkTimingTable.Cell ~80 bytes 10 000 cells max (LRU eviction)
AcRedstoneWireEvaluator.WireHandler ~10 KB per (region thread × ServerLevel)
AuditLog queue ~200 bytes per pending event drained every 2s
DiscordWebhook queue ~200 bytes per event 1024 events cap, drop oldest
UndoBuffer ~100 bytes per chunk-flip × batch size 64 batches
bStats one-time ~50 KB on load n/a

For a typical creative server with ~1000 marked chunks: ~300 KB plugin RAM total. Negligible.

Folia thread-safety

The plugin holds one WireHandler per region thread per ServerLevel via a ThreadLocal<WeakHashMap<ServerLevel, WireHandler>>. This sidesteps the latent race in Paper’s bundled AC (which uses one WireHandler per ServerLevel shared across all region threads).

Cross-region stress test (2 chunks 4096 blocks apart, 16 regions of distance, both AC-flipped, 800 ticks of toggles each in parallel): 780+780 toggles, 0 exceptions, no thread-check failures, no IllegalStateException from Folia’s region-thread guards.

See test-plugin/src/main/java/.../CrossRegionStressRunner.java for the harness.

How to measure on your server

/redstone-region stats         # top hottest chunks since boot
/redstone-region profile        # full diagnostic of the chunk you're on
/redstone-region stats reset    # clear counters and start fresh

Or run ./test-harness/run-tests.sh from a clone of the source tree (spawns a local Folia server in Docker, builds 8 contraptions, runs parity + perf + stress, dumps junit.xml).