Zero-Allocation Evaluation
With 30 new animation nodes and patches getting bigger, I started looking hard at the evaluation loop. And I didn’t love what I found.
Every frame, the evaluator was:

- allocating a new Vec for the dirty-node list,
- cloning NodeInfo for each dirty node just to check pin types,
- allocating new HashMaps for each auto-spread iteration,
- building a new adjacency map for dirty propagation,
- allocating a Vec<String> for the node type list used by the UI.
At 10 nodes? Fine. At 100? Fine. At 1000 nodes running at 60fps? That’s 60,000 allocations per second just for bookkeeping. Rust doesn’t have a garbage collector, but the allocator is still doing work. Unnecessary work.
EvalState
I introduced EvalState, a struct that owns all the reusable buffers the evaluator needs:
use std::collections::{HashMap, HashSet};

pub struct EvalState {
    // Nodes queued for re-evaluation this frame.
    dirty_list: Vec<NodeId>,
    // Downstream edges, cached for dirty propagation.
    adjacency: HashMap<NodeId, Vec<NodeId>>,
    // Scratch buffer for merging auto-spread inputs.
    spread_inputs: HashMap<String, PinValue>,
    merged_keys: HashSet<String>,
    changed_pins: HashSet<String>,
}
EvalState::new() pre-allocates everything with with_capacity. Every frame, the buffers get cleared and reused. No new allocations. The same memory gets recycled frame after frame.
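A sketch of that lifecycle, with illustrative capacities and stand-in NodeId/PinValue types (the struct is restated here so the example compiles on its own):

```rust
use std::collections::{HashMap, HashSet};

// Stand-ins for the real types.
type NodeId = u128;
type PinValue = f64;

pub struct EvalState {
    dirty_list: Vec<NodeId>,
    adjacency: HashMap<NodeId, Vec<NodeId>>,
    spread_inputs: HashMap<String, PinValue>,
    merged_keys: HashSet<String>,
    changed_pins: HashSet<String>,
}

impl EvalState {
    // Pre-allocate once; these capacities are illustrative guesses.
    pub fn new() -> Self {
        Self {
            dirty_list: Vec::with_capacity(1024),
            adjacency: HashMap::with_capacity(1024),
            spread_inputs: HashMap::with_capacity(256),
            merged_keys: HashSet::with_capacity(256),
            changed_pins: HashSet::with_capacity(256),
        }
    }

    // clear() keeps each buffer's capacity, so steady-state frames
    // reuse the same memory instead of reallocating.
    pub fn begin_frame(&mut self) {
        self.dirty_list.clear();
        self.spread_inputs.clear();
        self.merged_keys.clear();
        self.changed_pins.clear();
        // adjacency is NOT cleared here: it is invalidated separately,
        // only when the graph topology changes.
    }
}
```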
The adjacency map for dirty propagation gets built once and only invalidated when the topology changes. Add a node, remove a wire, the map rebuilds. But frame to frame with no structural changes? Zero work.
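A minimal sketch of that invalidation scheme, assuming a Wire type and an explicit valid flag (names are illustrative, not the actual API):

```rust
use std::collections::HashMap;

type NodeId = u128;

// Hypothetical wire: an edge from one node to another.
struct Wire {
    from: NodeId,
    to: NodeId,
}

// Adjacency cache: rebuilt only after a structural change flips
// `valid` off (add/remove node, add/remove wire).
struct AdjacencyCache {
    map: HashMap<NodeId, Vec<NodeId>>,
    valid: bool,
}

impl AdjacencyCache {
    fn new() -> Self {
        Self { map: HashMap::new(), valid: false }
    }

    // Called from whatever mutates topology.
    fn invalidate(&mut self) {
        self.valid = false;
    }

    // Rebuilds at most once per topology change; frames with no
    // structural edits pay only the `valid` check.
    fn get(&mut self, wires: &[Wire]) -> &HashMap<NodeId, Vec<NodeId>> {
        if !self.valid {
            self.map.clear(); // keeps capacity, drops stale edges
            for w in wires {
                self.map.entry(w.from).or_default().push(w.to);
            }
            self.valid = true;
        }
        &self.map
    }
}
```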
Auto-spread iterations accumulate directly into a reusable buffer with in-place updates instead of allocating a new HashMap per iteration.
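The in-place pattern looks roughly like this; merge_iteration and its signature are assumptions for illustration, not the real auto-spread code:

```rust
use std::collections::HashMap;

type PinValue = f64;

// Apply one iteration's updates into the reusable buffer. Existing
// keys are overwritten in place; only a pin seen for the first time
// pays a one-time key allocation.
fn merge_iteration(
    spread_inputs: &mut HashMap<String, PinValue>,
    updates: &[(&str, PinValue)],
) {
    for (pin, value) in updates {
        if let Some(slot) = spread_inputs.get_mut(*pin) {
            *slot = *value; // in-place update, no allocation
        } else {
            spread_inputs.insert((*pin).to_string(), *value);
        }
    }
}
```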
NodeInfo lookups borrow &NodeInfo from the cache instead of cloning. Zero cost.
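In sketch form, the lookup just hands out a borrow (NodeInfo here is a stand-in type):

```rust
use std::collections::HashMap;

type NodeId = u128;

#[derive(Clone)]
struct NodeInfo {
    pin_count: usize,
}

// Before: cache.get(&id).cloned() allocated a copy per lookup.
// After: a shared borrow is enough, since callers only read pin types.
fn node_info(cache: &HashMap<NodeId, NodeInfo>, id: NodeId) -> Option<&NodeInfo> {
    cache.get(&id)
}
```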
Hardening
While I was in there, I did a deep pass fixing things that would bite me later.
I had 6 places where recompute_topology() could fail and the result was silently discarded. Now every one logs an error. If topology recomputation fails, I want to know about it.
Every .unwrap() in eval.rs, graph.rs, and the animation plugins is gone. If something’s None that shouldn’t be, it logs an error and continues. A single bad node doesn’t bring down the whole graph.
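The replacement pattern, roughly, using let-else with eprintln! standing in for the real logger:

```rust
use std::collections::HashMap;

type NodeId = u128;
struct NodeInfo;

// Before: cache.get(&id).unwrap() would panic on a missing node.
// After: log and bail out of this node, so one bad node can't take
// down the whole frame. Returns whether the node was evaluated.
fn eval_node(cache: &HashMap<NodeId, NodeInfo>, id: NodeId) -> bool {
    let Some(_info) = cache.get(&id) else {
        eprintln!("eval: missing NodeInfo for node {id}, skipping");
        return false;
    };
    // ... evaluate the node using `_info` ...
    true
}
```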
NodeId went from u64 to u128. Forward-looking: u128 is UUID-compatible, so when collaborative editing arrives, node IDs from different machines won’t collide.
All the easing curves were duplicated across nodes. I extracted them into lux_core::math, one implementation used everywhere. The Ease node, the Ramp node, the ADSR, the Timeline all share the same code now.
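For illustration, one curve such a shared module might contain; this is the standard cubic ease-in-out formula, not necessarily lux_core::math's exact API:

```rust
// Cubic ease-in-out: slow start, fast middle, slow end.
// Maps t in [0, 1] to [0, 1].
fn ease_in_out_cubic(t: f64) -> f64 {
    if t < 0.5 {
        4.0 * t * t * t
    } else {
        1.0 - (-2.0 * t + 2.0).powi(3) / 2.0
    }
}
```

With one implementation, a fix or precision tweak lands everywhere at once instead of drifting across four copies.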
Counter overflow protection: if you set a range where max is less than or equal to min, it clamps instead of overflowing. The kind of bug that shows up at 2am during an installation.
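A sketch of that clamping, with made-up names and a u32 counter; the widening to u64 keeps the modular step itself overflow-free:

```rust
// Advance a wrapping counter within [min, max). If the range is
// degenerate (max <= min), pin to min instead of letting the
// subtraction or modulo overflow/panic.
fn step_counter(value: u32, min: u32, max: u32, step: u32) -> u32 {
    if max <= min {
        return min; // degenerate range: clamp, never overflow
    }
    let span = (max - min) as u64; // safe: max > min
    let offset = (value.saturating_sub(min) as u64 + step as u64) % span;
    min + offset as u32
}
```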
What this means
The eval loop is now allocation-free in the steady state. Buffers are pre-allocated and reused. Cache invalidation is explicit, not implicit. NodeInfo is borrowed, not cloned.
This is the foundation for scaling to 1000+ nodes at 60fps. The complexity lives under the hood. The user just connects nodes and everything runs fast.
That’s the whole point.