The Cleanup Episode

Every codebase has a moment where you stop adding features and start asking “what have I done?” This was that moment. I ran two code review passes — one focused on the new shape nodes, one across the entire codebase — and then tackled three architectural performance improvements. The result is a codebase that’s measurably faster, considerably safer, and slightly less embarrassing.

The shape node audit

The six new shape nodes (Ellipse, RoundRect, Polygon, Star, Arc, Ring) shipped in the previous session. They worked. They had tests. They had documentation. They also had some… interesting implementation choices.

The Ring node was drawing circles with 360 line segments. One degree per segment. 362 allocations per circle. Two circles per ring. That’s 724 path segments for a donut.

The fix: 4 cubic Bezier curves per circle. It’s the standard technique — control point distance of k ≈ 0.5522847498 gives you sub-pixel accuracy. Error is less than 0.027%. The ring went from 724 segments to 10. Same visual result, 72x fewer allocations. The Arc node got the same treatment.
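The 4-curve approximation is compact enough to sketch in full. This is an illustrative version, not the actual node code; the function name and segment layout are assumptions, and the error check just confirms the sub-0.027% bound the technique is known for:

```rust
// Standard 4-cubic-Bezier circle approximation. The magic constant is the
// control-point distance for a 90-degree arc: k = (4/3) * (sqrt(2) - 1).
const K: f64 = 0.552_284_749_8;

/// The 4 cubic segments (ctrl1, ctrl2, end) of a circle of radius `r`
/// centred at the origin, starting from (r, 0), counter-clockwise.
fn circle_beziers(r: f64) -> [[(f64, f64); 3]; 4] {
    let k = K * r;
    [
        [(r, k), (k, r), (0.0, r)],
        [(-k, r), (-r, k), (-r, 0.0)],
        [(-r, -k), (-k, -r), (0.0, -r)],
        [(k, -r), (r, -k), (r, 0.0)],
    ]
}

fn main() {
    // One move-to plus four curve-to's per circle: a ring (two circles)
    // becomes 10 path elements instead of 724 line segments.
    let segs = circle_beziers(1.0);
    assert_eq!(segs.len(), 4);

    // Radial error check at t = 0.25 on the first quadrant.
    let (p0x, p0y) = (1.0_f64, 0.0_f64);
    let [(c1x, c1y), (c2x, c2y), (p3x, p3y)] = segs[0];
    let (t, mt) = (0.25_f64, 0.75_f64);
    let x = mt.powi(3) * p0x + 3.0 * mt.powi(2) * t * c1x
        + 3.0 * mt * t.powi(2) * c2x + t.powi(3) * p3x;
    let y = mt.powi(3) * p0y + 3.0 * mt.powi(2) * t * c1y
        + 3.0 * mt * t.powi(2) * c2y + t.powi(3) * p3y;
    let err = (x * x + y * y).sqrt() - 1.0;
    assert!(err.abs() < 0.0003); // well under the 0.027% bound
    println!("radial error at t=0.25: {err:.6}");
}
```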

Polygon and Star were cloning their entire segment vector before passing it to the draw command. Totally unnecessary — the segments were built locally and never used again after being passed to LayerData. Removed the .clone(), moved ownership directly.
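The fix is a one-character diff in spirit. A hypothetical sketch of the shape (the `LayerData` field and builder name are assumptions, only the move-instead-of-clone pattern is the point):

```rust
// Stand-in for the real layer type; field name is an assumption.
struct LayerData {
    segments: Vec<(f64, f64)>,
}

fn build_polygon(sides: usize) -> LayerData {
    let segments: Vec<(f64, f64)> = (0..sides)
        .map(|i| {
            let a = i as f64 / sides as f64 * std::f64::consts::TAU;
            (a.cos(), a.sin())
        })
        .collect();
    // Before: LayerData { segments: segments.clone() }, a full copy of a
    // vector that was never used again in this scope. After: move it.
    LayerData { segments }
}
```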

No radius clamping anywhere. Pass a negative radius to any shape node and you’d get… something. Maybe inverted geometry. Maybe invisible output. Maybe undefined behaviour in kurbo. Now all eight shape nodes clamp negative radii to 0.0.

Polygon sides and Star points were unbounded. Want a polygon with 2 billion sides? Sure, enjoy your OOM. Capped at 1000, which is already overkill — past about 64 sides it’s visually indistinguishable from a circle, but I’m not here to judge your artistic choices.
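Both fixes reduce to one sanitising step at the top of the node. A hedged sketch (the constant and names are assumptions; only the clamping pattern is from the text):

```rust
// Upper bound on polygon sides / star points; past ~64 the result is
// visually a circle anyway.
const MAX_SIDES: i64 = 1000;

fn sanitize(radius: f64, sides: i64) -> (f64, usize) {
    // Negative radii used to produce inverted or invisible geometry.
    let radius = radius.max(0.0);
    // A polygon needs at least 3 sides; cap the top end to avoid OOM.
    let sides = sides.clamp(3, MAX_SIDES) as usize;
    (radius, sides)
}
```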

14 issues fixed across 9 files.

The full codebase review

Then I did the same thing to the rest of the codebase. This one was humbling.

Five unwrap() calls in the render pipeline

Five. In the render pipeline. The thing that runs 60 times a second. Each one was a potential panic that would crash the entire application.

// Before: courage
let node = graph.node(node_id).unwrap();

// After: cowardice (the good kind)
let Some(node) = graph.node(node_id) else { return; };

All five replaced with let-else patterns that log and gracefully skip instead of panicking. The app should never crash because a node got removed between frames.

The 274KB font copy

The theme module was loading the embedded font data — a static &[u8] baked into the binary — and then copying it onto the heap every time it was referenced. 274KB of font data, memcpy’d into a fresh Vec<u8>, 60 times a second.

The fix wraps the static slice in an Arc directly. Zero copies. The font data lives in the binary’s read-only segment and stays there.
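A minimal sketch of the before/after, assuming the handle type is something like `Arc<&'static [u8]>` (the real field and type names in the theme module are unknown to me):

```rust
use std::sync::Arc;

// Illustrative stand-in: this plays the role of the embedded 274KB font.
static FONT_BYTES: &[u8] = &[0u8; 64];

// Before (per reference): FONT_BYTES.to_vec(), a fresh heap copy each time.
// After: wrap the 'static slice once; every clone is a refcount bump.
fn font_handle() -> Arc<&'static [u8]> {
    Arc::new(FONT_BYTES)
}

fn main() {
    let a = font_handle();
    let b = a.clone();
    // Both handles point at the same bytes in the binary's read-only segment.
    assert_eq!(a.as_ptr(), b.as_ptr());
    assert_eq!(a.len(), 64);
}
```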

I’d like to say I caught this with a profiler. I did not. I caught it by reading the code and going “wait, that can’t be right.”

Z-order hit testing was backwards

Remember the draw order system from the last session? Nodes render front-to-back using a draw_order vector. Beautiful. Except node_at_pos() and pin_at_pos() were still iterating the HashMap to find what’s under the cursor. So you’d click on the top node and select the one behind it.

The fix: iterate draw_order in reverse so the topmost node wins. The kind of bug that makes you wonder how you didn’t notice it immediately, until you remember you only had three nodes on screen.
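The shape of the fix, sketched with assumed signatures (the real `node_at_pos` takes whatever the graph's node and rect types are):

```rust
use std::collections::HashMap;

struct Rect { x: f64, y: f64, w: f64, h: f64 }

impl Rect {
    fn contains(&self, px: f64, py: f64) -> bool {
        px >= self.x && px < self.x + self.w && py >= self.y && py < self.y + self.h
    }
}

fn node_at_pos(
    draw_order: &[u64],          // back-to-front render order
    rects: &HashMap<u64, Rect>,
    (px, py): (f64, f64),
) -> Option<u64> {
    // Reverse iteration: the last-drawn (topmost) node wins the hit test.
    draw_order
        .iter()
        .rev()
        .copied()
        .find(|id| rects.get(id).is_some_and(|r| r.contains(px, py)))
}
```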

Per-character heap allocations in fuzzy search

The fuzzy search scorer was calling ch.to_string() for every character in every candidate string. That’s a heap allocation per character. For search. Which runs on every keystroke.

Replaced with ch.encode_utf8(&mut buf) using a stack buffer. Same result, zero allocations.
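A char encodes to at most 4 UTF-8 bytes, so a stack array is always big enough. A minimal sketch of the pattern (the scorer's real comparison logic is assumed):

```rust
fn char_eq_str(ch: char, s: &str) -> bool {
    let mut buf = [0u8; 4]; // max UTF-8 length of any char
    let encoded: &str = ch.encode_utf8(&mut buf); // reborrow as shared &str
    // Before: ch.to_string() == s, one heap allocation per character.
    encoded == s
}
```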

The Distinct node lied about itself

The Distinct node’s description said “removes duplicate values.” It actually removes consecutive duplicates. [1, 2, 1] stays [1, 2, 1] — it’s a dedup, not a unique. Fixed the description to match the actual behaviour. Sometimes the bug is in the docs.
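The distinction is exactly the one `Vec::dedup` makes in std:

```rust
fn main() {
    // Vec::dedup removes only consecutive duplicates, which is what the
    // Distinct node actually does.
    let mut v = vec![1, 2, 1];
    v.dedup();
    assert_eq!(v, vec![1, 2, 1]); // non-adjacent duplicates survive

    let mut w = vec![1, 1, 2, 2, 2, 3];
    w.dedup();
    assert_eq!(w, vec![1, 2, 3]); // adjacent runs collapse
}
```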

The graph comment that was wrong

A comment on propagate_dirty said “BFS traversal.” The code does DFS. I don’t know who wrote that comment but I have my suspicions and I’m not pressing charges.
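For reference, the telltale is the data structure: an explicit stack with pop() is depth-first; a queue with pop_front() would be the BFS the comment claimed. A sketch of what a stack-based propagate_dirty plausibly looks like (names and graph representation are assumptions):

```rust
use std::collections::{HashMap, HashSet};

fn propagate_dirty(start: u64, downstream: &HashMap<u64, Vec<u64>>) -> HashSet<u64> {
    let mut dirty = HashSet::new();
    let mut stack = vec![start]; // Vec + pop() = DFS, not BFS
    while let Some(id) = stack.pop() {
        if dirty.insert(id) {
            if let Some(next) = downstream.get(&id) {
                stack.extend(next.iter().copied());
            }
        }
    }
    dirty
}
```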

25+ issues fixed across 18 files.

Three performance improvements

After cleaning up the mess, I tackled three architectural changes that affect every frame.

Arc<LayerData> — O(1) layer cloning

PinValue::Layer used to hold LayerData directly. Every time a layer value crossed a wire, got auto-spread, or passed through coerce(), the entire struct — including all its DrawCommand vectors — got deep-cloned.

Now it holds Arc<LayerData>. Cloning is an atomic refcount bump. O(1). The shape transform nodes (Translate, Rotate, Scale, Group) needed updating to unwrap the Arc, but the payoff is huge: a patch with 100 shapes connected through transform chains was doing thousands of unnecessary deep copies per frame.
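A sketch of the before/after and of how a transform node copes with the shared data. `Arc::make_mut` is my assumed unwrap mechanism here (it clones only when the layer is actually shared); the real nodes may do it differently:

```rust
use std::sync::Arc;

#[derive(Clone)]
struct DrawCommand; // stand-in for the real draw commands

#[derive(Clone)]
struct LayerData {
    commands: Vec<DrawCommand>,
}

// Before: Layer(LayerData), deep-cloned on every wire crossing.
// After: Layer(Arc<LayerData>), cloned with an atomic refcount bump.
#[derive(Clone)]
enum PinValue {
    Layer(Arc<LayerData>),
}

fn translate(value: &PinValue) -> PinValue {
    let PinValue::Layer(layer) = value;
    let mut layer = layer.clone(); // O(1) refcount bump
    let data = Arc::make_mut(&mut layer); // copy-on-write: deep clone only if shared
    data.commands.push(DrawCommand); // illustrative mutation
    PinValue::Layer(layer)
}
```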

Zero-allocation output maps

The evaluator creates a HashMap<String, PinValue> for each node’s outputs, every frame. That’s a fresh HashMap with fresh String keys, 60 times a second, for every node.

Now eval.rs pre-populates the output map with pin name keys from NodeInfo before calling process(). The process context uses get_mut() to update existing values in-place. After the first frame, the output path does zero String allocations.
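The pattern, with assumed names (the real code reads pin names from NodeInfo and threads the map through a process context):

```rust
use std::collections::HashMap;

#[derive(Clone, Debug, PartialEq)]
enum PinValue {
    Float(f64),
}

// Built once per node from its pin names; String keys allocated here only.
fn make_output_map(pin_names: &[&str]) -> HashMap<String, PinValue> {
    pin_names
        .iter()
        .map(|name| (name.to_string(), PinValue::Float(0.0)))
        .collect()
}

// The per-frame path: overwrite the existing slot in place via get_mut,
// so no String key is allocated after the first frame.
fn set_output(outputs: &mut HashMap<String, PinValue>, pin: &str, value: PinValue) {
    if let Some(slot) = outputs.get_mut(pin) {
        *slot = value;
    }
}
```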

Wire drag overlay

When you drag a wire, compatible pins on other nodes pulse to show where you can connect. The pulse animation was stored in the node render cache, which meant every compatible node’s cache was invalidated every frame during a wire drag. Dragging a wire in a patch with 50 nodes was rebuilding 30+ node renders per frame.

Now the pulse is drawn as a post-cache overlay. The node cache stays valid. Wire dragging went from “rebuild everything” to “draw a few circles on top.”

The numbers

What                              Count
Adversarial issues fixed          39
unwrap() calls removed            5
Heap copies eliminated            274KB/frame
Path segments reduced (Ring)      724 → 10
Files touched                     28

The codebase is the same size. It just does less stupid stuff now.
