Optimize the System, Not Its Components
Hamming's First Rule of Systems Engineering
Hamming's core principle from Ch. 28: "If you optimize the components you will probably ruin the system performance."
He illustrated it with the differential analyzer story. Two units were to be connected. The builders improved the amplifiers in the second unit. On acceptance day, Hamming ran the standard test — solve y'' + y = 0, plot y vs y', expect a circle. It failed. The cause: improved amplifiers drew more current through the grounding circuit. The grounding was fine for the original design. It was not rated for the new current level. The interface broke, not the component.
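Hamming's acceptance test is easy to reproduce numerically. A minimal sketch, assuming a hand-rolled RK4 integrator rather than Hamming's analog hardware: solve y'' + y = 0 from y(0) = 1, y'(0) = 0, and check that the (y, y') point stays on the unit circle. A healthy system conserves the invariant y² + y'²; a broken interface shows up as drift.

```python
def rk4_step(state, h):
    """One RK4 step for the system y' = v, v' = -y (i.e. y'' + y = 0)."""
    def f(s):
        y, v = s
        return (v, -y)
    y, v = state
    k1 = f(state)
    k2 = f((y + h/2*k1[0], v + h/2*k1[1]))
    k3 = f((y + h/2*k2[0], v + h/2*k2[1]))
    k4 = f((y + h*k3[0], v + h*k3[1]))
    return (y + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
            v + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))

state = (1.0, 0.0)              # y(0) = 1, y'(0) = 0 -> the unit circle
for _ in range(10_000):         # integrate out to t = 100
    state = rk4_step(state, 0.01)

r2 = state[0]**2 + state[1]**2  # squared radius of the (y, y') point
print(abs(r2 - 1.0) < 1e-6)     # True: the trajectory stayed on the circle
```

On the differential analyzer the same test failed not because the equation was hard but because the grounding interface was overloaded; the point of the plot is that any deviation from the circle is immediately visible.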
His generalization: most system failures trace to interfaces, not components. Components get designed, tested, certified. Interfaces get designed as afterthoughts, tested rarely, and never certified independently. When a component changes, its interface behavior changes. Nothing downstream was designed for that new interface.
Key asymmetry: a 10× improvement in a component can produce a 10× degradation in the system if the component feeds a constrained interface. The improvement does not add — it subtracts.
The Education System as Failed Systems Engineering
Hamming's Education Case
Hamming applied this principle to education. Optimizing individual subject scores — drilling students to maximize test performance in each subject — produces students who score well on individual tests but cannot integrate knowledge across disciplines.
Each component (subject score) improves. The system (education, defined as integrated understanding) degrades. The interface between subjects — the student's ability to apply knowledge across domains — was never optimized. It atrophied.
This is not an accident of implementation. It is structural. When you measure and reward component performance, you get component optimization. Interfaces are invisible to component metrics.
His prescription: find the bottleneck in the system, then ask what happens downstream when you remove it. Removing a bottleneck floods the next queue; the constraint does not disappear, it relocates downstream.
Tracing Interface Degradation
Hamming showed that improving a component changes its interface behavior — and the rest of the system was designed around the old interface.
Nodes, Queues, Surge Scores
A MOAD Factory Model
Every software dependency graph forms a factory. Each node is a workstation. Each edge is a queue. Work enters a node's queue, gets processed, and flows to downstream queues.
Two scores characterize every node:
Surge score = speedup × in-degree
How much work floods downstream when this bottleneck clears. A node with in-degree 5 (5 upstream dependencies all feeding it) and a 100× speedup generates 500× surge downstream.
Betweenness = in-degree + out-degree
How central this workstation is to total flow. High betweenness means many paths pass through this node.
Two archetypes:
Workaholic node: high betweenness, high surge score. This is the bottleneck. Every queue upstream backs up because of it. Remove this bottleneck without staging downstream capacity, and everything downstream collapses simultaneously.
Glutton node: high in-degree, near-zero out-degree, low surge score. Consumes everything fed to it and emits nothing downstream, so clearing it floods no queue. Feels no pain because its bottleneck is internal, not throughput. The machine that forgets to halt — work enters, nothing exits, and the node reports 'busy' forever.
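Both scores and both archetypes fall out of a dependency graph directly. A minimal sketch: the graph shape, the 100× speedup on one node, and the classification thresholds below are illustrative assumptions, not part of the model. Note that the document's "betweenness" is defined as in-degree + out-degree, not the classical shortest-path betweenness.

```python
from collections import Counter

# (upstream, downstream): work flows along each edge.
edges = [
    ("parse", "resolve"), ("fetch", "resolve"), ("hash", "resolve"),
    ("lock", "resolve"), ("audit", "resolve"),      # in-degree 5 at resolve
    ("resolve", "cache"), ("cache", "consume"),
]
speedup = {"resolve": 100}   # the patched node; everything else stays at 1x

indeg = Counter(dst for _, dst in edges)    # Counter returns 0 for absent keys
outdeg = Counter(src for src, _ in edges)
nodes = sorted(set(indeg) | set(outdeg))

# surge score  = speedup x in-degree   (flood released when the node clears)
# betweenness  = in-degree + out-degree (this document's definition)
surge = {n: speedup.get(n, 1) * indeg[n] for n in nodes}
betweenness = {n: indeg[n] + outdeg[n] for n in nodes}

for n in nodes:
    kind = ("workaholic" if surge[n] > 100
            else "glutton" if indeg[n] > 0 and outdeg[n] == 0
            else "ordinary")
    print(f"{n:8} surge={surge[n]:4} betweenness={betweenness[n]} {kind}")
```

With these numbers, `resolve` scores surge 500 and betweenness 6 (the workaholic), while `consume` takes input and emits nothing (the glutton).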
MOAD-0001 & MOAD-0005: A Coupling Case
The GHC Case
Before a MOAD-0001 patch in GHC's dependency resolver: N=50,000 dependencies took 17 minutes to build. After: 10 seconds. Speedup: 100×.
What happens downstream? Every build cache, artifact store, and CI runner that was pacing itself to 17-minute batch arrivals now receives 100× more completed builds per hour. Caches that were designed to handle 60 build artifacts per hour now receive 6,000.
This is MOAD-0005: the cache stampede defect. Every cache key gets missed simultaneously because no cache was pre-warmed for the new arrival rate. The fix for MOAD-0001 manufactures MOAD-0005.
The coupling is not incidental. It is structural. Any O(N²) → O(N) speedup with in-degree > 1 produces a surge score above 1. A surge score above 100 is a MOAD-0005 candidate.
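The coupling can be checked arithmetically from the GHC numbers. The in-degree of 5 below is an assumed figure; everything else comes from the case above.

```python
# MOAD-0001 -> MOAD-0005 coupling, using the GHC case's numbers.
before_s, after_s = 17 * 60, 10         # build time: 17 min -> 10 s
speedup = before_s / after_s            # 102, i.e. the ~100x in the text
arrivals_before = 60                    # artifacts/hour the caches were sized for
arrivals_after = arrivals_before * speedup   # ~6,000/hour: every key misses

in_degree = 5                           # assumed: five upstream feeds
surge = speedup * in_degree             # ~510
moad_0005_candidate = surge > 100       # threshold stated in the text
print(moad_0005_candidate)              # True
```

Any in-degree above 1 keeps the verdict the same here; the speedup alone is past the threshold.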
Staging Before Disclosure
A build system processes 1,000 package dependency graphs per hour. You patch MOAD-0001 in its graph traversal, reducing build time from 60 minutes to 30 seconds — a 120× speedup. The system can now produce 120,000 graphs per hour, against downstream consumers sized for 1,000. Staging means releasing that capacity in increments the consumers can absorb, before the fix is disclosed.
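The capacity jump and one possible staged ramp can be sketched from these numbers; the doubling-per-stage policy is an illustrative assumption, not a prescription from the text.

```python
# Staging arithmetic: the patch raises capacity 120x, but the downstream
# was sized for 1,000 graphs/hour. Release the surge in stages instead.
baseline = 1_000                        # graphs/hour before the patch
speedup = (60 * 60) / 30                # 60 min -> 30 s per graph = 120x
post_patch = int(baseline * speedup)    # 120,000 graphs/hour

stages, rate = [], baseline
while rate < post_patch:
    rate = min(rate * 2, post_patch)    # hold each level until caches warm
    stages.append(rate)

print(post_patch, stages)
```

Doubling reaches full capacity in seven stages; each hold point gives the downstream queues a bounded surge to absorb rather than a 120× step function.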
When to Stop: The Halt Condition
The Halt Condition
A patch satisfies the halt condition — meaning: do not disclose — when all four conditions hold simultaneously:
1. Patch lives in a live system (merged, deployed)
2. No caretakers assigned to own the downstream impact
3. Downstream defect (MOAD-0005) unresolved
4. Speedup >= 100×
All four together = the baby cries. Assign the team before merging, not after.
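The four conditions reduce to a single conjunction. A minimal sketch; the record fields are hypothetical names, not an established schema.

```python
from dataclasses import dataclass

@dataclass
class Patch:
    deployed: bool              # 1. merged and running in a live system
    caretakers: int             # 2. people who own the downstream impact
    downstream_resolved: bool   # 3. MOAD-0005 fixed (or absent)
    speedup: float              # 4. measured speedup factor

def halts(p: Patch) -> bool:
    """True means do not disclose: all four halt conditions hold."""
    return (p.deployed
            and p.caretakers == 0
            and not p.downstream_resolved
            and p.speedup >= 100)

ghc = Patch(deployed=True, caretakers=0, downstream_resolved=False, speedup=100)
print(halts(ghc))   # all four hold: the baby cries
```

Assigning even one caretaker, or resolving the downstream defect first, flips the verdict, which is the point of assigning the team before merging.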
A node without a caretaker is a workstation with no worker. Work accumulates. Someone downstream collapses. The permacomputer principle applies: you do not fix the dispatch algorithm without staging the drivers. Three drivers, three million people: unblocking the algorithm creates a thundering herd of unserved requests, not faster delivery.
WALL-E: Gluttons & Workaholics
The WALL-E Model
Pixar's WALL-E depicts a factory model failure in its clearest form. Gluttons on hover chairs, fed without friction. Workaholics — WALL-E, EVE — dying at their stations to keep the feed running.
The glutton node (the humans on hover chairs) has maximum in-degree and zero out-degree: it consumes everything fed to it and produces nothing. Its surge score is zero — it is a sink, and with no outgoing edges nothing it does can flood a downstream queue. It feels no pain because nothing accumulates at its output. It simply consumes.
The workaholic node (WALL-E) has maximum betweenness: everything flows through it. It absorbs all input. It produces the only output. If it were ever replaced by a faster model, its surge would flood every downstream queue simultaneously.
The defect in the WALL-E system is not the gluttons. It is the absent caretaker: no one assigned to balance the workstations. No one staged the capacity before running the algorithm.
The pip Case: Pre-Disclosure Checklist
You discover MOAD-0001 in Python's pip dependency resolver. Measured speedup: 200×. pip runs on approximately 400 million installs per day. PyPI serves the packages.
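Running the same surge and halt arithmetic on the pip numbers gives the shape of the checklist. The in-degree and the two staging flags below are hypothetical placeholders that the pre-disclosure checklist exists to pin down; the speedup and install volume come from the case.

```python
# Pre-disclosure arithmetic for the pip case.
speedup = 200
installs_per_day = 400_000_000   # blast radius if the surge is unstaged

in_degree = 3                    # hypothetical: resolver feeds index, cache, CDN
surge = speedup * in_degree      # 600: far past the MOAD-0005 line of 100

checklist = {
    "caretakers_assigned": False,   # placeholder: assign before merging
    "moad_0005_resolved": False,    # placeholder: pre-warm PyPI caches
    "surge_staged": surge <= 100,   # False: needs a ramp, not a flip
}
disclose = all(checklist.values())
print(f"surge={surge} blast_radius={installs_per_day:,}/day disclose={disclose}")
```

Every unchecked item is a queue that floods on disclosure day; the checklist is the staged-capacity plan in list form.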