The Differential Analyzer Story
Hamming's first rule of systems engineering: If you optimize the components, you will probably ruin the system performance.
He illustrates it with a story from his own work. He operated a differential analyzer — an analog computer that solved differential equations by mechanical integration. Demand grew, so a second unit was ordered, to be connected with the first so both could operate separately or together.
The builders, proud of their craft, improved the amplifiers in the new unit. Hamming insisted: any improvement must not interfere with the overall system operation. On acceptance day, he ran the classic test: solve y'' + y = 0, plot y vs y', expect a perfect circle. It failed immediately.
The cause: the improved amplifiers drew more current through the grounding circuit. The inadequate grounding, which had worked fine with the original amplifiers, now allowed leakage currents to couple between subsystems. The improvement of one component (amplifiers) degraded the interface (grounding), and the system failed.
The fix was trivial — heavier copper grounding — but the principle was clear: a component improvement changes its interface behavior. The rest of the system was designed around the old interface. Improve the component, break the interface, degrade the system.
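The acceptance test itself is easy to reproduce numerically. This is a minimal sketch (my construction, not Hamming's setup): for y'' + y = 0 with y(0) = 1, y'(0) = 0, the solution is y = cos t, y' = -sin t, so the (y, y') trajectory satisfies y² + y'² = 1, a unit circle. A working analyzer traces that circle; a faulty one drifts off it.

```python
import math

def simulate(steps=10_000, t_end=2 * math.pi):
    """Integrate y'' + y = 0 from y(0)=1, y'(0)=0 and return (y, y') points."""
    dt = t_end / steps
    y, v = 1.0, 0.0              # v stands for y'
    points = [(y, v)]
    for _ in range(steps):
        # Semi-implicit Euler: update v first, then y with the new v.
        # This keeps the trajectory close to the circle over a full period.
        v -= y * dt              # v' = -y
        y += v * dt              # y' = v
        points.append((y, v))
    return points

points = simulate()
max_radius_error = max(abs(math.hypot(y, v) - 1.0) for y, v in points)
print(f"max deviation from unit circle: {max_radius_error:.2e}")
```

A healthy run keeps the deviation tiny; on the failed analyzer, coupling through the bad ground would show up as a visibly distorted figure rather than a circle.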
Recognizing Component Optimization
Hamming notes that the opposite intuition seems so reasonable: if you make an isolated component better, then the whole system will be better. And yet it is wrong. The failure is interface-mediated: the component improvement changes the signal the interface sees.
Interfaces Over Components
Hamming's practical conclusion: systems engineers must design and verify interfaces first, components second. A perfect component with a broken interface is useless. A mediocre component with a well-specified interface can be improved later.
Rule 2: the bounding conditions (constraints) of a system are often more important than the optimum values inside those bounds. A system designed to maximize performance at the expected operating point is often fragile: small excursions outside the expected range cause failures. A system designed to operate safely across a broad range — with well-defined constraints — is robust.
Example: a communications system designed for exactly 100 Mbps of traffic at 25°C will fail if traffic spikes to 110 Mbps or temperature rises to 40°C. A system designed with a constraint 'must not exceed 90% utilization at any temperature below 60°C' is more useful, even if its peak performance is slightly lower.
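The constraint-first design can be made explicit as an operating-envelope check. A minimal sketch, using the hypothetical figures from the example above:

```python
def within_envelope(utilization: float, temp_c: float) -> bool:
    """Hypothetical design constraint from the example:
    utilization must not exceed 90% at any temperature below 60 C;
    at or above 60 C the system is out of spec regardless of load."""
    return temp_c < 60.0 and utilization <= 0.90

# Inside the envelope: normal operation.
assert within_envelope(utilization=0.85, temp_c=25.0)
# Small excursions leave it: a traffic spike or a hot day.
assert not within_envelope(utilization=0.95, temp_c=25.0)
assert not within_envelope(utilization=0.85, temp_c=65.0)
print("envelope checks pass")
```

The point of writing the constraint down as a predicate is that it can be checked continuously in operation, whereas "peak performance at the expected operating point" cannot.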
The systems engineer's job: not to optimize A or B individually, but to optimize A+B+C... as a whole, subject to constraints.
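A toy illustration of that job (my construction, not from Hamming): two components A and B share a 10 W power budget, each with diminishing returns, B more efficient than A. Optimizing each component in isolation means giving each all the power it wants, which violates the shared constraint; the systems view optimizes the total subject to the budget.

```python
def throughput(p_a: float, p_b: float) -> float:
    """Total system throughput for power allocations p_a, p_b (hypothetical curves)."""
    return p_a ** 0.5 + 2 * p_b ** 0.5   # B is more power-efficient than A

BUDGET = 10.0  # watts, shared across both components

# Component thinking: each component takes all the power it wants.
greedy = (BUDGET, BUDGET)                 # 20 W total: infeasible
print(f"greedy draws {sum(greedy):.0f} W against a {BUDGET:.0f} W budget")

# Systems thinking: maximize total throughput subject to p_a + p_b <= BUDGET.
# A coarse grid search over feasible allocations is enough for the toy.
best = max(
    ((p, BUDGET - p) for p in (i / 100 for i in range(0, 1001))),
    key=lambda alloc: throughput(*alloc),
)
print(f"feasible optimum: A={best[0]:.1f} W, B={best[1]:.1f} W, "
      f"throughput={throughput(*best):.2f}")
```

The feasible optimum deliberately starves A relative to what a component-level view would demand: the whole-system objective, not any component's, decides the allocation.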
The Education System: Failed Systems Engineering
Hamming applies his own principle to education. Over decades, universities have optimized individual mathematics courses: Calculus has been stripped to its essentials; Linear Algebra has been cleaned up and tightened. Each course, assessed individually, looks better.
But viewed as a system, large gaps have appeared:
- Mathematical induction: barely mentioned after high school.
- Complex numbers: introduced briefly in algebra, then avoided until late in Linear Algebra when complex eigenvalues appear. Students face two new, difficult ideas simultaneously with no prior preparation.
- Undetermined coefficients: briefly mentioned.
- Impossibility proofs: almost entirely absent.
- Discrete mathematics: largely ignored.
The optimization of each component (each course) created interface gaps: missing conceptual bridges between courses. The system's output — educated engineers and scientists — suffered, even though each course's output metrics improved.
Resisting the Natural Urge to Fix the Broken Part
Hamming's observation: it is easy to say the right words about systems engineering. Very few people can actually do it when the moment comes.
The natural response when a system fails: identify the most obviously broken component and fix it. This is component thinking. The system failed for a reason that involves the interaction of components, interfaces, and constraints — but the most visible failure is usually at a single component.
The systems engineer's discipline: before fixing the visible failure, ask: why did the system produce this failure at this component? Is the component actually underperforming, or is it being asked to operate outside its design envelope by the rest of the system? Fixing the component symptom leaves the system failure intact.
The communication bottleneck in large organizations follows this pattern: a department communicates poorly (visible failure). Component fix: hire better communicators. Systems fix: redesign the information flow architecture so that less communication is required to achieve the same coordination.
Systems Diagnosis
The distinction: a component fix treats a symptom. A systems fix treats the cause. The cause usually involves the structure of the system — which components exist, what interfaces connect them, what constraints bound their operation.