The GIGO Assumption
GIGO: 'garbage in, garbage out.' If you supply ill-determined numbers and equations, you get ill-determined results. The inverse is tacitly assumed as well: accurate inputs produce accurate outputs.
Hamming showed both assumptions can be false.
Garbage in, garbage out (true case): the early weather simulation. Small perturbations amplify. Inaccurate inputs produce wildly inaccurate outputs — a divergent direction field.
Garbage in, accurate out (GIGO reversed): the Los Alamos bomb simulation (Chapter 18). The equation-of-state data came from scattered, unreliable sources. Yet the bomb design worked. Why?
Because the computation structure involved second differences: the force on a shell depended on the difference between forces from adjacent shells. Local errors in the equation of state largely cancelled as shells traversed the curve. The computation used an effectively convergent structure.
Accurate in, garbage out: a theoretically possible case. If a computation amplifies small input uncertainties through a divergent direction field, precise inputs still produce imprecise outputs.
The lesson: the reliability of simulation output depends not on input precision alone, but on the whole structure of the computation — specifically, whether errors entering the system are amplified, preserved, or damped.
Feedback Protects Accuracy
Hamming connected GIGO reversal to Harold Black's feedback amplifier insight.
Black's discovery: if amplifier gain is very high, only the feedback resistor needs to be accurate. All other components can be imprecise. The feedback loop stabilizes the output against variations in components.
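Black's point can be checked in a few lines. A minimal sketch (the gain values and the feedback fraction β are illustrative choices, not from the text): with β = 0.01 the ideal closed-loop gain is 1/β = 100, and even a 4× spread in open-loop gain moves the result by only about 1%.

```python
# Negative-feedback gain: A = G / (1 + G*beta).
# For large loop gain G*beta, A ≈ 1/beta, so only the feedback
# network (one precise resistor ratio) determines the answer.

def closed_loop_gain(G, beta):
    """Closed-loop gain for open-loop gain G and feedback fraction beta."""
    return G / (1 + G * beta)

beta = 0.01                                          # ideal gain 1/beta = 100
gains = {G: closed_loop_gain(G, beta) for G in (5e3, 1e4, 2e4)}
spread = max(gains.values()) - min(gains.values())   # small despite 4x change in G
```

Only the component setting β needs to be precise; everything inside the loop can be sloppy.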
The same principle operates in simulations with feedback structure:
- The Nike missile guidance system corrected trajectory deviations automatically. Small errors in initial conditions were damped, not amplified. This let Hamming simulate the missile failure using guessed initial conditions — yet recover the correct period of the pitch-yaw energy transfer.
- The atomic bomb computation's second-difference structure acted like feedback: local equation-of-state errors averaged out over a shell's history.
The design implication: good simulation design, like good engineering design, protects accuracy by placing inaccurate components inside feedback loops. Vital quantities — those outside feedback protection — must be measured precisely.
Direction Fields & the Tube
A first-order ODE dy/dx = f(x,y) defines a direction field: at every point (x,y) in the plane, the slope f(x,y) points in the direction the solution must move.
A diverging direction field: small deviations from a true solution path grow. Errors amplify.
A converging direction field: large deviations shrink back toward the true path. Errors damp.
Both can occur in the same equation at different points. The solution accuracy depends on where you evaluate — not on any absolute property of the equation.
Hamming visualized accuracy as a 'tube' around the true solution. In 2D, the tube expands in diverging regions and contracts in converging ones. In n dimensions (the Navy intercept problem used 28 equations), the tube geometry becomes non-intuitive. The n-dimensional paradox from Chapter 9 applies: high-dimensional tubes behave nothing like 2D tubes.
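The two regimes can be made concrete with the simplest possible test equation (my choice, not Hamming's): dy/dx = a·y has exact solution y₀e^(ax), so two solutions started a distance δ apart are separated by δe^(ax) later on, growing when a > 0 and shrinking when a < 0.

```python
# Diverging vs. converging direction fields for dy/dx = a*y.

import math

def separation(a, y0, delta, x):
    """Gap at x between exact solutions started at y0 and y0 + delta."""
    return abs((y0 + delta) * math.exp(a * x) - y0 * math.exp(a * x))

gap_diverging  = separation(+1.0, 1.0, 0.01, 5.0)   # grows like e^5: errors amplify
gap_converging = separation(-1.0, 1.0, 0.01, 5.0)   # shrinks like e^-5: errors damp
```

The same starting perturbation of 0.01 becomes roughly 1.5 in the diverging field and about 7×10⁻⁵ in the converging one: the tube widens or narrows depending on the sign of a.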
Euler's Method
The simplest ODE solver: from point (xₙ, yₙ), estimate the next point using the current slope:
> yₙ₊₁ = yₙ + h · f(xₙ, yₙ)
where h is the step size. This follows the tangent line at each point — always using 'the slope that was', not a typical slope over the interval. Error accumulates with each step.
Predictor-corrector improvement: predict a value yₙ₊₁ using Euler, evaluate the slope there, then take the average of the slopes at both ends of the interval to make a corrected step. If the predicted and corrected values agree closely, the step size is appropriate. If they diverge, shorten h.
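The Euler step and the predictor-corrector refinement above can be sketched directly (the test equation dy/dx = −y, with exact solution e^(−x), is my choice):

```python
# Euler vs. a one-step predictor-corrector (Heun's method) on dy/dx = -y.

import math

def f(x, y):
    return -y

def euler_step(x, y, h):
    return y + h * f(x, y)                           # follow 'the slope that was'

def heun_step(x, y, h):
    y_pred = y + h * f(x, y)                         # predict with Euler
    slope_avg = 0.5 * (f(x, y) + f(x + h, y_pred))   # average slopes at both ends
    return y + h * slope_avg                         # corrected step

# Integrate from (0, 1) to x = 1 with h = 0.1 using both methods.
x, y_e, y_h, h = 0.0, 1.0, 1.0, 0.1
for _ in range(10):
    y_e = euler_step(x, y_e, h)
    y_h = heun_step(x, y_h, h)
    x += h

exact = math.exp(-1.0)
```

Over ten steps the averaged-slope step lands far closer to e⁻¹ than plain Euler, which is exactly the payoff of correcting the tangent-line guess with a more typical slope over the interval.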
Higher-Order Methods & the Filter Connection
Fourth-degree polynomial predictor-corrector methods (Milne, Adams-Bashforth, Hamming's method) use several past values of the function and derivative to predict the next value.
Hamming identified these methods as recursive digital filters: the output values (positions) are computed from input data (derivatives at past steps) by a linear recurrence — exactly the structure of a digital filter.
This connection has consequences:
- Stability analysis for recursive filters applies directly. The z-transform stability criterion: poles of the filter's transfer function must lie inside the unit circle.
- The step size h controls stability. For a given ODE, there is a maximum h beyond which the numerical method becomes unstable — the computed solution diverges even if the true solution converges.
Stiff equations: when a system has eigenvalues with very different magnitudes (one fast-changing component, one slow), stability requires a step size small enough for the fast component even when the slow component could tolerate large steps. Stiff solvers use implicit methods to allow larger steps without instability.
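The step-size limit is easy to exhibit (the test equation dy/dx = −50y and the step sizes are my choices): explicit Euler multiplies y by (1 − 50h) each step, so it is stable only for h < 2/50 = 0.04, while backward (implicit) Euler divides by (1 + 50h) and is stable for any h > 0. That is the trade stiff solvers make.

```python
# Explicit vs. implicit Euler on the stiff test equation dy/dx = -50*y.

lam = 50.0

def forward_euler(y, h):
    return y * (1 - lam * h)        # amplification factor 1 - lam*h

def backward_euler(y, h):
    return y / (1 + lam * h)        # amplification factor 1/(1 + lam*h) < 1 always

def run(stepper, h, n=100):
    y = 1.0
    for _ in range(n):
        y = stepper(y, h)
    return y

y_unstable = run(forward_euler, 0.05)   # |1 - 2.5| = 1.5 per step: blows up
y_implicit = run(backward_euler, 0.05)  # decays, like the true solution e^(-50x)
```

The true solution decays monotonically, yet the explicit method with h = 0.05 produces an enormous number: the computed solution diverges even though the real one converges.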
The frequency vs position tradeoff: classical polynomial methods optimize local position accuracy — the trajectory is close to the true path at each step, but the dynamic 'feel' (frequency response) may be wrong. For a flight simulator, getting the frequency response right may matter more than getting the position right.
Walking the Crest of the Dune
Hamming was given a differential equation for transistor design with a boundary condition at infinity: the equation's right-hand side had to go to zero there.
The stability analysis was alarming: if y at any point got slightly too large, sinh(y) amplified, the second derivative went strongly positive, and the solution shot to +∞. If y got slightly too small, it shot to -∞. And the instability was bidirectional — integrating in the opposite direction didn't help.
Hamming's image: 'walking the crest of a sand dune.' Once both feet slip to one side, you inevitably slide down.
His solution: exploit the instability as a guidance signal. He integrated a segment of the trajectory on the differential analyzer. If the solution shot upward, he was slightly too high in his slope estimate at the start of that segment — correct downward. If it shot downward, correct upward. Piece by piece, he walked the crest of the dune.
What made this possible: the instability grew fast. A small error in starting slope produced a large, unambiguous deviation — a clear signal about which direction to correct. A mildly unstable problem would have provided no such clear signal.
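The procedure can be imitated numerically (the concrete equation y″ = sinh(y) with y(0) = 1 is my stand-in for Hamming's transistor equation, and bisection replaces his by-hand corrections on the differential analyzer): the direction of blow-up is the guidance signal for the starting slope.

```python
# 'Walking the crest': use the direction of blow-up of y'' = sinh(y)
# to bisect on the unknown starting slope y'(0).

import math

def shoot(slope, h=1e-3, x_max=8.0):
    """Integrate y'' = sinh(y) from y(0)=1, y'(0)=slope.
    Return +1 if the trajectory runs off toward +inf (slope too high),
    -1 if toward -inf (slope too low)."""
    y, v, x = 1.0, slope, 0.0
    while x < x_max:
        y += h * v
        v += h * math.sinh(y)
        x += h
        if y > 10.0:
            return +1
        if y < -10.0:
            return -1
    return +1 if y > 0 else -1      # undecided by x_max: use the sign of y

lo, hi = -2.0, 0.0                  # lo shoots to -inf, hi shoots to +inf
for _ in range(40):
    mid = 0.5 * (lo + hi)
    if shoot(mid) > 0:
        hi = mid                    # shot upward: slope too high, correct downward
    else:
        lo = mid                    # shot downward: slope too low, correct upward

slope_est = 0.5 * (lo + hi)
```

For this equation the decaying solution has y′(0) = −√(2(cosh 1 − 1)) ≈ −1.042, and the bisection homes in on it; the violence of the instability is precisely what makes each up-or-down classification unambiguous.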
The professional obligation: 'It would have been so easy to dismiss the problem as insoluble, wrongly posed, or any other excuse you wanted to tell yourself, but I still believe important problems properly posed can be used to extract some useful knowledge.'
The Rorschach Test & Randomness
A Bell Labs psychologist built a machine: 12 switches, a red light, a green light. Subjects set the switches, pressed a button, observed the result, and after 20 tries wrote a theory of how to make the green light come on. Their theory was handed to the next subject, and the cycle continued.
The lights connected to a random source. There was no pattern.
In all the trials, not one subject (all were high-caliber members of the Bell Labs technical staff) ever said: there is no pattern. Every one of them found a theory.
Hamming's observation: not one was a statistician or information theorist. Those two fields train practitioners to ask: 'Is what I am seeing really there, or is it merely random noise?'
The implication for simulation: a simulation that can be adjusted until it matches observed data is a Rorschach test. The adjustment process finds a model consistent with the data, but not necessarily the true model. Distinguishing signal from noise requires deliberate statistical discipline — holdout data, pre-specified hypotheses, confidence intervals — not just good intentions.
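The discipline the passage asks for can be as simple as a pre-specified binomial test (the experiment's parameters here are my assumptions): before crediting any switch theory, ask how often pure chance would produce at least the observed number of green lights in 20 tries.

```python
# Is what I am seeing really there, or is it merely random noise?
# Exact binomial tail probability against a fair red/green source.

import math
import random

def binom_p_at_least(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more greens by luck alone."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

random.seed(0)
greens = sum(random.random() < 0.5 for _ in range(20))   # the machine: pure noise
p_value = binom_p_at_least(greens, 20)
# The discipline is the pre-specified threshold (e.g. p < 0.05);
# eyeballing the record afterward is the Rorschach test.
```

Getting, say, 12 greens out of 20 happens by chance about a quarter of the time, so it is no evidence of a working theory; without a test like this, every run looks like a pattern.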
Hamming's closing charge: 'The What if...? will arise often in your futures, hence the need for you to master the concepts and possibilities of simulations, and be ready to question the results and to dig into the details when necessary.'