The First Large Computation
Richard Hamming's first large-scale simulation: Los Alamos, 1945. The target — design a workable atomic bomb.
The problem that made simulation necessary: no small-scale experiment exists. Critical mass is binary. Either fissile material exceeds critical mass and a chain reaction propagates, or it does not. You cannot run a scaled-down version.
The Spherical Implosion Design
One design used spherical symmetry — an implosion. Engineers divided the material into many concentric shells. For each shell they wrote equations for the forces on both faces, plus the equation of state relating pressure to density.
Time was discretized into intervals of 10⁻⁸ seconds — called 'shakes' (from 'a shake of a lamb's tail'). At each shake, the computation advanced: where does each shell move? What forces act on it?
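The shell-and-shake scheme can be sketched as a toy time-stepping loop. Everything physical below is invented for illustration (the `pressure` exponent, the driving pressure `P_DRIVE`, the fixed density); it shows only the structure: for each shake, for each shell, evaluate the face pressures and move the shell.

```python
# Toy sketch of the shell-and-shake scheme (illustrative physics only).
SHAKE = 1e-8          # one 'shake' = 10^-8 seconds
P_DRIVE = 100.0       # invented external driving pressure (the explosive)

def pressure(rho):
    """Placeholder equation of state relating pressure to density."""
    return rho ** 1.4  # made-up exponent, for illustration

def step(shells, dt=SHAKE):
    """Advance every shell by one shake: the same equations for each shell."""
    n, new = len(shells), []
    for i, s in enumerate(shells):
        # Pressure on the inner and outer faces of shell i.
        p_in = pressure(shells[i - 1]["rho"]) if i > 0 else pressure(s["rho"])
        p_out = pressure(shells[i + 1]["rho"]) if i < n - 1 else P_DRIVE
        a = (p_in - p_out) / s["rho"]            # acceleration per unit mass
        v = s["v"] + a * dt
        # Density held fixed in this toy; the real computation updated it.
        new.append({"r": s["r"] + v * dt, "v": v, "rho": s["rho"]})
    return new

shells = [{"r": 0.1 * (i + 1), "v": 0.0, "rho": 10.0} for i in range(50)]
for _ in range(1000):   # 1000 shakes = 10 microseconds of simulated time
    shells = step(shells)
```

With the invented boundary pressure, only the outermost shell feels a net inward force; the point is the loop shape, not the dynamics.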
Three Conditions That Force Simulation
Hamming identified situations where simulation replaces physical experiment:
1. Impossible experiments — critical mass cannot be tested at sub-scale
2. Dangerous experiments — you cannot detonate a bomb for calibration data
3. Too expensive or too slow — weather prediction over blocks of atmosphere, missile trajectory studies
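This list is history and motivation, but the third condition can at least be made concrete with a toy cost comparison (all numbers invented): when one physical trial costs orders of magnitude more than a simulated run, simulation wins even if many runs are needed.

```python
# Invented illustrative costs: one physical missile test vs. one simulated
# trajectory. The point is the ratio, not the specific dollar figures.
COST_PHYSICAL_TEST = 1_000_000.0   # dollars per live firing (assumed)
COST_SIMULATED_RUN = 100.0         # dollars of machine time (assumed)

def cheaper_to_simulate(n_design_variants, runs_per_variant=50):
    """Is exploring the design space cheaper in the machine?"""
    physical = n_design_variants * COST_PHYSICAL_TEST
    simulated = n_design_variants * runs_per_variant * COST_SIMULATED_RUN
    return simulated < physical

print(cheaper_to_simulate(20))   # True under these assumed costs
```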
The goal: produce equivalent results, not replicate the physical process exactly. The simulation need not match atom-by-atom reality. It must produce the same observable outcomes within the accuracy the design requires.
Equivalent Results
Hamming's key insight at Los Alamos: the equation-of-state data was inaccurate. Pressure-density relationships came from high-pressure labs, earthquake estimates, stellar core models — all with significant uncertainty.
The engineers read those curves to three decimal places, then tabulated them to five digits. Garbage in, seemingly.
Yet the bomb design worked.
Why? Because the computation took second differences of values on adjacent shells. Any local error in the equation of state averaged out over the history of a shell as it traversed the curve. What mattered: the curvature of the equation of state, and only on average.
Feedback within the computation compensated for inaccurate inputs.
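A toy numerical experiment (my illustration, not Hamming's computation) shows the mechanism: corrupt every entry of a tabulated curve with zero-mean noise, then accumulate a quantity along the whole curve, as a shell's history accumulates the equation of state over many steps. The accumulated result barely moves, because local errors cancel over the traversal.

```python
import random

random.seed(1)

# Tabulated curve standing in for an equation of state, sampled densely.
xs = [i / 1000 for i in range(1001)]
clean = [1.0 + 3.0 * x + x * x for x in xs]

# Corrupt every table entry with zero-mean noise (bad lab data).
noisy = [v + random.uniform(-0.05, 0.05) for v in clean]

def traverse(table, dx=0.001):
    """Accumulate the table along the whole curve -- the analogue of a
    shell's history sampling many entries of the equation of state."""
    return sum(v * dx for v in table)

exact = traverse(clean)
rough = traverse(noisy)
print(abs(rough - exact) / exact)   # small: local errors average out
```

Each individual table entry may be off by a few percent, yet the traversal is accurate to a fraction of a percent; only the average behavior of the curve survives.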
The Repetitive Core
Hamming identified a universal structural feature of large simulations: a highly repetitive inner loop.
At Los Alamos: the same force equations ran for every shell at every time step. The code for one shell ran thousands of times. Without that repetitive structure, the programming cost would have been prohibitive.
This principle generalizes: weather prediction breaks the atmosphere into blocks; the same physics equations update each block. Missile simulations step through the same trajectory equations at each time increment. Transistor design computes the same field equations at each spatial grid point.
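The common shape can be sketched generically: one small kernel applied to every cell at every time step. Here a 1-D diffusion update stands in for the physics of any of the examples above (the grid size, step count, and coefficient are arbitrary choices for the sketch):

```python
def step(grid, alpha=0.1):
    """One time step: the same update rule applied to every interior cell.
    This tiny kernel is the repetitive core -- it runs cells * steps times."""
    n = len(grid)
    return [grid[i] if i in (0, n - 1)   # fixed boundary cells
            else grid[i] + alpha * (grid[i-1] - 2*grid[i] + grid[i+1])
            for i in range(n)]

grid = [0.0] * 50
grid[25] = 1.0                 # a single hot cell
for _ in range(200):           # outer loop over time steps
    grid = step(grid)          # inner loop over cells: 50 * 200 kernel calls
```

All the programming effort concentrates in `step`; everything else is setup and bookkeeping, which is why exposing this structure early keeps the cost manageable.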
Hamming's advice: look early for the repetitive parts of any proposed simulation. Cast the simulation in a form that exploits repetition. A simulation without a tight inner loop is probably poorly structured.
Expert Knowledge as a Hard Prerequisite
Hamming returned repeatedly to a rule he considered non-negotiable: only a domain expert can know what matters.
A simulation expert can structure the code, choose numerical methods, debug the repetitive loop. But only someone fluent in the physics, chemistry, or engineering of the domain can determine:
- Which effects must appear in the model
- Which can safely be omitted
- Whether an unusual result signals a physical truth or a modeling error
At Los Alamos, Hamming was the computing expert. The physicists were the domain experts. Neither could substitute for the other.
Jargon as Barrier & Tool
One of Hamming's strongest operational rules: learn the jargon of the domain you're simulating.
His story: a Navy intercept problem with 28 simultaneous differential equations. He insisted the proposer — a physicist friend — walk through every line of the binary machine code with him before the computation ran.
Mid-walkthrough, the physicist said: 'Dick, that's fin limiting, not voltage limiting.' The same mathematical symbol, the same formal equation — but two different physical interpretations, yielding substantially different results.
The lesson: both parties understood the mathematics. Neither had a communication failure in the usual sense. But the physical meaning of the limiting operation was underdetermined by the equations alone.
Without the walkthrough, the simulation would have run with the wrong interpretation. No runtime error. No obviously bad output. Just wrong answers to an important problem.
Stable vs Unstable Problems
Hamming drew a sharp line between problems that simulation handles well and problems where simulation is nearly impossible.
The atomic bomb: small differences in shell trajectories along the way did not greatly affect the final yield. The simulation was stable — errors in intermediate steps did not amplify.
Weather prediction: the opposite. A small perturbation — 'whether a butterfly flaps its wings in Japan' — can, in principle, determine whether a storm hits the continent. Sensitivity to initial conditions makes day-to-day weather simulation unreliable beyond short horizons.
Both problems use the same mathematical structure: discretize space into cells, discretize time into steps, march forward. The difference lies in the direction field — whether small deviations from a trajectory grow (unstable) or shrink (stable) over time.
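The distinction shows up already in the two simplest direction fields (a toy of my own, not from the text): dx/dt = -x shrinks deviations, dx/dt = +x grows them, even though both are marched forward by the identical discretize-and-step machinery.

```python
def march(deriv, x0, dt=0.01, steps=500):
    """Euler time-stepping: the same structure for both problems."""
    x = x0
    for _ in range(steps):
        x += deriv(x) * dt
    return x

stable = lambda x: -x    # deviations from a trajectory shrink
unstable = lambda x: x   # deviations from a trajectory grow

# Perturb the initial condition slightly and watch the gap.
gap_stable = abs(march(stable, 1.001) - march(stable, 1.0))
gap_unstable = abs(march(unstable, 1.001) - march(unstable, 1.0))
print(gap_stable, gap_unstable)
```

The initial perturbation of 0.001 dies away in the stable case and is amplified by two orders of magnitude in the unstable one, with no change to the numerical method.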
Weather shows all three regimes at once: short-term instability (day-to-day chaos), long-term stability (seasons follow their rounds), and very-long-term instability (ice ages).
Hamming's rule: before beginning any simulation, examine whether the problem is fundamentally stable or unstable. If unstable, determine whether the instability is fundamental to the answer you need or an artifact of scale or boundary conditions. Do not discover this after investing months of work.
Simple First, Complete Later
Hamming's preferred method for approaching a new simulation:
1. Start simple — include only the main effects. Get the dominant behavior right.
2. Get insights early — a simple simulation reveals the structure of the problem before you invest in full detail.
3. Evolve toward completeness — add secondary effects progressively, verifying each addition against the simpler baseline.
He illustrated this with the Nike missile project. His early simulations used a simple exponential atmosphere model. Later simulations added altitude-dependent density profiles, crosswind terms, and nonlinear drag. But the early insights — vertical launch reduces drag in the dense lower atmosphere; larger wings cost more speed than they gain in late-stage maneuverability — came from the simple model.
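The simple exponential atmosphere is essentially one line of code, which is the point of starting with it. A sketch under assumed values (the scale height, drag coefficient, and area below are illustrative, not Nike project data), with the density model left as the pluggable refinement point for later stages:

```python
import math

def rho_simple(h, rho0=1.225, scale_height=8000.0):
    """Stage 1: exponential atmosphere. h in meters, density in kg/m^3.
    Scale height is an assumed round number, not project data."""
    return rho0 * math.exp(-h / scale_height)

def drag(v, h, cd=0.5, area=0.2, density=rho_simple):
    """Drag force; swap `density` for a richer profile in later stages."""
    return 0.5 * density(h) * cd * area * v * v

# Even the crude model yields the key insight: drag in the dense lower
# atmosphere dwarfs drag at altitude, so climb out of it quickly.
print(drag(300.0, 0.0) / drag(300.0, 10000.0))   # ~3.5x at these settings
```

Later stages would replace `rho_simple` with an altitude-dependent measured profile and add crosswind and nonlinear-drag terms, each verified against this baseline.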
Warning: at the end, freeze the design using the full simulation. The simple simulation earns insights; the full simulation earns the commitment.