Analog & Digital Precursors
Hamming opened his hardware history with a distinction: analog computing (slide rule, differential analyzer) versus digital computing (Napier bones, desk calculators). Both lines developed in parallel; the digital line eventually dominated.
Napier (1550–1617)
John Napier invented logarithms, which enabled the slide rule — an analog device where physical length represents logarithmic value. Addition of lengths = multiplication of numbers. Napier also designed 'Napier's bones': ivory rods that assisted multiplication. The bones were a digital device, not to be confused with the analog slide rule.
From Schickard to Babbage
Wilhelm Schickard (1623) designed a machine for the four arithmetic operations; it burned before completion. Pascal (1623–1662) built an adding machine for tax assessment. Leibniz added multiplication & division, but the machines were unreliable.
Charles Babbage (1791–1871) designed the difference engine: a machine to evaluate polynomials at equally-spaced values by successive addition, producing error-free printed tables. He never completed it; a Swedish father-son team (the Scheutzes) built working versions. In 1991, a team at London's Science Museum built Babbage's Difference Engine No. 2 from his drawings; it worked.
Babbage's analytical engine was close to von Neumann architecture: a store (memory), a mill (arithmetic unit), & conditional branching. Lady Lovelace published the first programs for it.
Relay Computers to ENIAC
The modern era of computing began with relay computers in the early 1940s. Hamming traced the speed evolution:
| Era | Technology | Speed |
|---|---|---|
| Pre-1940 | Hand calculator | 1/20 ops/sec |
| ~1940 | Relay (Bell Labs) | 1 op/sec |
| 1946 | ENIAC (vacuum tube) | ~5,000 ops/sec |
| 1952 | IBM 701 | ~17,000 ops/sec |
| ~1993 | Modern workstation | 10⁹ ops/sec |
George Stibitz at Bell Telephone Laboratories built relay computers from parts of condemned M9 gun directors. Hamming used one for years. These relay machines were not fast — about one operation per second — but they were reliable & programmable.
ENIAC (1945–1946) was programmed at first by plugboard wiring. Nick Metropolis & Dick Clippinger later converted it so that programs could be set on its banks of decimal switches — a step toward stored programming, motivated by ballistic-table work. ENIAC consumed roughly 150 kW.
Internal (stored) programming became practical only once adequate storage existed. Von Neumann served as a consultant to Mauchly & Eckert; the team had discussed internal programming before his involvement, though his widely circulated (but never formally published) reports spread the concepts.
Orders of Magnitude in Speed
The speed progression from hand calculator to 1990s workstation spans over ten orders of magnitude in roughly 50 years.
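That span is easy to verify from the table's endpoints; a minimal sketch using the rough figures above (approximate, not benchmarks):

```python
import math

# Rough speeds from the table above, in operations per second
hand_calculator = 1 / 20   # pre-1940
workstation_1993 = 1e9     # ~1993

orders_of_magnitude = math.log10(workstation_1993 / hand_calculator)
print(f"{orders_of_magnitude:.1f} orders of magnitude")  # ~10.3
```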
Where Hardware Cannot Go
Hamming presented three physical limits that constrain how fast computers can ultimately run. These were not engineering challenges that ingenuity would overcome — they were laws of physics.
Limit 1: Molecular Size
Components cannot shrink below atomic dimensions. At a transistor gate only ~10 atoms wide, quantum tunneling dominates & the transistor ceases to switch reliably. By 1993 Hamming estimated that an interconnect distance below ~100,000 atoms between devices (which he put at roughly 3 picoseconds of light travel time) represented a real physical boundary.
Limit 2: Speed of Light
Signals propagate at most at c = 3×10⁸ m/s in vacuum (less in wire, roughly 2×10⁸ m/s). A 1 GHz clock cycle is 1 nanosecond; in 1 ns, light travels 30 cm. A chip that must send a signal across 15 cm & receive a reply within one clock cycle is operating near the light-speed limit.
As clock speeds increase, chip dimensions must shrink to keep signal round-trip time below one clock period.
Limit 3: Heat Dissipation
More components per unit area + faster switching = more power per unit area = more heat. Heat must dissipate or components melt. By 1993, operating voltages were dropping toward 2-3V to reduce switching power. Diamond layers as heat conductors were under investigation. Reversible computing (thermodynamically lossless) existed only in theory.
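The voltage drop Hamming mentions follows from the standard CMOS dynamic-power relation P ≈ α·C·V²·f (the physics behind the trend, not a formula from his text; the chip figures below are hypothetical):

```python
def dynamic_power(alpha, capacitance_f, voltage_v, freq_hz):
    """Approximate CMOS dynamic switching power: P = alpha * C * V^2 * f."""
    return alpha * capacitance_f * voltage_v**2 * freq_hz

# Hypothetical chip: activity factor 0.1, 1 nF switched capacitance, 100 MHz clock
p_5v = dynamic_power(0.1, 1e-9, 5.0, 100e6)
p_3v = dynamic_power(0.1, 1e-9, 3.0, 100e6)
print(f"5 V: {p_5v:.2f} W, 3 V: {p_3v:.2f} W")  # quadratic saving: (3/5)^2 = 36%
```

Because power scales with V², dropping from 5 V to 3 V cuts switching power to roughly a third even with no other changes.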
These three limits collectively explain why single-processor speed gains were approaching saturation by the 1990s & why interest in parallel architectures grew.
Applying the Light-Speed Limit
A CPU runs at 3 GHz. One clock cycle = 1/3 ns ≈ 0.333 ns. Signal speed in copper: ~2/3 c ≈ 2×10⁸ m/s. In one cycle a signal travels at most 2×10⁸ m/s × 0.333 ns ≈ 6.7 cm; for a signal to go out & return within the same cycle, communicating components can be at most ~3.3 cm apart.
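The same arithmetic, scripted so other clock speeds can be tried:

```python
C_VACUUM = 3e8                   # speed of light in vacuum, m/s
signal_speed = 2 / 3 * C_VACUUM  # rough signal speed in copper, m/s

clock_hz = 3e9                   # 3 GHz clock
cycle_s = 1 / clock_hz           # one clock period, seconds

one_way = signal_speed * cycle_s  # distance a signal covers in one cycle
round_trip_limit = one_way / 2    # max separation for a same-cycle reply
print(f"one-way: {one_way*100:.2f} cm, round-trip limit: {round_trip_limit*100:.2f} cm")
```

At 3 GHz the round-trip limit is about 3.3 cm, which is why rising clock rates force chip dimensions down.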
The Experts Who Were Spectacularly Wrong
Hamming noted one of the most celebrated prediction failures in technology history: experts in the late 1940s estimated the world would need at most three to five computers. Tom Watson of IBM reportedly said two.
By 1993, there were millions of computers in operation.
Why Experts Failed
The experts extrapolated from the use cases in front of them: scientific calculation at national laboratories. They missed the key product insight: computers would not merely do what humans already did, faster; they would enable entirely new categories of work that had no prior existence.
The failure pattern: experts in a mature technology are most confident about its limits & most likely to be wrong about its future applications. Their expertise gives them accurate models of the present; it does not give them models of what will become possible.
Parallel Architectures
Single-processor speed approached saturation by the 1990s. The industry response: multiple arithmetic units, pipelines, cache hierarchies, & massively parallel machines. By 1993 there was no single dominant parallel architecture — many competing designs with different tradeoffs & different programming models. Hamming noted this fragmentation as a problem: without a standard, programming effort gets divided across incompatible systems.
Why Experts Predict Wrong
Hamming treated the 3-5 computer prediction not as a curiosity but as a lesson about the limits of expert knowledge. Experts model the present well; they fail on applications that do not yet exist.
Los Alamos Data & the Growth Equation
Hamming cited a chart compiled by Los Alamos National Laboratory (LANL) tracking the speed of the fastest computer on the market at each point in time. The data fit an exponential equation: speed grew by a steady multiplicative factor per year, a growth pattern later popularized (for transistor count) as Moore's Law.
The LANL equation: speed(t) = speed₀ × 10^(bt), where b ≈ 0.09 per year (doubling roughly every 3.3 years in operations per second, distinct from transistor count doubling).
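The doubling time implied by the equation follows from solving 10^(bt) = 2 for t:

```python
import math

b = 0.09  # LANL fit: growth exponent per year in speed(t) = speed0 * 10**(b*t)
doubling_years = math.log10(2) / b
print(f"doubling time: {doubling_years:.1f} years")  # ~3.3 years
```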
Hamming used this to make a point about extrapolation: the exponential cannot continue indefinitely. The three physical limits set a ceiling. When does the exponential hit the ceiling? That transition marks the end of the single-processor era.
By 1993 the industry was already approaching that ceiling, driving interest in parallel architectures, pipeline tricks, & cache hierarchies — all small steps toward parallelism rather than faster single-threaded execution.