
Analog & Digital Precursors

Hamming opened his hardware history with a distinction: analog computing (slide rule, differential analyzer) versus digital computing (Napier's bones, desk calculators). Both lines developed in parallel; the digital line eventually dominated.

Napier (1550–1617)

John Napier invented logarithms, which enabled the slide rule: an analog device where physical length represents logarithmic value, so adding lengths multiplies numbers. Napier also designed 'Napier's bones', ivory rods that assist with multiplication. The bones are a digital device, not to be confused with the analog slide rule.

From Schickard to Babbage

Wilhelm Schickard (1623) designed a machine for the four arithmetic operations; it burned before completion. Pascal (1623–1662) built an adding machine for tax assessment. Leibniz added multiplication & division, but the machines were unreliable.

Charles Babbage (1791–1871) designed the difference engine: a machine to evaluate polynomials at equally spaced values by successive addition, producing error-free printed tables. He never completed it; a Swedish father-and-son team, the Scheutzes, built working versions. In 1991 a team in England built Babbage's Difference Engine No. 2 from his drawings, and it worked.

Babbage's analytical engine came close to the von Neumann architecture: a store (memory), a mill (arithmetic unit), & conditional branching. Lady Lovelace published the first programs for it.

Relay Computers to ENIAC

The modern era of computing began with relay computers in the early 1940s. Hamming traced the speed evolution:

| Era | Technology | Speed |
|---|---|---|
| Pre-1940 | Hand calculator | 1/20 ops/sec |
| ~1940 | Relay (Bell Labs) | 1 op/sec |
| 1946 | ENIAC (vacuum tube) | ~5,000 ops/sec |
| 1952 | IBM 701 | ~17,000 ops/sec |
| ~1993 | Modern workstation | 10⁹ ops/sec |

George Stibitz at Bell Telephone Laboratories built relay computers from parts of condemned M9 gun directors. Hamming used one for years. These relay machines were not fast — about one operation per second — but they were reliable & programmable.

ENIAC (1945–1946) was initially programmed by plugboard wiring. Nick Metropolis & Dick Clippinger later converted it so that programs could be set on the switches of its function tables, which had been intended to hold ballistic data. ENIAC consumed roughly 150 kW.

Internal programming became practical only when adequate storage existed. Von Neumann served as a consultant to Mauchly & Eckert; the team had discussed internal programming before his involvement, though it was his widely circulated (but never formally published) reports that spread the concepts.

Moore's Law Speed Curve

Orders of Magnitude in Speed

The speed progression from hand calculator to 1990s workstation spans over ten orders of magnitude in roughly 50 years.

From Hamming's speed table: hand calculator at 1/20 ops/sec to modern workstation at 10⁹ ops/sec. By what factor did computing speed increase? Express your answer as a power of 10, & explain what that order-of-magnitude jump means for what becomes computationally feasible.
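If you want to verify your arithmetic afterward, here is a minimal Python sketch using only the two endpoint speeds quoted above (nothing else is assumed):

```python
# Ratio between Hamming's slowest and fastest entries (illustrative check only).
import math

hand_calculator = 1 / 20      # ops/sec, pre-1940 hand calculation
workstation_1993 = 1e9        # ops/sec, ~1993 workstation

factor = workstation_1993 / hand_calculator
print(f"speedup factor:      {factor:.1e}")               # 2.0e+10
print(f"orders of magnitude: {math.log10(factor):.1f}")   # ~10.3
```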

Where Hardware Cannot Go

Hamming presented three physical limits that constrain how fast computers can ultimately run. These were not engineering challenges that ingenuity would overcome — they were laws of physics.

Limit 1: Molecular Size

Components cannot shrink below atomic dimensions. At a transistor gate only ~10 atoms wide, quantum tunneling dominates & the transistor ceases to switch reliably. By 1993 Hamming estimated that going below a spacing of ~100,000 atoms between interconnected devices (roughly 3 picoseconds of light travel time) marked a real physical boundary.

Limit 2: Speed of Light

Signals propagate at most at c = 3×10⁸ m/s in vacuum (less in wire, roughly 2×10⁸ m/s). A 1 GHz clock cycle is 1 nanosecond; in 1 ns, light travels 30 cm. A chip that must send a signal across 15 cm & receive a reply within one clock cycle is operating near the light-speed limit.

As clock speeds increase, chip dimensions must shrink to keep signal round-trip time below one clock period.
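To see how quickly this constraint tightens, the sketch below computes the round-trip distance budget for a few clock rates, assuming the ~2×10⁸ m/s in-wire signal speed quoted above (the function name and the chosen clock rates are illustrative, not from Hamming):

```python
# Round-trip distance budget per clock cycle, assuming ~2/3 c signal speed in wire.
SIGNAL_SPEED = 2e8  # m/s

def max_round_trip_cm(clock_hz: float) -> float:
    """Distance (cm) a signal can travel out and back within one clock period."""
    return SIGNAL_SPEED / clock_hz * 100  # metres -> centimetres

for ghz in (1, 3, 10):
    print(f"{ghz:>2} GHz: {max_round_trip_cm(ghz * 1e9):5.1f} cm round trip per cycle")
# 1 GHz -> 20.0 cm, 3 GHz -> ~6.7 cm, 10 GHz -> 2.0 cm
```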

Limit 3: Heat Dissipation

More components per unit area + faster switching = more power per unit area = more heat. Heat must dissipate or components melt. By 1993, operating voltages were dropping toward 2–3 V to reduce switching power. Diamond layers as heat conductors were under investigation. Reversible computing (thermodynamically lossless) existed only in theory.

These three limits collectively explain why single-processor speed gains were approaching saturation by the 1990s & why interest in parallel architectures grew.

Applying the Light-Speed Limit

A CPU runs at 3 GHz. One clock cycle = 1/3 ns ≈ 0.333 ns. Signal speed in copper: ~2/3 c ≈ 2×10⁸ m/s.

For a 3 GHz CPU where signals travel at 2×10⁸ m/s: what is the maximum round-trip distance a signal can travel in one clock cycle? If the processor core must communicate with an L3 cache chip 5 cm away, how many clock cycles does the round-trip take? Show your work.
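One way to check your result afterward, using only the figures given in the prompt (the 5 cm cache distance is the hypothetical value stated above):

```python
# Wire-delay-only estimate for the cache round trip described in the exercise.
signal_speed = 2e8            # m/s, ~2/3 c in copper
period = 1 / 3e9              # seconds per cycle at 3 GHz (~0.333 ns)

budget = signal_speed * period            # metres of travel available per clock cycle
print(f"round-trip budget per cycle: {budget * 100:.2f} cm")   # ~6.67 cm

cache_one_way = 0.05                      # 5 cm to the L3 cache chip
cycles = 2 * cache_one_way / budget       # 10 cm round trip vs. per-cycle budget
print(f"cache round trip: {cycles:.1f} cycles of pure wire delay")  # 1.5
```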

The Experts Who Were Spectacularly Wrong

Hamming noted one of the most celebrated prediction failures in technology history: experts in the late 1940s estimated the world would need at most three to five computers. Tom Watson of IBM reportedly said two.

By 1993, there were millions of computers in operation.

Why Experts Failed

The experts extrapolated from the use cases of the day: scientific calculation at national laboratories. They missed the crucial insight: computers would not merely do what humans already did, only faster; they would enable entirely new categories of work that had no prior existence.

The failure pattern: experts in a mature technology are most confident about its limits & most likely to be wrong about its future applications. Their expertise gives them accurate models of the present; it does not give them models of what will become possible.

Parallel Architectures

Single-processor speed approached saturation by the 1990s. The industry response: multiple arithmetic units, pipelines, cache hierarchies, & massively parallel machines. By 1993 there was no single dominant parallel architecture — many competing designs with different tradeoffs & different programming models. Hamming noted this fragmentation as a problem: without a standard, programming effort gets divided across incompatible systems.

Why Experts Predict Wrong

Hamming treated the 3-5 computer prediction not as a curiosity but as a lesson about the limits of expert knowledge. Experts model the present well; they fail on applications that do not yet exist.

Name a technology prediction failure from any field you know where experts confidently estimated a small upper bound on adoption or use, and were proven catastrophically wrong. What specific failure in their reasoning led to the error? Apply Hamming's insight: what would they have needed to see that their expertise prevented them from seeing?

Los Alamos Data & the Growth Equation

Hamming cited a chart compiled by Los Alamos National Laboratory (LANL) tracking the speed of the fastest computer on the market at each point in time. The data fit an exponential growth law, a pattern later popularized, for transistor count, as Moore's Law.

The LANL equation: speed(t) = speed₀ × 10^(bt), where b ≈ 0.09 per year (doubling roughly every 3.3 years in operations per second, distinct from transistor count doubling).
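As a quick consistency check, the ~3.3-year doubling time follows directly from b by solving 10^(b·t) = 2; a minimal sketch:

```python
# Doubling time implied by the LANL exponent b (speed grows as 10**(b * t)).
import math

b = 0.09                               # per-year exponent from the fit
doubling_years = math.log10(2) / b     # t such that 10**(b * t) == 2
print(f"doubling time: {doubling_years:.2f} years")   # ~3.34 years
```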

Hamming used this to make a point about extrapolation: the exponential cannot continue indefinitely. The three physical limits set a ceiling. When does the exponential hit the ceiling? That transition marks the end of the single-processor era.

By 1993 the industry was already approaching that ceiling, driving interest in parallel architectures, pipeline tricks, & cache hierarchies — all small steps toward parallelism rather than faster single-threaded execution.

Hamming cited the LANL equation showing computing speed grew at roughly 10^(0.09t) per year (speed in ops/sec, t in years from some baseline). Starting from 1952 IBM 701 speed of ~17,000 ops/sec, predict the speed in 1993 (41 years later). Compare your answer to Hamming's stated value of roughly 10⁹ ops/sec. Do the numbers match? If they diverge, what does that tell you about the model?
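A sketch of the extrapolation the exercise asks for, using only the figures quoted in the text (the rounding is mine):

```python
# Extrapolate the LANL growth law from the 1952 IBM 701 to 1993.
speed_1952 = 17_000                   # ops/sec, IBM 701
b = 0.09                              # per-year exponent
years = 1993 - 1952                   # 41 years

predicted = speed_1952 * 10 ** (b * years)
print(f"predicted 1993 speed: {predicted:.2e} ops/sec")   # ~8.3e+07
# Hamming's quoted workstation figure is ~1e9 ops/sec, roughly an order of
# magnitude higher; the exercise asks what that gap says about the model.
```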