Three Phases of Computer Application
Hamming's Chapter 5 opens with a retrospective: his 30-year series of talks at IBM customer training events forced him to understand trends rather than just facts. Preparing the same talk repeatedly required him to stay ahead of the field, not just current with it.
He identified three successive phases in how computers were applied:
Phase 1: Hardware limits (Chapter 3). Early computing was constrained by what the machine could do — memory was scarce, cycles were expensive, reliability was uncertain. Applications were chosen to fit the hardware.
Phase 2: Software limits (Chapter 4). As hardware improved, programming became the bottleneck. Applications were constrained by what could be efficiently coded.
Phase 3: Economics & applications (Chapter 5). By the late 1980s, hardware was cheap enough & software powerful enough that the question became: what should computers do? Economics & organizational capacity determined which applications got built.
These phase transitions matter: each phase required entirely different skills from practitioners. A brilliant hardware engineer from Phase 1 who never updated their mental model became useless by Phase 3.
Earliest Applications
Computing began with astronomical calculation, then 'number crunching' in physics & engineering. Raymond Lull (1235–1315), a Spanish theologian, built a logic machine — the first application of computing to non-numerical reasoning. Jonathan Swift satirized it in Gulliver's Travels (in the Voyage to Laputa). Hamming traced this line from Lull through symbolic manipulation to what would become machine learning.
The S-Curve of Technology Adoption
Every major technology follows a characteristic trajectory: slow initial adoption, rapid acceleration, saturation. Hamming named this the S-curve pattern.
Phase 1 of any technology: heroic demonstration. A small number of enthusiasts demonstrate that the technology works. Progress depends on individual brilliance & tolerance for unreliability.
Phase 2: rapid adoption. The technology becomes reliable enough for general use. Infrastructure builds around it. Standards emerge. The limiting factor shifts from technical to organizational.
Phase 3: saturation. The technology reaches full penetration of its addressable market. Further improvement yields diminishing returns. New S-curves begin for successor technologies.
For computing: Phase 1 = ENIAC era (1940s–1950s), Phase 2 = mainframe commercialization (1960s–1970s), Phase 3 = personal computing approaching saturation (1980s–1990s). Hamming was writing during the transition from Phase 2 to Phase 3 for mainframes, while personal computing was still in its Phase 2.
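The three phases map naturally onto a logistic curve; here is a minimal sketch (the parameter values and time units are illustrative, not Hamming's):

```python
import math

def s_curve(t, k=1.0, t0=0.0, saturation=1.0):
    """Logistic adoption curve: slow start, rapid middle, flat saturation."""
    return saturation / (1.0 + math.exp(-k * (t - t0)))

# Three points on the same curve, in arbitrary time units:
early  = s_curve(-4.0)  # Phase 1: heroic demonstration, adoption near zero
middle = s_curve(0.0)   # Phase 2: rapid adoption, the inflection point
late   = s_curve(4.0)   # Phase 3: saturation, diminishing returns
```

The inflection point is where the limiting factor shifts from technical to organizational: adoption is growing fastest, and the curve's shape, not individual heroics, drives progress.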
The equivalent product insight (first stated in Chapter 2) applies directly here: at Phase 2, successful computerization produces an equivalent job, not the same job. Organizations that tried to computerize existing workflows without redesigning them often failed or underperformed.
Locating Yourself on the S-Curve
Hamming's S-curve insight has a practical implication: the skills & strategies that succeed in Phase 1 (heroic, experimental, high-failure tolerance) differ from those needed in Phase 2 (reliable delivery, standards compliance, organizational integration) and Phase 3 (optimization, cost reduction, platform consolidation).
When Shared Data Does Not Work
Hamming told a story from his time conducting a high-level audit of Boeing's computer center. Boeing's management believed they had solved collaborative design: all engineers would write their current design state to a shared tape. Everyone would read from this single source of truth. Coordination problems would disappear.
It did not work.
The reason: when a team conducts an optimization study (varying, say, wing area & profile to minimize drag), they need a fixed baseline to measure changes against. If the shared tape updates continuously with changes from other teams, an improvement a team measures might actually reflect someone else's change inserted between their iterations — not their own design decision.
The solution teams adopted in practice: each group, when beginning an optimization study, made a snapshot copy of the current tape. They used that frozen copy throughout their study, ignoring updates. Only when satisfied with their new design did they write back — then reconciling with everyone else's changes.
Hamming's conclusion: you cannot use a continuously changing database for an optimization study. The optimization requires a stable state space; a mutable shared state introduces phantom correlations.
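The snapshot discipline the Boeing teams adopted can be sketched in a few lines (the toy drag function, field names, and parameter range are mine, purely illustrative):

```python
import copy

def drag(design):
    # Toy stand-in for a drag computation; the real model is irrelevant here.
    return (design["wing_area"] - 50.0) ** 2 + design["profile"] ** 2

# Shared "tape", continuously updated by other teams.
shared_tape = {"wing_area": 40.0, "profile": 3.0}

# 1. Freeze a baseline before the optimization study begins.
baseline = copy.deepcopy(shared_tape)

# 2. Vary only the parameter under study against the frozen baseline,
#    so every measured change reflects this team's decision alone.
best_area = min(range(30, 71), key=lambda a: drag({**baseline, "wing_area": a}))

# 3. Write back only when the study is finished, then reconcile.
shared_tape.update({**baseline, "wing_area": best_area})
```

Skipping step 1 and reading `shared_tape` inside the loop is exactly the phantom-correlation failure: a cost change between iterations may come from another team's write, not from the wing area under study.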
Data Bases
Computers were promoted as the solution to organizational data problems. Hamming was skeptical. He cited airline reservation systems as genuinely successful (the coordination problem is real, the data model is simple, & consistency is strictly required). But management information systems that promised to tell managers 'the current state of the company in real time' consistently underdelivered: the data models were too complex, data quality too poor, & interpretation too ambiguous.
Stable Baseline vs Live Data
The Boeing failure illustrates a general principle Hamming implied: optimization requires a stable cost function evaluated on a fixed state space. A shared mutable state violates the fixed-state-space requirement.
This principle extends beyond software. In any optimization process — business strategy, experimental design, model training — isolating the variable under study requires controlling all others.
Pattern Recognition as the Next Frontier
By 1993, Hamming identified pattern recognition as the major next challenge for computing. He distinguished two types:
Classical pattern recognition: comparing an input to a stored template. Face detection, OCR (optical character recognition), signature verification. These admit algorithmic solutions once the template set is defined.
Genuine recognition: a child recognizes 'chair' across thousands of different shapes, materials, sizes, & orientations, having never seen most of them before. No explicit template covers the generalization. Hamming treated this as an open problem — the gap between classical pattern matching & genuine recognition was not a matter of more data or faster hardware. It required different foundations.
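The classical, template-based case admits a direct sketch (the 3×3 glyphs and threshold below are hypothetical); note that nothing in it generalizes beyond the stored templates, which is exactly the gap Hamming points at:

```python
def match_score(image, template):
    """Fraction of pixels on which a binary image agrees with a template."""
    matches = sum(a == b for a, b in zip(image, template))
    return matches / len(template)

def classify(image, templates, threshold=0.8):
    """Return the best-matching template's label, or None if below threshold."""
    label, score = max(
        ((name, match_score(image, t)) for name, t in templates.items()),
        key=lambda pair: pair[1],
    )
    return label if score >= threshold else None

# Toy 3x3 binary glyphs, flattened to 9 pixels.
templates = {
    "T": [1, 1, 1, 0, 1, 0, 0, 1, 0],
    "L": [1, 0, 0, 1, 0, 0, 1, 1, 1],
}
noisy_T = [1, 1, 1, 0, 1, 0, 0, 0, 0]  # a "T" with one pixel flipped
result = classify(noisy_T, templates)
```

The algorithm works once the template set is defined; a chair seen at a new angle simply falls below threshold, because no explicit template covers the generalization.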
He framed this in terms of the expert systems failure: researchers thought they could extract decision rules from experts & encode them in programs. Expert systems worked in narrow domains but failed in complex ones, partly because human experts use patterns they cannot articulate. The unconscious pattern library built over years of practice cannot be extracted through interviews.
Hamming's prediction (1993): genuine pattern recognition would require fundamentally different computational approaches. He gestured toward neural networks but was cautious — not convinced the then-current neural networks would close the gap.
Giving the Same Talk for 30 Years
Hamming described a practice that gave him more return than almost anything else in his professional life: giving the same talk repeatedly.
He was invited to speak at IBM customer training events in roughly 1960. He chose to give a talk on The History of Computing to the Year 2000 — a topic he was genuinely uncertain about, which forced him to develop actual views. He gave variants of that talk two or three times per year for 30 years.
The benefits he identified:
Staying current: giving the same talk repeatedly forced him to update it regularly. He could not give a stale talk without embarrassing himself in front of audiences who followed the field.
Trend recognition: the update process forced him to look for trends, not just events. What changed in the past year, and in what direction? Repeated update required a model of the field, not just a catalog of facts.
Public speaking skill: practice reduced fear & improved delivery. He stopped being frightened of giving talks; he became a polished speaker through repetition rather than talent.
Network: a consistent topic built a reputation. People associated him with computing trends. Invitations multiplied.
His observation: he could have acquired this practice through luck — but he made the luck by actively seeking speaking opportunities, then developing the discipline to use them systematically.
Deliberate Practice & Career Capital
Hamming's 30-year talk was an instance of deliberate practice applied to intellectual work: a systematic, repeated exercise with feedback cycles that built compounding skill over time.
The structure: (1) commit to a topic at the edge of your knowledge; (2) give a talk, which forces you to know it; (3) receive feedback (audience response, questions you could not answer); (4) update the talk; (5) repeat.
Each cycle adds to a model. Each update forces contact with new data. Each audience question reveals a gap. Over 30 years, the model becomes deep.
Connecting Hardware, Software & Applications
Chapters 3, 4, & 5 form a progression. Hamming built the argument across three lectures:
Chapter 3 (Hardware): physical limits constrain what machines can do. Three laws — molecular size, speed of light, heat — set ceilings that no engineering can remove.
Chapter 4 (Software): human limits constrain what programs can do. Languages designed for logical elegance fail; languages designed for human psychology survive. Abstraction layers accumulate, each solving the previous layer's pain.
Chapter 5 (Applications): economic & organizational limits constrain what gets built. Technology follows S-curves. Shared mutable state breaks optimization. Pattern recognition remains an open challenge.
The unifying theme: limits shift. The practitioner who updates their model of what the current binding constraint is — and positions their skills accordingly — consistently outperforms one who optimizes for the constraints of yesterday.
Hamming's career lesson from the 30-year talk: giving the same talk repeatedly forced him to understand trends. The mechanism was not the talk itself but the preparation cycle: what changed, in what direction, and why? Repeated preparation built a model that simple reading could not.
What Is the Current Binding Constraint?
In Hamming's framework, each era has a binding constraint: the limit that, if removed, would most accelerate progress. In the 1940s: hardware speed. In the 1970s: software capability. In the 1990s: economics & organizational capacity.