Framing the Question
Hamming opens with a trap. 'Can machines think?' sounds like a simple yes-or-no question. He argues it conceals three separate problems: what counts as a machine, what counts as thinking, & whether the question even has a meaningful answer.
He spent a year trying to identify the smallest program he would believe could think. After a year of failure, he concluded he had been asking the wrong question. Maybe thinking is not a yes-or-no property. Maybe it admits of degree.
His framing shift: instead of 'can machines think,' ask 'to what extent can machines relieve life's intellectual burdens?' That question points directly at usefulness rather than at metaphysics.
The Danger of Certainty
Hamming warns that both extremes carry professional risk. Believe machines cannot think: you will underuse them & fall behind. Believe machines obviously can think: you will overestimate them & build systems that fail in ways you didn't anticipate.
Neither belief is safe. You must develop your own considered position.
The Turing Test
Alan Turing, in 1950, proposed a behavioral test: if a person at a teletype cannot distinguish a human from a suitably programmed machine, the machine 'thinks' — by definition.
Hamming respects the cleverness of the move. Turing avoids the unanswerable philosophical question about inner experience & replaces it with an observable behavior. That's good scientific instinct.
But Hamming has a structural objection: Turing's test violates the standard scientific method. Science starts with the simplest problems first, not the hardest. Turing's test is, as Hamming puts it, hard before easy. It evaluates full conversational intelligence before we understand simple learning or rule-following.
What Cannot Be Put Into Words
Hamming's central claim about the limits of machine intelligence: not all human knowledge can be translated into rules or instructions. Some knowledge resists formalization — not because we lack effort or intelligence, but because verbalization is not the right medium.
Expert systems in the 1980s tried to capture expertise as rule bases: IF symptom-A AND symptom-B THEN diagnosis-C. They worked in narrow, well-defined domains. They failed at the boundaries of those domains, which is exactly where the tacit knowledge of experienced practitioners matters most.
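The IF-THEN style can be made concrete in a few lines. A minimal sketch of a rule base & its cliff-edge failure mode — the symptoms and diagnoses here are invented for illustration, not drawn from any real medical system:

```python
# A toy rule base in the IF-THEN style the text describes.
# All symptoms and conclusions are invented for illustration.
RULES = [
    ({"fever", "stiff_neck"}, "suspect meningitis"),
    ({"fever", "cough"}, "suspect flu"),
]

def diagnose(symptoms):
    for conditions, conclusion in RULES:
        if conditions <= symptoms:   # IF symptom-A AND symptom-B ...
            return conclusion        # ... THEN diagnosis-C
    # Outside rule coverage the system has nothing to say: the
    # cliff-edge failure, where a human expert would degrade gracefully.
    return None

print(diagnose({"fever", "cough"}))   # suspect flu
print(diagnose({"fatigue", "rash"}))  # None -- no rule fires
```

Inside the rule coverage the behavior looks expert; one step outside, the answer is not a worse diagnosis but no diagnosis at all.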
The chemist's intuition about which reaction will proceed, the experienced welder's feel for the right arc gap, the surgeon's sense of when tissue looks wrong — these are not simply rules waiting to be articulated. Hamming argues some may be permanently uncodifiable.
Why This Matters for Automation
Rule-based systems do very well at the center of well-defined domains. They fail at the boundary. Experienced practitioners live at the boundary. This asymmetry shapes what automation can & cannot displace.
Identifying Tacit Knowledge
Michael Polanyi's phrase for this: 'We know more than we can tell.' He used the example of face recognition — most people recognize thousands of faces instantly but cannot describe the rules they use.
Hamming connects this to expert systems: the deeper problem isn't that we haven't yet written the rules; it's that for some domains, no complete rule set exists.
4×4×4 Tic-Tac-Toe
Hamming uses 4×4×4 three-dimensional tic-tac-toe as his worked example. The board has 64 squares & 76 winning lines. Two-dimensional tic-tac-toe has a known drawing strategy; it demonstrates no interesting intelligence. The 4×4×4 version is hard enough to require genuine heuristics.
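The counts are easy to verify directly: 48 axis-parallel lines, 24 plane diagonals, & 4 space diagonals give the 76. A sketch in Python (the coordinate scheme is my own choice, not Hamming's):

```python
from collections import Counter
from itertools import product

N = 4
RNG = range(N)

def winning_lines():
    """All winning lines of the 4x4x4 board, as lists of (x, y, z) cells."""
    lines = []
    # Axis-parallel: fix two coordinates, vary the third (3 * 16 = 48 lines).
    for a, b in product(RNG, RNG):
        lines.append([(i, a, b) for i in RNG])
        lines.append([(a, i, b) for i in RNG])
        lines.append([(a, b, i) for i in RNG])
    # Plane diagonals: 2 per slice, 4 slices per axis (3 * 4 * 2 = 24 lines).
    for a in RNG:
        lines.append([(a, i, i) for i in RNG])
        lines.append([(a, i, N - 1 - i) for i in RNG])
        lines.append([(i, a, i) for i in RNG])
        lines.append([(i, a, N - 1 - i) for i in RNG])
        lines.append([(i, i, a) for i in RNG])
        lines.append([(i, N - 1 - i, a) for i in RNG])
    # Space diagonals (4 lines).
    lines.append([(i, i, i) for i in RNG])
    lines.append([(i, i, N - 1 - i) for i in RNG])
    lines.append([(i, N - 1 - i, i) for i in RNG])
    lines.append([(N - 1 - i, i, i) for i in RNG])
    return lines

LINES = winning_lines()
hits = Counter(cell for line in LINES for cell in line)
print(len(LINES))                                         # 76
print(hits[(0, 0, 0)], hits[(1, 1, 1)], hits[(1, 0, 0)])  # 7 7 4
```

The line-per-cell tally also confirms the hot-spot claim: corners & the eight interior centers each lie on 7 winning lines, while a cell like (1, 0, 0) lies on only 4.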
The Program's Structure
Step 1: enumerate legal moves.
Step 2: favor 'hot spots' — corners & centers have more winning lines through them than edge or face-center squares. Use the cube's center-corner duality: there exists an inversion of the cube sending corners to centers & centers to corners while preserving all 76 lines.
Step 3: play randomly among roughly equivalent moves. Deterministic play lets a patient opponent map your strategy & find exploits. Randomness prevents systematic exploitation.
Step 4: apply sequential rules in priority order.
The sequential rules, in priority order: (1) win if a winning move exists; (2) block opponent's winning move; (3) take a fork if available; (4) block opponent's fork; (5) make forcing moves; (6) fall back to heuristics.
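A runnable sketch of that priority structure. For brevity it uses ordinary 3×3 lines & implements only rules (1), (2), and (6) — the fork and forcing rules would slot in between; the helper names are mine, not Hamming's:

```python
import random
from collections import Counter

# For this demo, the 8 winning lines of 3x3 tic-tac-toe; the priority
# ordering is identical on the 4x4x4 board, only the line set changes.
LINES = ([[(r, c) for c in range(3)] for r in range(3)]                   # rows
         + [[(r, c) for r in range(3)] for c in range(3)]                 # columns
         + [[(i, i) for i in range(3)], [(i, 2 - i) for i in range(3)]])  # diagonals
LINE_COUNT = Counter(cell for line in LINES for cell in line)

def choose_move(board, me, opp):
    """board maps cell -> mark; rules (3)-(5) are omitted from this sketch."""
    empties = [c for c in LINE_COUNT if c not in board]

    def completes(player, cell):
        # Placing `player` at `cell` finishes a line iff some line through
        # `cell` already has that player's mark on every other square.
        return any(all(board.get(c) == player for c in line if c != cell)
                   for line in LINES if cell in line)

    for player in (me, opp):                 # rule 1: win; rule 2: block
        moves = [c for c in empties if completes(player, c)]
        if moves:
            return random.choice(moves)

    # Rule 6: heuristic fallback -- prefer cells on the most winning lines,
    # breaking ties randomly so a patient opponent cannot map the strategy.
    best = max(LINE_COUNT[c] for c in empties)
    return random.choice([c for c in empties if LINE_COUNT[c] == best])

# X has a win at (0, 2); the winning move outranks everything else.
board = {(0, 0): 'X', (0, 1): 'X', (1, 0): 'O', (1, 1): 'O'}
print(choose_move(board, 'X', 'O'))  # (0, 2)
print(choose_move({}, 'X', 'O'))     # (1, 1) -- the hot spot on an empty board
```

Note where randomness enters: only among moves the rules rank as equivalent, so the deterministic priorities stay intact while the overall play remains unpredictable.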
Notice the structure: deterministic rules cover the clearly-defined situations. Heuristics handle everything else. The line between them is exactly where tacit knowledge begins.
Samuel's Checker-Playing Program
Arthur Samuel at IBM wrote a checker-playing program that became famous for beating a state champion. What made it notable: it used a learning mechanism. Samuel parameterized the evaluation function (weighting board control, king advantage, mobility, piece pinning, etc.), then let two copies of the program play each other with slightly different parameter settings. The better-performing version survived.
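The mechanism can be sketched abstractly. In this toy stand-in (my construction, not Samuel's code), the 'match' between two weight settings is decided by distance to a hypothetical ideal evaluation vector; in Samuel's program, an actual game of checkers between the two versions decided it:

```python
import random

random.seed(0)

# Hypothetical ideal weights (board control, kings, mobility, pinning).
# Invented for illustration -- stands in for 'what actually wins at checkers'.
TRUE = [0.9, 0.3, 0.5, 0.1]

def match(alpha, beta):
    """True if the challenger beta 'wins' -- here, scores closer to TRUE."""
    def err(w):
        return sum((a - b) ** 2 for a, b in zip(w, TRUE))
    return err(beta) < err(alpha)

weights = [0.5, 0.5, 0.5, 0.5]      # initial evaluation-function parameters
for _ in range(2000):
    challenger = [w + random.gauss(0, 0.05) for w in weights]
    if match(weights, challenger):  # the better-performing version survives
        weights = challenger

print([round(w, 2) for w in weights])  # drifts toward TRUE
```

This is exactly parameter search: the rules of evaluation never change, only the weights, yet the surviving program plays measurably better than the one the programmer first wrote down.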
This is parameter search, not rule discovery — but it constitutes learning in a meaningful sense. The program improved without the programmer explicitly writing better rules.
Hamming asks: did Samuel's program show originality when it made surprising moves? You cannot prove it did, but you equally cannot prove you have originality in any sense that excludes the checker program.
Expert Systems & Their Limits
Expert systems of the 1970s-80s represented the most systematic attempt to formalize professional knowledge as machine-executable rules. Medical diagnosis, geological survey, financial analysis, circuit design — each domain attracted its own rule-based system.
Performance in well-defined sub-problems was often impressive. But the systems shared a failure mode: they worked until the problem stepped outside the rule coverage, then failed completely. Human experts degrade gracefully; rule systems fall off a cliff.
The root cause Hamming identifies: the people building expert systems had no systematic way to know what was missing. The experts could articulate their rules — but experts use their uncodifiable judgment to decide which rules apply in a given situation. That meta-level judgment never made it into the system.
The Right Question for Your Career
The practically useful question isn't 'can machines think?' It is: for any task in your field, does the critical difficulty live at the rule-covered center, or at the boundary where judgment & tacit knowledge operate? Automation reliably handles the former. It struggles with the latter. Your career strategy follows from knowing the difference.