When you visit the History of Computer Chess exhibit at the Computer History Museum in Mountain View, California, the first machine you see is "The Turk."

In 1770, a Hungarian engineer and diplomat named Wolfgang von Kempelen presented a remarkable invention to the court of Maria Theresa, ruler of Hungary and Austria. It consisted of a mechanical figure dressed in (what Europeans saw as) Oriental garb, presiding over a cabinet upon which a chess board sat. Full of gears ostentatiously placed in a front side drawer, The Turk was cranked up by hand, after which an opponent could sit down and play a game against the dummy.

"Even among the skeptics who insisted it was a trick, there was disagreement about how the automaton worked, leading to a series of claims and counterclaims," writes author Tom Standage. "Did it rely on mechanical trickery, magnetism, or sleight of hand? Was there a dwarf, or a small child, or a legless man hidden inside it?"

Well, all of the above—or below, actually. In the rear bottom interior of the box sat a flesh-and-blood operative (by necessity a small one) who followed the human contender's moves from below and maneuvered The Turk's right hand across the board. Nonetheless, the machine became "the most famous automaton in history," Standage notes, commented on by Charles Babbage, Edgar Allan Poe, Benjamin Franklin, and Napoleon Bonaparte.

More importantly, The Turk whetted the West's appetite for real devices that could do such things. Over two centuries later, that ambition culminated in Deep Blue—the IBM computer that bested world chess champion Garry Kasparov in 1997.

But what's most fascinating about "Mastering the Game," the Computer History Museum's computer chess exhibit, is that it frames the rise of automated chess play as a debate between two philosophies of computing. One emphasized the "brute force" approach, exploiting the algorithmic power of the ever more powerful processors available to programmers after the Second World War. The other foregrounded teaching chess computers to select strategies and even to learn from experience—in other words, to play more like humans.

"Make a plan"

The Second World War saw breathtaking innovation in mechanical calculation engines both in Britain and the United States. Code-breaking machines like Britain's Colossus and trajectory calculators like Harvard's Mark I gave theoreticians a new sense of the possible.

In 1951, Alan Turing wrote a programming manual for the Ferranti Mark I computer. We know that computer chess was already on the famous cryptographer's mind from the first principle he offered to budding programmers. "Make a plan," Turing counseled. "This rather boring piece of advice is often offered in identical words to the beginner in chess."

True to his hint, Turing subsequently designed what is regarded as the first program to play the game. There was no machine capable of interpreting his code, however, so Turing went through the program's algorithms with his friend Alick Glennie.

Turing played the computer's role, moving in accordance with his algorithms. It was a pretty good game, actually, lasting for 30 moves with no serious blunders until Glennie pinned down Turing-the-computer's queen. At that point, his algorithm (or perhaps Turing) resigned.

Meanwhile, the American mathematician Claude Shannon wrote a 1950 paper that foresaw the prospects for automated chess. "The thesis we will develop is that modern general purpose computers can be used to play a tolerably good game of chess by the use of a suitable computing routine or 'program,'" Shannon wrote.

He outlined two possible strategies for these programs. His "Type A" program exhaustively explored all possibilities three moves ahead via ten subprograms, named T0 through T9.

T0 - Makes move (a, b, c) in position P to obtain the resulting position.

T1 - Makes a list of the possible moves of a pawn at square (x, y) in position P.

T2, ..., T6 - Similarly for other types of pieces: knight, bishop, rook, queen and king.

T7 - Makes list of all possible moves in a given position.

T8 - Calculates the evaluating function f(P) for a given position P.

T9 - Master program; performs maximizing and minimizing calculation to determine proper move.

This strategy would later be dubbed "minimax lookahead." The problem with the technique, Shannon observed, was that it would produce a very slow chess machine, one that completed its half of a 40-move game in about... ten hours. It also rested on the misperception that human chess masters actually take every possible variation into consideration when playing.
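Shannon's Type A scheme maps naturally onto a recursive minimax routine. The sketch below is a toy illustration, not Shannon's code: the parameters `legal_moves`, `apply_move`, and `evaluate` are hypothetical stand-ins for his subprograms T7 (list all moves), T0 (make a move), and T8 (the evaluation function f(P)), while `minimax` itself plays the role of the master program T9.

```python
def minimax(position, depth, maximizing, legal_moves, apply_move, evaluate):
    """Exhaustive fixed-depth lookahead in the spirit of Shannon's Type A.

    legal_moves(position) -> list of moves        (Shannon's T7)
    apply_move(position, move) -> new position    (Shannon's T0)
    evaluate(position) -> numeric score f(P)      (Shannon's T8)
    """
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)  # static evaluation at the search horizon
    if maximizing:
        # Our move: pick the line with the highest guaranteed score.
        return max(minimax(apply_move(position, m), depth - 1, False,
                           legal_moves, apply_move, evaluate)
                   for m in moves)
    # Opponent's move: assume they pick the line worst for us.
    return min(minimax(apply_move(position, m), depth - 1, True,
                       legal_moves, apply_move, evaluate)
               for m in moves)
```

Because every branch is explored to the full depth, the work grows exponentially with the lookahead—exactly the ten-hours-per-game problem Shannon identified.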

The paper quoted the famous observation of chess expert Reuben Fine. "Very often people have the idea that masters foresee everything or nearly everything," Fine wrote in 1942. "All this is, of course, pure fantasy. The best course to follow is to note the major consequences for two moves, but try to work out forced variations as far as they go."

That last comment informed Shannon's "Type B" strategy for a chess program:

(1) Examine forceful variations out as far as possible and evaluate only at reasonable positions, where some quasi-stability has been established. (2) Select the variations to be explored by some process so that the machine does not waste its time in totally pointless variations.

Practically all subsequent chess programs would follow either a "Type A" or "Type B" system, according to the Computer History Museum's exhibition literature. But the best early programs moved in a Type B direction.
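Shannon's two Type B rules can be sketched as a selective search with a quiescence extension. Everything below is an illustrative assumption rather than Shannon's own design: `plausible_moves` stands in for rule (2)'s filter that keeps only a few candidate moves, and `is_quiet` for rule (1)'s requirement that evaluation happen only at quasi-stable positions—"noisy" lines (forceful variations) are searched past the nominal depth.

```python
def type_b_search(position, depth, maximizing, game):
    """Selective lookahead in the spirit of Shannon's Type B rules.

    `game` supplies four hypothetical callbacks:
      plausible_moves(p)  -> short list of candidate moves (rule 2)
      is_quiet(p)         -> True when the position is quasi-stable (rule 1)
      evaluate(p)         -> numeric score for a position
      apply(p, m)         -> resulting position after move m
    """
    moves = game.plausible_moves(position)
    if not moves or (depth <= 0 and game.is_quiet(position)):
        # Rule (1): only evaluate once the tactics have settled down.
        return game.evaluate(position)
    # Note: a noisy position is searched even when depth has run out.
    best = max if maximizing else min
    return best(type_b_search(game.apply(position, m), depth - 1,
                              not maximizing, game)
                for m in moves)
```

Compared with Type A, the cost now depends on how aggressively `plausible_moves` prunes, not on the raw branching factor of chess.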

The Tree

The earliest post-World War II chess computers included a truncated game player developed by the German-born programmer Dietrich Prinz. It worked out "mate-in-two" puzzles; that is, the device located the best way to checkmate an opponent in two moves.

Not much of a breakthrough—but then in July of 1958, Chess Review announced the release of IBM mathematician Alex Bernstein's program for the IBM 704.

Bernstein's program clearly followed Shannon's "Type B" route. "In order to avoid examining the consequences of all possible moves," he explained in the article, "a set of decision routines were written which select a small number (not greater than seven) of strategically good moves."

The author called this array "The Tree." The computer posited each of these "seven plausible moves," then calculated plausible replies based on eight questions. First: was the king in check? Second: could material be lost, gained, or exchanged? Third: was castling possible? And so on.

"Were the machine to have a larger memory, more questions could be asked," Bernstein conceded. The machine could be bested by advanced beginners, but Shannon's Type B strategy received further support that year from Allen Newell and Herbert Simon at Carnegie Mellon University and Cliff Shaw at the RAND Corporation. They contended that the progress of computer chess ran parallel to the fields of artificial intelligence and heuristics—problem solving based on experience. "Alpha-beta pruning" was their term for cutting off branches of the search tree that could not affect the final choice of move, and they were skeptical of exhaustive search:

The hope is still periodically ignited in some human breasts that a computer can be found that is fast enough, and that can be programmed cleverly enough, to play good chess by brute-force search. There is nothing known in theory about the game of chess that rules out this possibility. Empirical studies on the management of search in sizable trees with only modest results make this a much less promising direction than it was when chess was first chosen as an appropriate task for artificial intelligence. We must regard this as one of the important empirical findings of research with chess programs.

In the early 1960s, four MIT students began working on a chess program that, by the time they graduated, could beat amateur players. It very consciously adopted the alpha-beta heuristic approach. Another MIT programmer, Richard Greenblatt, added fifty more heuristics to his own program, which ran on a Digital Equipment Corporation PDP-6 machine. In 1967, Greenblatt's "Mac Hack VI" became the first program to compete against a human player in a chess tournament, earning a rating comparable to that of a competent high school player.
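Alpha-beta pruning returns exactly the same answer as plain minimax while skipping branches that cannot change it. Here is a compact sketch using the same kind of hypothetical toy-game callbacks as a plain minimax routine; none of these names come from the historical programs.

```python
def alphabeta(position, depth, alpha, beta, maximizing,
              legal_moves, apply_move, evaluate):
    """Minimax with alpha-beta cutoffs over a generic game interface."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)
    if maximizing:
        value = float("-inf")
        for m in moves:
            value = max(value, alphabeta(apply_move(position, m), depth - 1,
                                         alpha, beta, False,
                                         legal_moves, apply_move, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # opponent would never allow this line: prune siblings
        return value
    value = float("inf")
    for m in moves:
        value = min(value, alphabeta(apply_move(position, m), depth - 1,
                                     alpha, beta, True,
                                     legal_moves, apply_move, evaluate))
        beta = min(beta, value)
        if alpha >= beta:
            break  # we already have a better option elsewhere: prune
    return value
```

With good move ordering the pruning roughly doubles the reachable search depth for the same effort, which is why practically every later chess engine, brute-force or selective, adopted it.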

Deep Blue was coming... but beating a grand master was a slow process.