“A Brief History of Computer Chess” … the machines are winning.

Chess is an old game. It likely came to the Western world from India through Persia (Iran) in the 6th century A.D. It's a game, too, of royal origin: a test of someone's intelligence and acumen on the battlefield, yet today it's everyone's game. For one, chess doesn't require expensive equipment to play. And the game has another virtue, which any player can explain: chess is fun.

In the Western world today, chess ranks among the most popular board games, and is played seemingly everywhere, by anyone: in urban parks, living rooms, schools, and well-publicized formal competitions. Children can learn to play chess at an early age, and by high school many young players reach a level of competence that approaches master-level play. Recently, some schools have begun incorporating chess play into their curricula. Chess may have a royal, and martial, origin, but in the modern world it's a popular game of wits, enjoyed by many.

Technically, chess is a two-player board game comprising a formal system: a system of discrete tokens and rules for manipulating them. (In chess, there are six token types: King, Queen, Rook, Knight, Bishop, and Pawn. Each player begins the game with sixteen tokens selected from these types.) In game-theory parlance, chess is a zero-sum, perfect information game. Zero-sum means that player A's successful move is to player B's detriment (a good move for A is a bad consequence for B), and “perfect information” means that the entire state of the game is visible to both players at every step of play.

Chess also has what's called a “Markov” property, meaning that prior moves are unnecessary to understand how to play the next move. In principle, each discrete arrangement on the chess board can be viewed sui generis, and the next move can be determined by inspection of the current arrangement of game pieces. In computer science terms, chess is combinatorial, too: each successive move generates a (typically large) combination of possibilities. And, further, chess is a bounded branch problem in terms of search: computer scientists view chess as a large set of branching possibilities, bounded by poor moves on the “bottom” and good moves on the “top.” The boundedness of chess lends itself to shortcuts, or “pruning,” where additional possibilities can be ignored once one determines that a particular path along a branch is already poor, relative to another branch.

For all the game-theory analysis, though, human players see chess in terms of tactics and strategy: thought. Winning at chess requires some degree of intellectual skill. One must “see” what is happening---what the opponent's moves signify in terms of a specific tactic, often part of a long-term strategy for victory. Still, the game at root remains entirely determined by the position of its pieces and the rules telling each player how they may be moved next. So while people play chess as a game of human thought, it can also be analyzed by computer programs as a formal system---a system with rules, but not requiring any “meaning” or additional interpretation. This formal quality of chess invites discussions about automation, and the question of whether a machine could intelligently manipulate chess pieces on a board according to the rules. This question---whether a machine could really play chess---arose early in chess's long history, at least in the Western world. Our story of computer chess begins centuries ago, then, but with a certain irony. It's to the very beginnings of the quest to automate the formal system of chess on a machine that we turn next.

The Mechanical Turk

Wolfgang von Kempelen was an early entrepreneur with an ethical dilemma. While modern entrepreneurs are sometimes accused of over-hyping their fledgling technologies, they are seldom accused of outright trickery. Yet von Kempelen may have intended his mechanical chess playing “Turk” as a joke, or an amusement for royalty---his motives aren't clear. A diplomat and inventor, he created what was arguably the world's first chess playing machine---a large box upon which a chess board sat, the pieces moving in response to a human opponent, seemingly driven by the mysterious machinery hidden in the box beneath the board. This was the famous Mechanical Turk---the world's first self-proclaimed chess playing machine. The year was 1770.

Kempelen showcased his Turk to the Empress Maria Theresa of Austria-Hungary. Audiences were mystified by the seemingly intelligent play of the mechanical box, and questions of whether the Turk was truly “thinking” boosted the fame of Kempelen and his Turk, which quickly became a sensation in Europe. Napoleon Bonaparte, Benjamin Franklin, and the great mathematician (and one of the progenitors of modern computation) Charles Babbage all received viewings of the Turk in action.

Von Kempelen's fake was eventually exposed, but the quest to build a chess playing machine had begun. And more generally, the fascination with building intelligent machines ramped up in the next century. In what came to be called automata, or literally self-guided machines, inventors of the 19th century built machines to write letters, sing songs, and even play musical instruments.

The construction and analysis of automata in the 1800s fit naturally with a new worldview that had swept the Western world on the heels of a series of striking scientific achievements. First Copernicus unseated Ptolemaic astronomy---a geocentric or Earth-centered system that had survived over a thousand years---by showing how the Earth revolved around the Sun. Then Galileo further severed the modern mind from the old, Aristotelian-inspired order by explaining physics on the Earth in terms of mathematics devoid of purpose. And it was Newton and his genius that gave us, finally, the full Enlightenment picture of the world, and our place in it. “God said let Newton be,” as the poet Alexander Pope put it, capturing the ethos of the Enlightenment. The world was a machine, analyzable by reason.

Automata fit this Enlightenment picture, where nothing was really special about humans or the human mind---all was ultimately mechanics, forces in motion. Increasingly, it seemed that whatever people could do with their minds, machines or automata could do with pure mechanism, properly designed. And chess, as we've noted, lent itself well to this quest for automation, by presenting to the designer a well understood, discrete system governed by known and specifiable rules. Early on, inventors wanted to “crack” the secret of playing chess with the logic of machines.

Yet automated chess, perhaps ironically, wouldn't emerge from Enlightenment thinking at all. Computer chess finally emerged in an entirely different world, made possible by yet other striking discoveries: Einstein's theories of relativity, and his explanation of the quanta (leading to quantum mechanics), changed the scientific world yet again in the early 20th century. It was in this milieu, not the mechanism of the Enlightenment with its metaphor of the world as a clockwork machine, that the first true chess playing computers were built.

The electronic computer gave us a new metaphor: the machine as a brain (not a clock). And it was in this context that computer pioneers like Alan Turing, John Von Neumann, Claude Shannon and others finally succeeded in building a real mechanical “Turk.” This time, there was no person inside at all, but rather the mind of the electronic computer. It's to the electronic computer in the 20th century that we now turn.

As the story goes, Turing first explained the theory of computation using sketches of simple machines, called “Turing Machines,” and from this blueprint Turing and others in England, and Von Neumann and others in the United States, then built the original electronic computers. Turing's original impetus for designing the computer was winning at war (much like the impetus for chess): the Nazi war machine used a seemingly unbreakable code, known as Enigma, to communicate the locations of targets for their U-boats, and England was losing badly. Turing's job was to break the Enigma code; his method was to first build a calculating machine, and then have it break the code. And it worked.


The answer to the question of machine intelligence has always been tied to a machine's ability to play #chess.

Von Neumann, too, at the now-famous Institute for Advanced Study in Princeton, had helped develop an electronic computer (first the ENIAC, then the MANIAC) used for another purpose central to the war effort: analyzing nuclear explosions. It was in this context that Turing, Von Neumann, and Shannon posed an ancient question in a now modern guise, in what came to be called “Artificial Intelligence” in the coming decade: can a machine be made to think like a person? And the answer to the question---the question of machine intelligence---was from the start tied to the question of whether a machine could be made to play chess.

Turing began the investigation of chess playing computers with a system written out with paper and pencil, where he played the role of the machine. Later Shannon extended Turing's work in a 1949 paper, explaining his interest in chess: “Although of no practical importance, the question is of theoretical interest, and it is hoped that…this problem will act as a wedge in attacking other problems---of greater significance.” As became clear in later writing by the two computer pioneers, “greater significance” was no less than the quest to “build a brain,” as Turing had put it. The quest for Artificial Intelligence, then, began with the question of whether a computer could play chess. Could it?

In 1951, Turing's colleague, Dietrich Prinz, wrote the first actual, automated chess playing program. Prinz's program ran on the new Ferranti Mark I computer, housed at Manchester University, and though it couldn't play a complete game of chess due to memory and computational limitations, it successfully solved the “mate-in-two” problem: find the best move given that you are two moves from checkmate. Seven years later, an IBM researcher named Alex Bernstein wrote the first complete, fully automated chess playing program, running on the new IBM 704 mainframe.

Artificial Intelligence as a research program itself was officially launched in 1956, at the now-famous Dartmouth Conference; two years later, seemingly the first major success for the fledgling field of AI was already in the books. Electronic computers could be made to play chess. This was 1958.

Hardware, at this time, was a major limiting factor; yet the algorithm for playing chess was already known by the 1950s, though updates and improvements would follow in coming decades. John Von Neumann, the genius at Princeton's IAS (where Einstein also worked), recognized the immense value of the new electronic computers for analyzing nuclear bombs and weather, and for automating military operations like anti-aircraft gunning. He also anticipated the algorithm that would prove so successful in playing chess: the MiniMax algorithm.

MiniMax, so named because it seeks the move that maximizes one player's worst-case (minimum) outcome against an opponent working to minimize it, became the essential tool in the AI scientist's toolbox for building chess playing computers. By the end of the 1950s, the core MiniMax algorithm was augmented with heuristics---rules of thumb for playing chess---gleaned from examination of human players' tactics and strategies. Here, AI pioneers Allen Newell and Herbert Simon of Carnegie Mellon led the way. Their NSS (Newell, Shaw, and Simon) program combined the MiniMax optimization routine with a technique known as alpha-beta pruning, which speeds up search by ignoring branches that provably cannot affect the final choice. And NSS included a growing set of heuristics that enhanced its base search strategy, further improving the performance of the MiniMax algorithm.
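The core of MiniMax with alpha-beta pruning fits in a few lines. The sketch below runs on a hand-built toy tree rather than real chess positions---`children` and `evaluate` are stand-ins for move generation and a chess evaluation function, not part of any engine:

```python
# A minimal sketch of MiniMax with alpha-beta pruning on a toy game
# tree. A "state" is either a list of successor states or a leaf score.

def alphabeta(state, depth, alpha, beta, maximizing):
    """Return the MiniMax value of `state`, skipping branches that
    provably cannot affect the final choice."""
    if depth == 0 or not children(state):
        return evaluate(state)
    if maximizing:
        value = float("-inf")
        for child in children(state):
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:      # opponent will never allow this line
                break              # prune the remaining siblings
        return value
    else:
        value = float("inf")
        for child in children(state):
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

# Toy tree: three candidate moves, each answered by two replies.
TREE = [[3, 5], [6, 9], [1, 2]]

def children(state):
    return state if isinstance(state, list) else []

def evaluate(state):
    return state

# The maximizing player picks the branch whose worst case is best:
# min(3,5)=3, min(6,9)=6, min(1,2)=1 -> best guaranteed value is 6.
print(alphabeta(TREE, 2, float("-inf"), float("inf"), True))  # -> 6
```

On the toy tree, the cutoff test (`alpha >= beta`) lets the search abandon the final branch after seeing its first reply, exactly the shortcut that made deep chess search feasible.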

Full Chess Playing Computers

By the 1960s, AI had produced a program that could apparently do what required a fake---a hidden human player---before: it could play real, actual, automated chess against a human opponent. This era of AI was later described as “Cognitive Simulation,” because of the specific focus on developing algorithms that could simulate some narrow aspect of human cognition (like playing chess), rather than general intelligence. And these narrow successes at specified targets like game playing also led to a now well-known and notorious feature of AI: hype.

Indeed, Simon remarked as early as 1965 that “machines will be capable, within 20 years, of doing any work a man can do.” For AI generally, such prognostications proved premature. And while it would take nearly four decades for a computer to beat the best human player at chess, the quest to improve chess playing computers, unlike AI in general, showed slow but sure progress in the years ahead.

By 1962, a program designed by MIT students could beat amateur chess players. By 1967, MIT programmer Richard Greenblatt, who was himself an accomplished chess player, added a number of powerful heuristics to the earlier MIT systems and achieved the unprecedented score of 1400 at a chess playing competition. A score of 1400 was the level of a very good high school player, and was a significant milestone for chess playing programs. Computer chess was getting good.

Scoring Chess

Computer chess play, like human play, is rated with a scoring system called “Elo,” after the Hungarian-born American physics professor Arpad Elo. Elo's system has been widely adopted beyond chess, including in video games, American football, basketball, and other sports where opposing sides or players can be rated against one another.

Elo's system is based on the assumption that two players with equal ratings should, over the long run, win an equal number of games. It is thus an application of the ubiquitous “law of large numbers” known to statisticians: the observed frequency of some event converges to its expected or ideal value as independent trials are repeated. For instance, surprising results may at first appear in a standard heads-or-tails coin toss (say, five heads in a row), but over the long haul, we expect an unbiased coin to converge on fifty percent heads and fifty percent tails. This is the key to the Elo system as well.
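A quick simulation makes the convergence concrete (a minimal Python sketch; the seed is fixed only so the run is repeatable):

```python
# Law of large numbers: the observed frequency of heads drifts toward
# 0.5 as an unbiased coin is tossed more and more times.
import random

random.seed(42)  # fixed seed, so the run is repeatable

for n in (10, 1_000, 100_000):
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>7} tosses: {heads / n:.3f} heads")
```

Early runs can stray noticeably from 0.5, but by 100,000 tosses the frequency sits within about one percent of the expected value---the same long-run behavior the Elo system relies on.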

Now, what happens when one player is better than another? Here, we expect the better player to win more games over time (or right off the bat), and this is exactly what the Elo system attempts to capture. A player rated 100 points above an opponent is expected to score 64% against that opponent; 200 points pushes the expectation to 76%. This distribution is used to adjust each player's rating after a match or a sequence of matches. If the higher-rated player wins the game (as expected), that player earns a few additional points from the lower-rated player.
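The arithmetic behind these expectations is compact. The sketch below computes the expected score from a rating difference and applies the standard update rule; the K-factor of 32 is a conventional choice for illustration, not a figure from the text:

```python
# Elo expected score and rating update. The logistic curve gives the
# expected score for a player `diff` rating points above the opponent;
# the update rule then transfers points according to how the actual
# result compares with that expectation.

def expected_score(diff):
    """Expected score (win = 1, draw = 0.5) for the higher-rated side."""
    return 1 / (1 + 10 ** (-diff / 400))

def update(rating, opponent, actual, k=32):
    """New rating after one game with result `actual` (1, 0.5, or 0)."""
    return rating + k * (actual - expected_score(rating - opponent))

# 100 points of advantage -> about a 64% expected score;
# 200 points -> about 76%, matching the figures in the text.
print(round(expected_score(100), 2))  # -> 0.64
print(round(expected_score(200), 2))  # -> 0.76

# An upset: a 1500 player beats a 1700 player and gains many more
# points than the favorite would have earned for winning.
print(round(update(1500, 1700, 1.0), 1))  # -> 1524.3
```

Winning as the favorite moves a rating only a little; winning as the underdog moves it by nearly the full K-factor.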

On the other hand, if the lower-rated player wins against a supposedly superior opponent, then many more points are transferred from the higher-rated player to the lower-rated one. In cases of a draw, the lower-rated player also picks up a few points from the higher-rated player (a draw against a stronger opponent is itself a better-than-expected result). An Elo rating is thus established by playing chess against opponents; the highest ratings belong to the players who can beat everyone else, including other very good players. This is the essence of how chess is scored---including computer chess systems. (See Figures 1 and 2 for year-by-year Elo scores of computer systems.)

Figure 1 (source: Chess.com)



Figure 2 (source: praxtime.com)

Computer Chess Gets Good – The 1970s

In the 1970s, chess playing programs continued to improve with a combination of increasingly powerful hardware and better software. Effective heuristics were added, and innovative improvements to search, like “iterative deepening,” were also used to augment existing programs. Iterative deepening gradually increases the depth of the MiniMax search, rather than committing to a fixed depth in advance. The technique allowed chess programs to make the best use of the limited time available each turn while the game was underway.
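The idea can be sketched in a few lines of Python. Here `search_to_depth` is a stand-in for a real fixed-depth MiniMax search; the names and the time budget are illustrative, not taken from any particular engine:

```python
# Iterative deepening under a time budget: search depth 1, then 2, and
# so on, always keeping the best move found so far, and stop when the
# clock runs out.
import time

def search_to_depth(position, depth):
    # Placeholder: a real engine would run MiniMax/alpha-beta here.
    return f"best move at depth {depth}"

def iterative_deepening(position, time_budget_s, max_depth=64):
    best = None
    deadline = time.monotonic() + time_budget_s
    for depth in range(1, max_depth + 1):
        if time.monotonic() >= deadline:
            break                  # out of time: play the best so far
        best = search_to_depth(position, depth)
    return best

print(iterative_deepening("start position", time_budget_s=0.01))
```

Because each shallow search is cheap relative to the next depth, very little work is "wasted" by restarting, and the program always has a reasonable move in hand when its time expires.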

Hardware, too, benefitted from two developments. One, Moore's Law had increased hardware speed generally: every two years or so, computing power roughly doubled---an amazing result that seemed to justify much of the excitement and futuristic hype about AI. Second, researchers were beginning to design special purpose hardware optimized for searching the internal computer representation of the game of chess (known as a game tree). This latter development proved enormously useful, eventually making it possible for special purpose chess playing machines to compete with, and then out-perform, much more expensive and powerful mainframes and supercomputers. Researchers in the 1970s began narrowing their focus to design specific, special purpose chess playing systems using both customized software and hardware. And computer chess got even better.

Against human players, the new systems were beginning to threaten even master level players. In 1968, International Master David Levy famously bet AI researcher John McCarthy that no chess playing computer could beat him in the next decade. By 1978, chess playing programs were so good that an actual match was scheduled to decide the outcome of Levy's bet---who knew what would happen? Levy did win, beating the top program, CHESS 4.7, at the Canadian National Exhibition in Toronto that year. It was a victory for “humanity,” as Levy had perhaps only partly joked at the time. It was also to prove short-lived, as computer chess continued its inexorable progress towards Grand Master level play in the 1980s.

The 1980s and Personal Computing

The 1980s ushered in the era of personal computers. The Apple II, TRS-80, and Commodore PET all arrived on the personal computer market, and amateur programmers began experimenting with and playing against chess programs in their living rooms, on their own time. Chess playing systems were also sold, stand-alone, to the public. The Chess Challenger from Fidelity Electronics played only amateur level chess but was a popular consumer product of the time. No one was quite sure what computers could do ultimately, but by the 1980s it was obvious that they could play chess---and pretty well.

The World Microcomputer Chess Championship (WMCCC) launched in 1980, hosting the first official chess competitions restricted to microcomputers. And perhaps the best harbinger of what was to come was the Fredkin Prize, offering a $100,000 award for any computer program that defeated a reigning World Chess Champion. The race to beat the best human was on. Carnegie Mellon researchers led the way with two systems---Hitech and ChipTest---and later Deep Thought, the first system capable of playing Grand Master level chess. Deep Thought, named after the quirky computer in Douglas Adams' The Hitchhiker's Guide to the Galaxy, eclipsed a rating of 2400 in 1989. By the end of the eighties, too, the Carnegie Mellon designed systems were beating flesh-and-blood human Grand Masters. The stage was set, it seemed, for the historic moment when a machine would vanquish an expert human, at a game thought by many to signify human ingenuity and wit.

https://www.youtube.com/watch?v=NuGT_L13bQ8

History of Chess Playing Computers – 1950 to 1990

1950 -- Claude Shannon publishes "Programming a Computer for Playing Chess", one of the first papers on the problem of computer chess.

1951 -- Alan Turing is first to publish a program, developed on paper, that was capable of playing a full game of chess.

1952 -- Dietrich Prinz develops a program that solves chess problems.

1956 -- Los Alamos chess is the first program to play a chess-like game, developed by Paul Stein and Mark Wells for the MANIAC I computer.

1956 -- John McCarthy invents the alpha-beta search algorithm.

1957 -- The first programs that can play a full game of chess are developed, one by Alex Bernstein and one by Russian programmers using a BESM.

1958 -- NSS becomes the first chess program to use the alpha-beta search algorithm.

1962 -- The first program to play credibly, Kotok-McCarthy, is published at MIT.

1963 -- Grandmaster David Bronstein defeats an M-20 running an early chess program.

1966–67 -- The first chess match between computer programs is played. Moscow Institute for Theoretical and Experimental Physics (ITEP) defeats Kotok-McCarthy at Stanford University by telegraph over nine months.

1967 -- Mac Hack Six, by Richard Greenblatt et al., introduces transposition tables and becomes the first program to defeat a person in tournament play.

1968 -- David Levy makes a bet with AI researchers that no computer program would win a chess match against him within 10 years.

1970 -- The first year of the ACM North American Computer Chess Championships.

1974 -- Kaissa wins the first World Computer Chess Championship.

1977 -- The first microcomputer chess playing machines, CHESS CHALLENGER and BORIS, were created.

1977 -- The International Computer Chess Association is established.

1977 -- Chess 4.6 becomes the first chess computer to be successful at a major chess tournament.

1978 -- David Levy wins the bet made 10 years earlier, defeating Chess 4.7 in a six-game match by a score of 4½–1½. The computer's victory in game four is the first defeat of a human master in a tournament.

1980 -- The first year of the World Microcomputer Chess Championship.

1980 -- The USCF prohibits computers from competing in human tournaments except when represented by the chess systems' creators.

1980 -- The Fredkin Prize is established.

1981 -- Cray Blitz wins the Mississippi State Championship with a perfect 5–0 score and a performance rating of 2258. In round 4 it defeats Joe Sentef (2262) to become the first computer to beat a master in tournament play and the first computer to gain a master rating.

1982 -- Ken Thompson's hardware chess player Belle earns a US master title.

1983 -- David Horne releases 1K ZX Chess, which uses only 672 bytes of RAM, for the Sinclair ZX81.

1988 -- HiTech, developed by Hans Berliner and Carl Ebeling, wins a match against grandmaster Arnold Denker 3½–½.

1988 -- Deep Thought shares first place with Tony Miles in the Software Toolworks Championship, ahead of former world champion Mikhail Tal and several grandmasters including Samuel Reshevsky, Walter Browne and Mikhail Gurevich. It also defeats grandmaster Bent Larsen, making it the first computer to beat a GM in a tournament. Its rating for performance in this tournament of 2745 (USCF scale) was the highest obtained by a computer player.

1989 -- Deep Thought loses two exhibition games to Garry Kasparov, the reigning world champion. (source: Wikipedia)

Deep Blue versus Garry Kasparov

Many of us now know how this story ends. Yet it's a good story to tell. IBM purchased Deep Thought, and created a chess programming group, led by the original Carnegie Mellon team who developed it, with the sole aim of defeating the reigning World Chess Champion, Garry Kasparov. Kasparov defeated Deep Thought handily in 1989, and in 1996 beat its successor---renamed Deep Blue, a play on IBM's nickname, “Big Blue”---in a first formal match. The rematch came in 1997, when Deep Blue arrived with improved software and significant hardware upgrades. Deep Blue could evaluate 200 million chess positions per second, an unprecedented level of computing power at the time.

The historic machine-versus-man showdown happened as follows. The match was held in May 1997 at the Equitable Center in New York City, in a small television studio rather than an auditorium, in order to film and broadcast the event. The studio seated about five hundred people, and was sold-out for each of the six games. It seemed that the entire world was watching. In what follows, we'll review the key details of the six games.

Game 1

Kasparov opened with a somewhat standard attack strategy, infrequently used by Grand Masters (GMs) but common in club play, known as the King's Indian Attack. He defeated Deep Blue in forty-five moves, signaling that perhaps the “heavily modified” Deep Blue program was still unequal to Kasparov's chess playing abilities.

Game 2

Game 2 began with Deep Blue choosing a common attack strategy known as the Ruy Lopez, or Spanish Opening, after the 16th century Spanish priest Ruy Lopez de Segura. Game 2, seemingly off to another victory for Kasparov, unexpectedly turned sour, however. Kasparov accused Deep Blue of “cheating” (of having knowledge of his moves somehow), and resigned his position to the computer. The bombshell accusation from the normally iron-willed Kasparov became the subject of a documentary titled Game Over: Kasparov and the Machine, and is perhaps one of the strangest moments in the history of top-level chess.

What adds to the oddity of Game 2, as well, is the evident blunder Kasparov made in resigning the game, a fatalistic move later determined to be unnecessary: Kasparov had failed to notice that he could have forced a draw by perpetual check with his Queen. Kasparov, by many estimates the greatest human chess player the world had ever seen, had inexplicably gotten flustered by a machine, resigning his position rather than noticing the opportunity for a draw, and, to boot, somewhat petulantly complaining that IBM was cheating. We move now to Game 3.

Game 3

In Game 3, Kasparov chose an irregular opening, known as the Mieses Opening, reasoning that the computer would not immediately recognize the opening, which would force it out of what is known as its “opening book”---its stock of expected opening moves for which the IBM designers had prepared specific strategies in advance. In spite of this strategy by an apparently recovered Kasparov, Game 3 ended in a draw.

Game 4

In Game 4, Kasparov played the common Caro-Kann defense against the opening attack of Deep Blue. Kasparov made a number of hurried moves early on in Game 4 using this defense, and later got into time trouble as the game drew to a close. Deep Blue won.

Game 5

Kasparov opened Game 5 with the King's Indian Attack again, and played solidly. Yet Deep Blue played an impressive endgame, forcing a draw when it had looked early on that Kasparov would win. Game 5, though only a draw, was pivotal because it showed yet again the almost eerie competence of the computer against the reigning world champion. The final game, Game 6, was decisive.

Game 6

Kasparov again played the Caro-Kann defense against the machine. Then, unexpectedly, he permitted Deep Blue a Knight sacrifice that quickly dismantled his defensive strategy, and the computer vanquished Kasparov in the deciding Game 6 inside of 20 moves. On May 11, 1997, Kasparov officially lost to Deep Blue.

In a sequence of statements to the press after his defeat, Kasparov did nothing to dispel the feeling many had while watching the match that he had been intimidated by the vastly improved IBM system. Kasparov was so unnerved by the play of Deep Blue, in fact, that he spoke in almost apocalyptic terms of his loss: “In certain kinds of positions, it sees so deeply that it plays like God.” To the public, too, his loss seemed to signify a deeper and perhaps darker turn of fortunes for the human race. What was next, if even the great Kasparov couldn't beat a mere machine, at an intellectual game?


Lessons from Deep Blue

So what really happened? Did Deep Blue “play like God?” More to the point, did it play like a human? In a word: no. In fact, the same MiniMax algorithm proposed decades earlier formed the basis of the Deep Blue system that vanquished Kasparov in 1997. What had changed, significantly, were two factors, both arguably the result of human innovation rather than improvements in actual machine “smarts.” One, the analysis of the game of chess, and in particular of the tactics and strategies of Grand Master players, became a major focus of computer research. This was the heuristics game, begun in the 1950s and pursued relentlessly by teams of chess researchers in their quest to improve on the base MiniMax algorithm by coding in the moves and “smarts” of good players, as shortcuts.

The Deep Blue team discovered increasingly better representations of the game of chess as a formal system---a computer program---encoding more and more of the brilliant tactics and moves used by experts into the cold logic of the machine. This enabled the machine to check promising paths faster, and to eliminate dead-ends more easily. The old MiniMax “backbone” of computer chess playing became more and more efficient, quick, and savvy in different game situations---all because of the efficiency and smarts of the human researchers. This was a triumph, all right, but for the human ingenuity of the scientists (a fact lost in the immediate emotionalism of the Deep Blue victory).

Second, advances made possible by Moore's Law enabled computers to “brute force” many aspects of chess play, looking eleven or twelve moves ahead in the game, by searching the branches, pursuing the possibilities further and further into the future, where humans (presumably) could not follow. Such advances in brute computation were a kind of “smarts,” perhaps, but more importantly they demonstrated that many intellectual pursuits previously thought the sole purview of human minds could be analyzed and attacked by other methods. Deep Blue wasn't smart like Kasparov, but it didn't have to be, it turned out. It could beat him anyway.

Modern Chess Computers

Today, the chess prowess of Deep Blue is available on our laptops, or even in our pockets, on handhelds. The seven foot tall mainframe towers that housed Deep Blue's “mind” are gone, and strong computer chess is a commonplace---thanks to computing advances made possible by Moore's Law. We are less inclined to see the “mind of God” in our pockets, perhaps. Computer chess playing, in a very real sense, has lost much of its initial ability to inspire our fears, our wonderment, and our awe.

Still, interest in modern computer chess continues. Computer chess systems, once the purview of research labs and powerhouse companies like IBM, have entered everyday life on the Internet. Notably, open-source systems have proliferated, including a number of world-class strong chess playing systems such as Stockfish. Stockfish, under the open-source GPL License, boasts an impressive 3387 Elo rating (as of May 30 2015), making it one of the strongest computer chess systems in the world, downloadable for free on computers running Linux, OS X, or Windows, as well as (yes) iOS or Android devices.

Today, too, much is known about computer chess play. Hundreds and even thousands of prior matches have now been analyzed; so-called endgame databases, containing moves and counter-moves in various endgame positions from prior matches, are available; and different search and heuristic techniques have proven successful in different systems over the years. The work of scientists like Claude Shannon and Herbert Simon, who pioneered MiniMax and alpha-beta pruning to make computer chess possible at all, is still relevant today: modern developers continue to use the basic MiniMax framework with alpha-beta pruning, but powered now by hardware unthinkable in Shannon's time.

The development of new chess playing systems, too, has evolved as computer chess has become more commonplace and less the domain of expensive research programs. Developers on the Web can now build and modify chess engines and full systems using a number of de facto standards. Portable Game Notation (PGN) is used to record and exchange complete games, move by move, while individual positions are represented in a notation known as Forsyth-Edwards Notation (FEN). Chess programs today have also adopted standard algebraic notation for moves, and these shared conventions facilitate development, sharing, and modification among chess engines and systems.
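To give a flavor of FEN, here is a minimal sketch that expands the piece-placement field of a FEN string into an 8×8 board. This is an illustrative parser only: a real one would also handle the side-to-move, castling, en passant, and move-counter fields, and validate its input.

```python
def fen_board(fen):
    """Expand the piece-placement field of a FEN string into an 8x8 array.

    FEN's first field lists ranks 8 down to 1, separated by '/'; letters
    are pieces (uppercase = White) and digits are runs of empty squares.
    """
    placement = fen.split()[0]
    board = []
    for rank in placement.split("/"):
        row = []
        for ch in rank:
            if ch.isdigit():
                row.extend(["."] * int(ch))  # digit = that many empty squares
            else:
                row.append(ch)               # piece letter
        board.append(row)
    return board

# The standard starting position, in full FEN:
start = "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1"
board = fen_board(start)
```

The compactness is the point: a single line of text captures the entire state a Markov game like chess needs, which is why engines, databases, and user interfaces all exchange positions this way.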

Search Strategies

Search---searching the chess game tree---remains perhaps the most important and fundamental aspect of modern computer chess. We can take a brief look at the key issues with chess search in what follows, beginning with Shannon's original Type A/Type B distinction.

Claude Shannon's 1950 paper distinguished between “Type-A” and “Type-B” search strategies for playing computer chess, and the distinction is still relevant today. Type-A search employed a brute-force algorithm that searched all positions equally, out to a horizon of computability.

Shannon thought Type-A strategies were impracticable for computational complexity reasons: there are approximately thirty moves possible from each position, so searching three moves ahead for both sides (each side's move is a ply, so six plies in all) would take an estimated sixteen minutes, even assuming a system that could evaluate a million positions a second (Deep Blue could evaluate 200 million positions a second). But perhaps more importantly, Shannon noted that Type-A strategies were too simplistic. They ignored, for instance, the problem of “quiescence”: a fixed-depth search can stop in the middle of a capture exchange and badly misjudge the position, so a smart heuristic pursues tactically active, promising lines to a greater depth, while settling for a static evaluation of unpromising or “quiet” positions. Quiescent search is the heart of intelligent chess.
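Shannon's back-of-the-envelope arithmetic is easy to reproduce. Assuming roughly thirty legal moves per position and a hypothetical machine evaluating a million positions per second, a full-width six-ply search lands in the same ballpark as his sixteen-minute figure (his exact assumptions differed slightly):

```python
branching = 30        # approximate legal moves per position
plies = 6             # three moves ahead for each side
rate = 1_000_000      # positions per second (hypothetical 1950s-style machine)

positions = branching ** plies   # 30^6 = 729,000,000 positions
seconds = positions / rate       # 729 seconds, roughly 12 minutes
```

At Deep Blue's 200 million positions per second, the same search collapses to a few seconds, which is why brute-force Type-A search became practical after all.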

Shannon suggested that Type-B search strategies would use one of two approaches: either employing some type of quiescent search, or evaluating only a few known good moves for each position. The latter case involves ignoring all moves but those determined to be good (this is also called forward pruning). In practice, however, modern chess playing computers use the former approach---they weight some branches of the search tree more heavily than others, and employ a quicker cut-off for branches that become “quiet.”
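The quiescent-search idea can be sketched as a small extension to the evaluator: rather than scoring a position the instant the depth limit is reached, the search keeps following “noisy” moves (captures, in this sketch) until the position settles down. The `captures` and `evaluate` helpers are hypothetical stand-ins for an engine's move generator and evaluator; scores are negated between plies in the usual negamax convention.

```python
def quiesce(node, alpha, beta, captures, evaluate):
    """Search only capture moves until the position is 'quiet'.

    `evaluate(node)` is the static score from the side to move's view;
    `captures(node)` yields positions reachable by capture moves only.
    """
    stand_pat = evaluate(node)   # score if we simply stop searching here
    if stand_pat >= beta:
        return beta              # already good enough: opponent avoids this
    alpha = max(alpha, stand_pat)
    for child in captures(node):
        score = -quiesce(child, -beta, -alpha, captures, evaluate)
        if score >= beta:
            return beta
        alpha = max(alpha, score)
    return alpha

# Toy usage: a position is (static_score, list_of_capture_children).
def evaluate(node):
    return node[0]

def captures(node):
    return node[1]

# Standing pat scores 0, but one capture leads to a position scored -3
# from the opponent's side, i.e. +3 for the side to move.
position = (0, [(-3, [])])
score = quiesce(position, float("-inf"), float("inf"), captures, evaluate)
```

The point of the extension is exactly the quiescence problem above: a naive fixed-depth search would have stopped at the score of 0, never noticing the profitable capture.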

Modern chess playing computers also use the brute-force Type-A technique to a much greater extent than Shannon envisioned, due no doubt to the consistent exponential increases in computing power and resources from Moore's Law. In contrast, human players rely substantially on Type-B strategies, including the so-called “forward pruning” strategies that skip the evaluation of moves that don't look promising in a given board position. A number of experiments involving interviews of both human chess masters and amateurs (at varying Elo ratings) confirm that human chess players look at between forty and fifty positions before making a move decision. Masters, however, recognize bad moves much more quickly than amateurs, and examine possible good moves in much greater depth. Thus, while humans in general consider comparable numbers of positions when playing chess, the master's process tends to be much more productive---and much more conducive to winning. This is the essence of the Type-B strategy: home in on the known good moves, and ignore everything else.

With computer chess, however, the pattern-recognition capability to ignore, up front, much of the branching game tree has proven to be a much harder problem than simply exploring the game tree to a significant depth, aided by the large store of heuristics and strategies accumulated over years of chess play. Modern chess programs use brute-force search, in other words, made more effective by powerful heuristics gleaned from the expert knowledge of human players and coded into modern systems. Shannon's analysis remains relevant today, but Type-A searches have proven more useful as the power of modern computers continues to skyrocket relative to the early systems. The total number of possible chess positions may be gargantuan (on the order of 10^120), but from any given position, a brute-force search many plies deep (a ply is a single move by one side) is now practical.

Still, systems today employ variations of Shannon's Type-A and Type-B search strategies, and at least some modern chess systems do not rely on major brute-force strategies (Pocket Fritz 4, for instance, runs on a hand-held device, plays world-class chess, and is said to evaluate only about 20,000 positions per second). Endgame databases, providing a rich repository of moves and strategies as noted earlier, are used extensively in modern systems. Combined with the power of modern hardware, chess playing systems consistently outrank---and outplay---master-level human players.

As modern systems eclipse their human competitors, computer chess has assumed a more complementary role (to be discussed ahead). Players interested in learning chess by playing against computer chess systems, or in playing other human players online, have a number of relatively cheap, world-class options available today. Available software includes the following:

(1) Chess game viewers. Players can view previous games on their computers for pedagogical purposes or general interest, using either existing chess software running on PCs or handhelds, or special-purpose software dedicated to viewing and analyzing games.

(2) Chess instruction software. Teaches the game of chess to amateur players, and improves the game of more advanced players.

(3) Chess databases. Players and others interested in chess can search and view historical games in large repositories available online.

(4) Internet chess servers. ICSs facilitate game play, discussion about games, and viewing of previously played games in a massive multiplayer networked environment online. (See the end of this article for ICS resources, as well as the other software offerings mentioned in this list.)

Modern Chess Computing Since Deep Blue

Since Deep Blue's historic victory over Kasparov, a number of high profile man-versus-machine matchups have taken place. In the years after the 1997 showdown, Kasparov himself played strong chess computer systems in major tournaments (Kasparov requested a rematch with Deep Blue, but IBM denied the request and retired the machine, which now sits in the Smithsonian Institution). In recent years, interest has waned in individual matchups between humans and computer chess systems, as the likelihood of a Grand Master beating modern systems continues to decline. As computer science professor Monty Newborn of McGill University remarked to the New York Times on the subject as early as 2006, “I don't know what one could get out of it at this point. The science is done.” Still, a few major matches of note captured the public's interest, and we'll briefly review them here.

Vladimir Kramnik defeated Garry Kasparov in 2000 to become the Classical World Chess Champion, and after defeating a strong opponent in Veselin Topalov, the FIDE World Champion, became the undisputed World Chess Champion from 2006 to 2007. In 2006, Kramnik played the computer chess program Deep Fritz, running on two Intel Core Duo CPUs, losing the match 2-4, with two losses and four draws. Kramnik received a copy of the Fritz program for preparation. The final version of Deep Fritz included an updated opening book; a 5-piece endgame “tablebase” restriction was in place (6-piece endgame tablebases were widely available at the time); and Deep Fritz was not allowed to be modified during the match.

Nonetheless, the first game ended in a draw. Deep Fritz won the second game by exploiting a blunder by Kramnik, which became notorious when Grand Master Susan Polgar referred to Kramnik's second-game mistake as the “blunder of the century” (he overlooked a mate in one). The third, fourth, and fifth games all ended in draws. The final game also went to Deep Fritz, which won the match.

In 2006 also, during the reunification match against Topalov, Kramnik became the center of what came to be called the “bathroom controversy,” when Topalov's manager began complaining that Kramnik was excusing himself to his private bathroom too often during the fourth game of the match. Speculation ensued that Kramnik was using computer assistance while in the bathroom. Topalov's manager later claimed that Kramnik's moves in Game 4 matched the moves the computer chess program Fritz 9 would have made 78% of the time. Kramnik vehemently denied these allegations and countered that Topalov's play in another match (San Luis) corresponded to recommended computer moves at even higher percentages.

The Kramnik “bathroom” controversy only underscores the dominance of computer chess in the decade since Kasparov's loss to Deep Blue in 1997. Kasparov himself participated in two high-profile matches against computers in 2003, both resulting in draws. In the first, Kasparov played a six game match against Deep Junior, a program designed by Israeli programmers and using an opening book (a playbook of initial moves) put together in part by Grand Master Boris Alterman. Deep Junior, dubbed simply “Junior” when not using multiple processors, also used a technique known as “opponent modeling,” where it plays moves that may not be optimal on objective grounds, but are sound given the specific techniques or weaknesses of particular opponents.

In the second, Kasparov played the computer chess program X3D Fritz, running on four Intel Pentium 4 Xeon CPUs, each at 2.8 GHz. This four game match ended in a draw, as well.

By 2004, it was becoming clear that even the strongest human players were no match for the power of chess playing computers armed with the latest hardware and memory. A variation of the man-versus-machine matchup using teams of humans and computer systems was also tried briefly, resulting in a highly publicized match pitting Grand Masters Veselin Topalov, Ruslan Ponomariov, and Sergey Karjakin against the programs Hydra, Deep Junior, and Fritz. The computers won easily, 8½--3½, though the trio of Grand Masters had an impressive average Elo rating of 2681.

By 2007, a newer and even more potent computer chess system, Rybka, had been developed by International Master Vasik Rajlich, winning nearly every major contest from 2007 to 2010. The Swedish Chess Computer Association (SSDF) lists a multiprocessor version of Rybka with an Elo rating of 3209, hundreds of points higher than Grand Master human players. The International Computer Games Association, however, concluded that the Rybka program had been plagiarized from other chess playing programs and thus failed to meet the originality requirements for computer chess competitions, a decision that disqualified Rajlich and his Rybka system from the World Computer Chess Championships (WCCC) from 2006 to 2010. While Rybka, and in particular Deep Rybka, still represents a high point of computer chess play today, other systems such as Komodo, with an incredible Elo of 3304, as well as the open-source Stockfish program, have reached or even surpassed Rybka's impressive level of play.

Today, too, championship-level chess can be played on a mobile device. Pocket Fritz 4, running on an HTC Touch HD mobile phone, won the Copa Mercosur tournament in Buenos Aires in 2009, with a strong Elo rating of 2898. The master level play of computer chess programs running on mobile phones is perhaps the definitive statement that computer chess playing has eclipsed even the strongest human players in recent years.



History of Chess Playing Computers – 1991 to Present

(source: Wikipedia)

2002 -- Vladimir Kramnik draws an eight-game match against Deep Fritz.

2003 -- Kasparov draws a six-game match against Deep Junior.

2003 -- Kasparov draws a four-game match against X3D Fritz.

2004 -- a team of computers (Hydra, Deep Junior and Fritz), wins 8½–3½ against a rather strong human team formed by Veselin Topalov, Ruslan Ponomariov and Sergey Karjakin, who had an average Elo rating of 2681.

2005 -- Hydra defeats Michael Adams 5½–½.

2005 -- Rybka wins the IPCCC tournament and very quickly afterwards becomes the strongest engine.

2006 -- the undisputed world champion, Vladimir Kramnik, is defeated 4–2 by Deep Fritz.

2009 -- Pocket Fritz 4 wins Copa Mercosur 9½/10.

2010 -- Before the World chess championship, Topalov prepares by sparring against the supercomputer Blue Gene, with 8,192 processors capable of 500 trillion (5 × 10^14) floating point operations per second.

2011 -- the controversial decision to strip Rybka of its WCCC titles was made when the ICGA concluded they had sufficient evidence of plagiarism.

“The Science is Done”---The Future of Computer Chess

As computer chess programs now routinely beat Grand Masters, we've entered a new era in the decades-long fascination with chess playing computers---or rather, we've closed out the era. In the McGill computer scientist's oft-quoted observation after Deep Fritz vanquished Kramnik in 2006: “the science is done.” What remains, then, of the future of chess?

Moving forward, a number of developments involve the use of computer systems to enhance human play, rather than pitting man against machine. Kasparov coined the term “advanced chess” to describe a human-to-human matchup in which both players have access to the recommendations of a chess playing computer program. According to Kasparov, who developed this style of play as early as 1998 (a year after he lost to Deep Blue), the play that results is superior to human-only contests, though this claim has never been rigorously tested. Still, it signals the changing, more inclusive way computers are now being integrated into the game of chess.

Other developments online signal the same trend. For instance, Internet Chess Servers such as those deployed by Chess.com, Playchess.com, Chesscube, and others enable online players to play chess against each other in a networked environment. And computer chess programs are now available for free, downloadable on handhelds or laptops, and are routinely used by amateur and professional chess players alike to improve, learn, and review games. Computer chess, it seems, having demonstrated its superiority to human players, has now assumed the role of expert helper to all chess players.

The game of chess, of course, continues, because we like to play it. Yet, as with simpler games like checkers that originally inspired programmers to develop systems capable of playing against human opponents, the era of excitement about chess playing computers appears all but over. Major, funded efforts like IBM's in the 1990s to develop the supercomputer Deep Blue, optimized specifically to play the game of chess and sparing no expense, are gone. Today, the open-source Stockfish engine is freely downloadable, and can be modified under the popular GPL license. World-class chess playing computers seem, now, almost like calculators---accessories to enhance human game play and enjoyment.

IBM, notably, made headlines recently (2011) by developing a state-of-the-art system to play the more complicated game of Jeopardy! The Watson system, like its older cousin Deep Blue (though, as one might guess, not using the same program at all), has vanquished the strongest Jeopardy! players in the world, including Ken Jennings. The challenge and the excitement, it seems, perpetually shift toward other, more difficult games.

And so the original fear and awe Deep Blue inspired in us has, perhaps, been lessened over time, by an increasing recognition that many things we do well can, in fact, have computational solutions. And yet there are no truly “smart robots” around today, even as world-class systems like Rybka, Stockfish, or Komodo vanquish the best human chess players routinely. In a sense, chess is no surprise then, just as the development of pocket calculators became unsurprising decades ago. Chess, after all, is a set of rules that don't change. In computer science parlance, chess is a “formal system” with identifiable symbols, and specified rules. That is just what---exactly what---electronic computers are designed to process.

By contrast, the system of natural language---English, French, Russian, what have you---seems vastly more complicated, more open-ended, and more human than any of the formal games we play. It's no wonder that computers have made scant progress on conversing with people, though they now routinely beat them at the rules-governed game of chess. The real lesson of computer chess, in other words, is not that computers are becoming intelligent like humans. Rather, the real lesson is that humans were smart enough---and machines became fast enough---to show that chess is actually mere computation. But for those human lovers of chess, at any rate, the game will continue, indefinitely. It's fun, after all.

Helpful Resources

For developers, the Stockfish source code is freely available for download. Chess players wishing to play games online can find a number of Internet Chess Server hosts, including Chess.com, the Internet Chess Club (ICC), Playchess.com, the Free Internet Chess Server (FICS), and Chesscube (chesscube.com). Chess.com in particular is a one-stop chess resource on the Web that includes player forums and links to a number of chess resources useful for beginners and experienced players alike. To learn about the history of chess, the United States Chess Federation (USCF) is an excellent resource; High Tech History (hightechhistory.com) also publishes a fairly comprehensive account of the origins and development of the game of chess. A comprehensive list of chess books is available on Wikipedia (search “List of Chess Books”) for those interested in research on chess playing, chess history, and the advent and development of computer chess. Finally, for players interested in obtaining computer chess systems for play (not development), a number of freely available systems can be downloaded, including Arena, Lucas Chess, and others. Chess.com, again, is an invaluable one-stop resource for exploring the different software packages available to interested chess players. Happy playing!

Last Updated: March 23, 2020