In the summer of 1977, Bobby Fischer was in self-imposed exile in Pasadena, California. The greatest chess player on Earth at the time, Fischer had joined an apocalyptic cult and covered the windows of his grungy apartment with tinfoil. Russian secret police and Israeli intelligence, he insisted, could spy on him through his dental fillings and influence him with radioactive signals. He hadn’t played a recorded game of chess for five years, since defeating Boris Spassky and the Soviet machine in the match of the century in Reykjavik, Iceland, capturing the world championship and becoming an American Cold War hero.

Nevertheless, gripped by paranoia and hidden away from the rest of the world, Fischer wrote letters. He sent two, never before published, to a Carnegie Mellon professor and computer scientist named Hans Berliner. “Lately I’ve been getting a little interested in the computer chess scene,” Fischer wrote in a scrawled longhand that May. “Intellectually it’s a stimulating field, and financially I think it could have a good future.” He asked for Berliner’s help getting involved.

Fischer arranged to travel incognito to Cambridge, Massachusetts, where he played three games against MIT’s Greenblatt program, a chess engine written by programmer Richard Greenblatt. Fischer embarrassed the machine, checkmating it all three times. In his correspondence, he criticized computer programs for making “gross blunders” and called one of them “a piece of junk.” After his triumph over the program, Fischer disappeared again, and wouldn’t play another documented game for 15 years.

But that same year, Monty Newborn, a computer chess pioneer, made a prediction: “Chess masters used to come to computer chess tournaments to laugh. Now they come to watch. Soon they will come to learn.”

For decades, the best humans were better than any machine at marquee, blue-chip intellectual games like chess in the West and Go in the East. Both games sport vibrant competitive scenes, professional circuits, voluminous scholarly study, and a kind of wide-eyed reverence from non-expert onlookers. Since at least 1950, the games have also played host to programmers who have tried to master them, enticed by the prospect of besting the genius widely thought to be required of a chess or Go master.

Over the past few decades, when the two sides faced one another — bots vs. humans — the bouts were treated like heavyweight man-vs.-machine prizefights. But just this spring, one side claimed the title and hung up its gloves. The computers have finally won. There’s no going back. And few are sure where we’re headed.

Human attachment to these games, and our ability to play them, seems baked into our very marrow. When King Charles I of England was sentenced to death for treason in 1649, he brought exactly two possessions to his own beheading: a Bible and a chess set. At the height of his career, Marcel Duchamp — who’s arguably behind only Picasso and Matisse in a ranking of important 20th-century artists — mysteriously moved to Argentina and carved his own chessmen out of wood. He shifted away from art and focused his attention on chess. (And he was good.) There’s a story that in 19th-century Japan, ghosts at a Go match offered a famous Go player three brilliant moves to play. His opponent, a young prodigy, was undone by the phantoms; he lost the game and, as the stones were being cleared, vomited blood on the board, collapsed and died soon after.

Computers have little use for our Bibles or art or blood. Yet they’ve seen more deeply into games in a few decades than humans have been able to over thousands of years. The latest strikes against human dominance were launched from the London offices of an artificial intelligence subsidiary owned by Google. The subsidiary, called DeepMind, was acquired in 2014 for $400 million and is “working on some of the world’s most complex and interesting research challenges, with the ultimate goal of solving intelligence,” according to the company’s website (and my emphasis). One of those challenges was the board game Go. Significantly more complicated than chess, Go has been the brass ring of this world for decades. In 1985, a Taiwanese industrialist and Go promoter put up a $1.4 million prize for any program that beat a top human. The industrialist died in 1997 and the prize expired, unclaimed, in 2000. But DeepMind’s program, AlphaGo, is powered by cutting-edge deep neural networks and crafted by some of the most credentialed AI engineers in the business. Its first major strike came a year ago, when it dispatched international champion Lee Sedol in a five-game match in Seoul. Its second came this past May, at a conference in Wuzhen, China, billed as the Future of Go Summit. The program bloodied the world’s top human player over and over, as well as a tag team of five other elite players.

AlphaGo has become the Rocky Marciano of AI gaming — the undefeated champ, out on top. After those bangs, though, it’s going out with a whimper. In late May, DeepMind quietly announced the retirement of AlphaGo from competitive play. To mark the occasion, the company published 50 games out of the countless number AlphaGo has played against itself (its only real competition, after all), and plans to publish one final research paper describing the algorithm’s efficiency and potential generalization to other problems. Otherwise, that’s it.

(A DeepMind P.R. representative told me the team was unable to comment for this story.)

We lost our collective opposable-thumb grip on chess roughly 20 years earlier, when our human representative Garry Kasparov, a world chess champion, fell in a six-game match at the hands of IBM’s supercomputer called Deep Blue. These twin pillars of intellectual competition — chess and Go — aren’t the only games that have appeared in the crosshairs of the engineers, of course. Checkers, Othello, Connect-Four, backgammon, Scrabble, shogi, Chinese chess and poker have all been the subject of serious computer scientific study. Human intelligence is no match for the artificial kind in any of them anymore. This is partially thanks to advances in the theory of AI. Partially, it’s simple hardware arithmetic. Deep Blue, sitting in an IBM lab, was the 259th fastest supercomputer in the world in 1997, capable of performing about 11 billion operations per second. The 259th fastest supercomputer today — an otherwise anonymous piece of Lenovo hardware sitting somewhere in China — is 60,000 times faster. And thanks to algorithmic improvements, the phone in your pocket could destroy Kasparov.
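The hardware arithmetic above can be sketched in a few lines (the 11-billion-operations-per-second and 60,000x figures are the ones cited here; the rest is just multiplication):

```python
# Back-of-the-envelope: raw hardware gains since Deep Blue,
# using the figures cited in the text.
deep_blue_ops = 11e9   # ~11 billion operations per second (1997, rank 259)
speedup = 60_000       # today's rank-259 supercomputer vs. Deep Blue

modern_ops = deep_blue_ops * speedup
print(f"Today's rank-259 machine: ~{modern_ops:.1e} ops/sec")
# prints Today's rank-259 machine: ~6.6e+14 ops/sec
```

That works out to roughly 660 trillion operations per second, before counting any of the algorithmic improvements that let even a phone outplay Kasparov.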

Seeing these games become the competitive province of artificial intelligence has produced acute responses unique to humans — sadness, crisis, sickness, despair. Lee, after losing to AlphaGo, experienced a textbook existential crisis. “It made me question human creativity,” he said after the match. “I wondered whether the Go moves I have known were the right ones.” One commentator, upon Lee’s defeat, said he “felt physically unwell.” After losing an early game to Deep Blue in 1996, Kasparov was overcome. He went back to his hotel room, stripped to his underwear, and stared at the ceiling.

“In certain kinds of positions, the computer sees so deeply that it plays like God,” Kasparov said.

The headquarters of IBM Research and the office of Murray Campbell are inside a massive, arcing glass building an hour north of New York City. Campbell was a member of the team that created Deep Blue. As a teenager, Campbell would venture into town each day in search of a newspaper with details on the Fischer-Spassky Cold War match. Back in ’97, he sat in a leather chair and moved his supercomputer’s pieces during its defeat of Kasparov, enduring the grandmaster’s leery frowns. Campbell, a former student of Berliner’s, still works on artificial intelligence at IBM, where he has won the company chess championship the past two years.

His desk was littered with stacked files, a pile of vinyl chess boards flanked by old chips from the Deep Blue system sat in a corner, and heavy books lined two shelves. Among them: an encyclopedia of cognitive science and a volume titled “Robots Unlimited.” A cartoon pinned above his desk showed a man playing chess against a toaster: “I remember when you could only lose a chess game to a supercomputer.”

Deep Blue demonstrated to a broad public for the first time that a computer could beat a human at an intellectual task. “That was a shock,” Campbell said. “Now we’re getting sort of used to it.” But with that complacency comes uncertainty about what’s next. What could be the applications (or, at least, implications) of all this computer game-playing? What can these programs do beyond the game board, out in the “real world”? Campbell’s answer was pessimistic. “With board games, it’s somewhat difficult to come up with good examples,” he said. Gerald Tesauro’s work on backgammon in the early ’90s, he added, did bring advances in reinforcement learning. And though they took a long time to pay off, they now crop up frequently in robotics. But that might be an exception hinting at a rule.

“For chess, it’s harder to see,” Campbell said. Little about reality resembles the ancient game, plum prize though it was. “There are precious few zero-sum, perfect-information, two-player games that we compete in in the real world.” And while DeepMind makes claims about AlphaGo being a generalizable AI system, sweeping in its potential applications, it may not in fact be that helpful in dealing with real-world problems. Andrej Karpathy, the director of AI at Tesla, wrote recently that “It is also still the case that AlphaGo is a narrow AI system that can play Go and that’s it.” As far as extending its Go chops to robotics, for example, “Any successful approach would look extremely different.” Campbell echoed the sentiment: “If it can do just Go, I’m not that impressed.”

But the program does represent an achievement of a different kind: “Board games are done,” Campbell said. So much so that their conquerors have been retired. In addition to AlphaGo collecting mothballs, Deep Blue has gone into deep sleep. After the Kasparov match, its processing power was repurposed for a while, put to work on computational finance. But once its hardware became obsolete, half of it was sent to the Smithsonian and half to the Computer History Museum, where they gather dust.

None of this is to say there are no games left. Hell, you could invent a game with the express purpose of making it hard for a computer to play. (In fact, someone did. The game is called Arimaa and computers went on to dominate it, too.) But no matter what new game is found, the competitions aren’t about us anymore. The AIs are being put in the arena to make themselves smarter.

The new arenas may be squarely on the computers’ home turf. David Churchill’s University of Alberta doctoral dissertation is titled “Heuristic Search Techniques for Real-Time Strategy Games.” Put another way: He got his Ph.D. in StarCraft. His 123-page thesis, among other things, describes the quickest way for a computer to get the Protoss race to build dragoon units. I reached him recently by Skype. Churchill told me he’s doing research for DeepMind and Facebook (which also has its own Go venture), but couldn’t talk specifically about it, citing nondisclosure agreements. In any event, last fall, DeepMind and Blizzard Entertainment, StarCraft’s developer, announced a collaboration to “open up” StarCraft II to machine-learning researchers. StarCraft is the most popular real-time strategy game of all time, and has been a de facto national sport of South Korea. It’s beginning to look like the next frontier.

“StarCraft is way, way bigger than chess or Go or any of these games,” Churchill said.

But Churchill doesn’t see this next-gen game project as a chance to score another victory against us crusty old humans. “We’re not even really sure what beating a human means,” he said. StarCraft, along with many other computer games, has a physical, dexterous component. While the best humans can click a mouse and press the keys many times a second, a computer can input many thousands of commands in that time. Not exactly a fair fight …

He also doesn’t care. “The goal isn’t to beat the humans,” Churchill said. “That has no intrinsic value.” In fact, it’s almost beside the point. With the technology and know-how available today, it’d be like saying, “We have this huge hammer, and now we’re just looking for nails.”

So, what exactly is Churchill trying to do? “We’re trying to make a smarter system,” he said. “We need some sort of benchmark to say my new system is better than the old system. This is what research is. How do you judge if a system is quote-unquote smarter than another system? Traditionally, the easiest way to do this is with games.”

Every single computer scientist who works on games whom I’ve ever spoken to has uttered to me, often with a twinge of contrition, the phrase “test bed.” It’s not about the game, man, it’s about what comes next. Here’s a more or less complete list of what exactly they’ve told me all this game work has been a test bed for: airport security, antiterrorism, auctions, biological adaptation, business negotiations, cancer treatment, crime prevention, cybersecurity, diabetes treatment, DNA sequencing, finance, legal work, PTSD treatment, robots, steering evolution and warfare.

“We’re not doing research into games,” Jonathan Schaeffer, whose program conquered checkers, assured me back in 2015.

Oh, right, I forgot. Even if most of these would-be applications remain un-checked-off on the whiteboard, and even if IBM’s Watson hasn’t revolutionized health care and Google’s DeepMind hasn’t “solved intelligence,” there’s a lot to be said for discovery-based research. Much like in pure math, you just never know when and where the applications will arise.

But maybe Bobby Fischer, whose whole life was devoted to playing a board game — and who some would argue was driven mad by a board game — got it right in his letters 40 years ago. He was interested in computer chess, its implications and its possible commercial upside. But something trumped that.

“Mainly I think it’s a fun thing,” he wrote.


CORRECTION (July 11, 4:10 p.m.): An earlier version of this article incorrectly said that artificial intelligence was better at bridge than the best human players of the game. That reference has been removed; the best humans are still better than the best bots at that game.