An artificially intelligent Google machine just beat a human grandmaster at the game of Go, the 2,500-year-old contest of strategy and intellect that's exponentially more complex than the game of chess. And Nick Bostrom isn't exactly impressed.

Bostrom is the Swedish-born Oxford philosophy professor who rose to prominence on the back of his recent bestseller Superintelligence: Paths, Dangers, Strategies, a book that explores the benefits of AI, but also argues that a truly intelligent computer could hasten the extinction of humanity. It's not that he discounts the power of Google's Go-playing machine. He just argues that it isn't necessarily a huge leap forward. The technologies behind Google's system, Bostrom points out, have been steadily improving for years, including much-discussed AI techniques such as deep learning and reinforcement learning. Google beating a Go grandmaster is just part of a much bigger arc. It started long ago, and it will continue for years to come.


"There has been, and there is, a lot of progress in state-of-the-art artificial intelligence," Bostrom says. "[Google's] underlying technology is very much continuous with what has been under development for the last several years."

But if you look at this another way, it's exactly why Google's triumph is so exciting—and perhaps a little frightening. Even Bostrom says it's a good excuse to stop and take a look at how far this technology has come and where it's going. Researchers once thought AI would struggle to crack Go for at least another decade. Now, it's headed to places that once seemed unreachable. Or, at least, there are many people—with much power and money at their disposal—who are intent on reaching those places.

This isn't just about Google. It's about Facebook and Microsoft and the other giants of tech. The effort to create the smartest AI has truly become a race, and the contestants are among the most powerful and wealthy people on the planet. The most telling part of Google's triumph may have been the reaction from Facebook founder Mark Zuckerberg.

Building a Brain

Google's AI system, known as AlphaGo, was developed at DeepMind, the AI research house that Google acquired for $400 million in early 2014. DeepMind specializes in both deep learning and reinforcement learning, technologies that allow machines to learn largely on their own. Previously, founder Demis Hassabis and his team had used these techniques in building systems that could play classic Atari video games like Pong, Breakout, and Space Invaders. In some cases, these systems not only outperformed professional game players. They rendered the games ridiculous by playing them in ways no human ever would or could. Apparently, this is what prompted Google's Larry Page to buy the company.

Deep learning relies on what are called neural networks—networks of hardware and software that approximate the web of neurons in the human brain—and it's what drives the remarkably effective image search tool built into Google Photos—not to mention the face recognition service on Facebook and the language translation tool built into Microsoft's Skype and the system that identifies porn on Twitter. If you feed millions of game moves into a deep neural net, you can teach it to play a video game. And with other massive datasets, you can teach neural nets to perform other tasks, including everything from generating results for the Google search engine to identifying computer viruses.
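To make the learning idea concrete, here's a toy sketch: a single artificial neuron, the basic building block of a deep neural net, adjusting its weights from examples until its predictions match the data. Everything here is illustrative—a bare-bones exercise on the OR function, nothing resembling DeepMind's actual code or the scale of a real network.

```python
import math

# Toy training data: learn the OR function from labeled examples.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]   # one weight per input
b = 0.0          # bias term
lr = 1.0         # learning rate

def predict(x):
    # Weighted sum squashed through a sigmoid, giving a value in (0, 1)
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1.0 / (1.0 + math.exp(-z))

# Gradient-descent training loop: nudge the weights toward lower error
for _ in range(2000):
    for x, target in examples:
        error = predict(x) - target
        w[0] -= lr * error * x[0]
        w[1] -= lr * error * x[1]
        b -= lr * error

print([round(predict(x)) for x, _ in examples])  # -> [0, 1, 1, 1]
```

A deep network stacks many layers of such neurons, and the "millions of game moves" play the role of the four examples above.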

Reinforcement learning takes things a step further. Once you've built a neural net that's pretty good at playing a game, you can match it against itself. As two versions of this neural net play thousands of games against each other, the system tracks which moves yield the highest reward—that is, the highest score—and in this way, it learns to play the game at an even higher level. But again, the technique isn't limited to games. It could apply to anything that resembles a game, anything that involves strategy and competition.
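The self-play bookkeeping can be sketched in miniature: two copies of the same (here, purely random) policy play a tiny Nim game against each other, and the system records which moves correlate with winning. Real reinforcement learning then shifts the policy toward the high-reward moves; this sketch, with a made-up toy game, shows only the score-tracking step.

```python
import random
from collections import defaultdict

random.seed(0)

wins = defaultdict(int)   # (stones_left, move) -> games won after playing it
plays = defaultdict(int)  # (stones_left, move) -> times the move was played

for _ in range(20000):
    stones, player = 4, 0
    history = []  # (player, stones, move), kept for credit assignment
    while stones > 0:
        move = random.choice([m for m in (1, 2) if m <= stones])
        history.append((player, stones, move))
        stones -= move
        if stones == 0:
            winner = player  # taking the last stone wins
        player = 1 - player
    for p, s, m in history:
        plays[(s, m)] += 1
        if p == winner:
            wins[(s, m)] += 1

# The opening move with the best empirical win rate:
best = max((1, 2), key=lambda m: wins[(4, m)] / plays[(4, m)])
print(best)  # -> 1 (leaving 3 stones is the winning reply)
```

Swap the random policy for a neural net and the toy game for Go, and the same loop—play, score, reinforce—is the shape of what DeepMind describes.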

AlphaGo uses all this. And then some. Hassabis and his team added a second level of "deep reinforcement learning" that looks ahead to the long-term results of each move. And they lean on traditional AI techniques that have driven Go-playing AI in the past, including the Monte Carlo tree search method, which basically plays out a huge number of scenarios to their eventual conclusions. Drawing from techniques both new and old, they built a system capable of beating a top professional player. In October, AlphaGo played a closed-door match against the reigning three-time European Go champion, which was only revealed to the public on Wednesday morning. The match spanned five games, and AlphaGo won all five.
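The Monte Carlo idea is simple enough to show in a few lines: to judge a move, play it, then finish the game many times with random moves ("playouts") and count how often the move's player ends up winning. Full Monte Carlo tree search grows a search tree and spends more playouts on promising branches; this hedged sketch shows only the playout step, on a toy Nim game rather than Go.

```python
import random

random.seed(1)

def random_playout(stones, player_to_move):
    """Finish the game with random moves; return the winning player."""
    while stones > 0:
        move = random.choice([m for m in (1, 2) if m <= stones])
        stones -= move
        if stones == 0:
            return player_to_move  # taking the last stone wins
        player_to_move = 1 - player_to_move

def monte_carlo_value(stones, move, playouts=5000):
    """Estimated win rate for player 0 after playing `move` from `stones`."""
    wins = 0
    for _ in range(playouts):
        if stones - move == 0:
            wins += 1  # the move itself ends the game in our favor
        elif random_playout(stones - move, 1) == 0:
            wins += 1
    return wins / playouts

best = max((1, 2), key=lambda m: monte_carlo_value(5, m))
print(best)  # -> 2 (leaving 3 stones is the strongest reply)
```

AlphaGo's twist, per the Nature paper's description, is to use its neural nets to steer this search toward plausible moves instead of rolling out at random.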

Epically Complex

Before this victory, many AI experts didn't think beating the best human players was possible—at least not this soon. In recent months, Facebook has worked on its own Go-playing AI system—though it hasn't dedicated nearly as many researchers to the project as DeepMind has. Last week, when we asked Yann LeCun, the deep learning founding father who oversees Facebook's AI work, whether Google may have secretly beaten a Go grandmaster, he said it was unlikely. "No. Maybe. No," he answered.

The problem is that Go is epically complex. An average turn in chess offers about 35 possible moves. A Go turn offers 250. After each one of those moves, there's another 250. And so on. This means that even the largest supercomputer can't look ahead to the results of every possible move. There are just too many of them. As Hassabis says, there are more possible Go positions than atoms in the universe. In order to crack the game, you need an AI that can do more than calculate. It needs to somehow mimic human sight, even human intuition. You need something that can learn.
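The arithmetic behind those claims is worth seeing: a game tree's size grows roughly as (moves per turn) raised to (turns per game). The game lengths below (~80 moves for chess, ~150 for Go) are rough conventional figures, used only to illustrate the scale.

```python
# Back-of-the-envelope game-tree sizes.
chess_tree = 35 ** 80         # roughly 10^123 lines of play
go_tree = 250 ** 150          # roughly 10^359
atoms_in_universe = 10 ** 80  # a commonly cited estimate

print(len(str(go_tree)))                 # -> 360 digits
print(go_tree > atoms_in_universe ** 4)  # -> True: dwarfs the atom count
```

No conceivable brute-force search touches a number with 360 digits, which is why calculation alone can't crack Go.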

That's why Google and Facebook are tackling this problem. If they can solve a problem of such enormous complexity, they can use what they learn as a springboard to AI systems that handle more practical tasks in the real world. Hassabis says that these technologies are a "natural fit" for robotics. They could allow robots to better understand their environment and respond to unforeseen changes in that environment. Imagine a machine that can do your dishes. But he also believes that these technologies can supercharge scientific research, providing a kind of AI assistant that can point researchers towards the next big breakthrough.

And that skips over some of the more immediate applications that will change your everyday life much sooner. DeepMind's techniques can help our smartphones not only recognize images and spoken words and translate from one language to another, but also understand language. These techniques are a path to machines that can grasp what we're saying in plain old English and respond to us in plain old English—a Siri that actually works.

Showing They're Serious

All this explains why Mark Zuckerberg was so eager to talk about Go in a Facebook status update hours before Google revealed it had secretly beaten a grandmaster.

Google's announcement arrived by way of a research paper published in the academic journal Nature, and Facebook employees had gotten their hands on the paper prior to its official release (it was shared with reporters two days before under a non-disclosure agreement). The result was a kind of pre-damage-control campaign from Zuckerberg and many others at the company.

The night before Google's announcement, Facebook AI researchers published a brand new research paper detailing their own work with Go—work that was impressive in its own right—and Zuckerberg trumpeted the paper from his Facebook account. "In the past six months, we've built an AI that can make moves in as fast as 0.1 seconds and still be as good as previous systems that took years to build," he said. "The researcher who works on this, Yuandong Tian, sits about 20 feet from my desk. I love having our AI team right near me so I can learn from what they're working on."

Never mind that Facebook's Go-playing AI isn't as far along as Google's AlphaGo. As LeCun points out, Facebook hasn't devoted as many resources to the Go problem as DeepMind has, and it hasn't spent as much time working on the problem. It's unclear why the company was so interested in highlighting its own work before Google's big day, but the reality is that Facebook—and Zuckerberg in particular—place enormous importance on this sort of AI, and in this, they are very much in competition with Google, which also happens to be their biggest business rival. This AI race, however, is not really just about which company is better at Go. It's about which company can attract the top AI talent. Both Zuckerberg and LeCun know they must show the relatively small AI community that their company is serious about this stuff.

How serious? Well, it's telling that Zuckerberg measures the number of feet between him and Yuandong Tian. Inside Facebook, your importance is judged by how close you sit to Zuck. And, yes, Zuck is personally involved in this quest—very much so. This past New Year's Day, Zuckerberg said that his personal challenge for 2016 was to build an AI system that could help him at both home and work.

Gaming the Threat

Google and Facebook are intent on building artificial intelligence that will, in many ways, exceed the intelligence of humans. But they aren't the only two. Microsoft and Twitter and Elon Musk and so many others are pushing in the same direction. That's a great thing for AI research. And, for people like Nick Bostrom—and, well, Elon Musk—it's a scary thing as well.

As Chris Nicholson, the CEO and founder of deep learning startup Skymind, points out, the kind of AI demonstrated by AlphaGo could apply to almost any problem that you can think of as a game—as something where strategy matters. This includes financial trading, he says, and war. Both cases require a lot more work—and a lot more data. But the thought alone is unsettling. Bostrom's book makes the case that AI could be more dangerous than nuclear weapons, not only because humans could misuse it but because we could build AI systems that we are somehow not able to control.

This isn't even remotely possible with a system like AlphaGo. Yes, the system learns by itself—actually playing games against itself and generating data and strategy on its own. And yes, it can outperform most humans at the game of Go (we're still waiting for the big match against one of the world's very best players). But as complex as Go is, it's a limited universe—not nearly as complex as the real thing. And DeepMind's researchers have complete control over the system. They can change it and shut it down as they like. In fact, it doesn't even make sense to think about this particular machine as a danger.

The worry is that, as researchers continue to improve such systems, they will unknowingly cross a threshold where apocalyptic anxieties do start to make sense. Bostrom says that he and others at his Future of Humanity Institute are looking at ways that reinforcement learning could find its way outside the control of researchers. "Some of the same issues that would arise later in more sophisticated systems we can also find analogies in systems today," he says, explaining that there are small hints that reinforcement learning could lead to situations where machines resist being shut down.

But these are very small hints. Bostrom acknowledges that such dangers are far away—if they come at all. Thanks to his efforts and those of influential technologists like Elon Musk, the wider industry is wise to the potential dangers far earlier than they probably need to be. What these concerns show, more than anything, is that technologies like those under development at DeepMind are tremendously powerful.

Google's Go triumph shows the same thing. But its win is just a prelude. In March, AlphaGo will challenge Lee Sedol, the world's top Go player of the past decade, in a match of even greater importance. Sedol is significantly more talented than Fan Hui, the European champion who lost in London. Fan Hui is ranked 633rd in the world, while Sedol is ranked 5th. Many experts believe that AlphaGo will win this heavyweight bout. If it does, well, that's just a prelude too.