When the world’s smartest researchers train computers to become smarter, they like to use games. Go, the two-player board game born in China more than two millennia ago, remains a nut that machines can’t crack.

Enter Google’s nerds. Demis Hassabis, the artificial intelligence savant behind Google DeepMind, hinted in a video interview that his secretive team has cracked Go.

Since Google plopped down $400 million last year for the AI startup, DeepMind has remained frustratingly silent. It has put out a couple of papers on its training algorithms beating Atari games, using a combination of deep-learning methods that has earned considerable respect in the insular AI world. But it has released little else.

In an interview with the Royal Society of London, Hassabis lets us peek up DeepMind’s sleeve. “Maybe you will have a surprise about Go?” his interlocutor asks.

Hassabis smiles. “I can’t talk about it yet, but in a few months I think there will be quite a big surprise,” he replies. (The full interview is here.)

Last year, Wired went long on the challenge Go poses to machines. Here’s the relevant part:

Similarly inscrutable is the process of evaluating a particular board configuration. In chess, there are some obvious rules. If, ten moves down the line, one side is missing a knight and the other isn’t, generally it’s clear who’s ahead. Not so in Go, where there’s no easy way to prove why Black’s moyo is large but vulnerable, and White has bad aji. Such things may be obvious to an expert player, but without a good way to quantify them, they will be invisible to computers. And if there’s no good way to evaluate intermediate game positions, an alpha-beta algorithm that engages in global board searches has no way of deciding which move leads to the best outcome. Not that it matters: Go’s impossibly high branching factor and state space (the number of possible board configurations) render full-board alpha-beta searches all but useless, even after implementing clever refinements. Factor in the average length of a game — chess is around 40 turns, Go is 200 — and computer Go starts to look like a fool’s errand.
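The excerpt’s numbers can be made concrete. A rough back-of-the-envelope sketch, using the average game lengths cited above (chess about 40 moves, Go about 200) together with commonly cited approximate branching factors (about 35 legal moves per position in chess, about 250 in Go — the latter figure is an assumption not stated in the excerpt):

```python
import math

def tree_size_log10(branching_factor: float, game_length: int) -> float:
    """log10 of b^d: the rough number of leaf positions a full
    game-tree search would have to consider."""
    return game_length * math.log10(branching_factor)

# Chess: ~35 moves per position, ~40-move games.
chess = tree_size_log10(35, 40)    # roughly 10^62 positions

# Go: ~250 moves per position, ~200-move games.
go = tree_size_log10(250, 200)     # roughly 10^480 positions

print(f"chess: ~10^{chess:.0f} leaf positions")
print(f"Go:    ~10^{go:.0f} leaf positions")
```

Even granting alpha-beta pruning its best-case savings (searching roughly the square root of the tree), the Go number stays astronomically beyond any hardware, which is the sense in which full-board search is “all but useless.”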

It’s a fool’s errand rich with visual patterns, good fodder for the type of machine intelligence that DeepMind practices.

Google has given DeepMind, which is based in London, a singular mission: Solve intelligence. This month, the unit hired Drew Purves, a researcher who spent eight years doing ecological modeling at Microsoft. At a recent conference, Purves declined to say what he was doing at DeepMind.

Last month, Facebook said its team of nerds is also working on beating Go.