Two years ago Stanford professor Andrew Ng joined Google's X Lab, the research group that's given us Google Glass and the company's driverless cars. His mission: to harness Google's massive data centers and build artificial intelligence systems on an unprecedented scale.

He ended up working with one of Google's top engineers to build the world's largest neural network: a kind of computer brain that can learn about reality in much the same way that the human brain learns new things. Ng's brain watched YouTube videos for a week and taught itself which ones were about cats. It did this by feeding the videos through a network with roughly a billion parameters, teaching itself how all the pieces fit together.

But there was more. Ng also built models for processing the human voice and Google Street View images. The company quickly recognized the work's potential and shuffled it out of the X Lab and into the Google Knowledge Team. Now this type of machine intelligence – called deep learning – could shake up everything from Google Glass to Google Image Search to the company's flagship search engine.

It's the kind of research that a Stanford academic like Ng could only get done at a company like Google, which spends billions of dollars on supercomputer-sized data centers each year. "At the time I joined Google, the biggest neural network in academia was about 1 million parameters," remembers Ng. "At Google, we were able to build something one thousand times bigger."

Ng stuck around until Google was well on its way to using his neural network models to improve a real-world product: its voice recognition software. Then, last summer, he invited an artificial intelligence pioneer named Geoffrey Hinton to spend a few months in Mountain View tinkering with the company's algorithms. When Android's Jelly Bean release came out last year, these algorithms cut its voice recognition error rate by a remarkable 25 percent. In March, Google acquired Hinton's company.

Now Ng has moved on (he's running an online education company called Coursera), but Hinton says he wants to take this deep learning work to the next level.

A first step will be to build even larger neural networks than the billion-parameter networks he worked on last year. "I'd quite like to explore neural nets that are a thousand times bigger than that," Hinton says. "When you get to a trillion [parameters], you're getting to something that's got a chance of really understanding some stuff."

Hinton thinks that building neural network models of documents could boost Google Search in much the same way they helped voice recognition. "Being able to take a document and not just view it as, 'It's got these various words in it,' but to actually understand what it's about and what it means," he says. "That's most of AI, if you can solve that."
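To see the difference Hinton is getting at, compare raw word counts with learned vector representations. The tiny sketch below uses made-up word vectors rather than anything a real model learned, but it shows how dense representations can capture that "cat" and "kitten" mean nearly the same thing, even though a bag-of-words view sees no overlap between them at all:

    import numpy as np

    # Hypothetical 2-D word vectors, invented for illustration only.
    # A real deep learning system would learn these from huge amounts of text.
    vec = {
        "cat":    np.array([0.90, 0.10]),
        "kitten": np.array([0.85, 0.20]),
        "car":    np.array([0.10, 0.90]),
    }

    def cosine(a, b):
        # Cosine similarity: near 1.0 means "pointing the same way."
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

    # As raw words, "cat" and "kitten" share nothing. As vectors, they're neighbors.
    print(cosine(vec["cat"], vec["kitten"]))  # ~0.99
    print(cosine(vec["cat"], vec["car"]))     # ~0.22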

Test images labeled by Hinton's brain. Image: Geoff Hinton

Hinton already has something to build on: Google's Knowledge Graph, a database of nearly 600 million entities. When you search for something like "the Empire State Building," the Knowledge Graph pops up a box of facts to the right of your search results. It tells you that the building is 1,454 feet tall and was designed by William F. Lamb.
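In spirit, the Knowledge Graph is a giant lookup table of entities and their facts. Here's a toy sketch of that idea; the structure and field names are invented for illustration and have nothing to do with Google's actual data model:

    # Hypothetical miniature "knowledge graph": entity name -> known facts.
    knowledge_graph = {
        "Empire State Building": {
            "type": "skyscraper",
            "height_ft": 1454,
            "architect": "William F. Lamb",
        },
    }

    def fact_panel(query):
        # Return the stored facts for an entity, the way a results page
        # surfaces a box of information next to the search results.
        return knowledge_graph.get(query)

    print(fact_panel("Empire State Building"))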

Google uses the Knowledge Graph to improve its search results, but Hinton says that neural networks could study the graph itself, both culling out errors and suggesting new facts worth including.

Image search is another promising area. "'Find me an image with a cat wearing a hat.' You should be able to do that fairly soon," Hinton says.

Hinton is the right guy to take on this job. Back in the 1980s, he developed some of the basic computer models used in neural networking. Just two months ago, Google paid an undisclosed sum to acquire his artificial intelligence company, DNNresearch, and now he's splitting his time between his University of Toronto teaching job and his work with Jeff Dean at the company's Mountain View campus, where he's looking for ways to make Google's products smarter.

In the past five years, there's been a mini-boom in neural networking as researchers have harnessed the power of graphics processing units (GPUs) to build ever-larger neural networks that can quickly learn from extremely large sets of data.

"Until recently... if you wanted to learn to recognize a cat, you had to go and label tens of thousands of pictures of cats," says Ng. "And it was just a pain to find so many pictures of cats and label then."

Now, with "unsupervised learning algorithms" like the ones Ng used in his YouTube cat work, machines can learn without that labeling. But to build the really large neural networks, Google first had to write code that could run across a huge number of machines and keep working even when one of the systems in the network failed.
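Here's a minimal sketch of the unsupervised idea: a tiny autoencoder that learns to compress and reconstruct its inputs without seeing a single label. It's purely illustrative; Ng's model had around a billion parameters spread across thousands of machines, not a few dozen numbers on one laptop:

    import numpy as np

    rng = np.random.default_rng(0)

    # Unlabeled "data": 200 samples of 8-D inputs with hidden 2-D structure.
    latent = rng.normal(size=(200, 2))
    X = latent @ rng.normal(size=(2, 8)) + 0.05 * rng.normal(size=(200, 8))

    # Autoencoder: squeeze 8 dimensions down to 2, then reconstruct all 8.
    W1 = rng.normal(scale=0.1, size=(8, 2))  # encoder weights
    W2 = rng.normal(scale=0.1, size=(2, 8))  # decoder weights
    lr = 0.01

    for step in range(2000):
        H = np.tanh(X @ W1)            # encode
        X_hat = H @ W2                 # decode
        err = X_hat - X                # reconstruction error -- no labels anywhere
        dW2 = H.T @ err / len(X)       # backpropagate the squared error
        dH = (err @ W2.T) * (1 - H ** 2)
        dW1 = X.T @ dH / len(X)
        W1 -= lr * dW1
        W2 -= lr * dW2

    print("reconstruction error:", np.mean((np.tanh(X @ W1) @ W2 - X) ** 2))

The only signal the network gets is how well it can reproduce its own input; useful features fall out as a side effect, which is why nobody had to label any cats.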

It typically takes a large number of computers sifting through a large amount of data to train a neural network model. The YouTube cat model, for example, was trained on 16,000 processor cores. But once the training was done, it took just 100 cores to spot cats on YouTube.
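That asymmetry shows up even in a toy model: training loops over the whole data set many times, while using the trained model is one cheap forward pass. A hypothetical sketch of a simple logistic classifier, with nothing of Google's actual systems in it:

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(10_000, 50))           # 10,000 training examples
    y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy labels

    # Training: 100 full passes over all 10,000 examples -- the expensive part.
    w = np.zeros(50)
    for epoch in range(100):
        p = 1 / (1 + np.exp(-(X @ w)))
        w -= 0.1 * X.T @ (p - y) / len(X)

    # Inference: a single dot product per example -- the cheap part.
    def predict(x):
        return 1 / (1 + np.exp(-(x @ w))) > 0.5

    print(predict(X[0]), y[0])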

Google's data centers are based on Intel Xeon processors, but the company has started to tinker with GPUs because they are so much more efficient at this neural network processing work, Hinton says.

Google is even testing a D-Wave quantum computer, a system that Hinton hopes to try out in the future.

But before then, he aims to test out his trillion-parameter neural network. "People high up in Google I think are very committed to getting big neural networks to work very well," he says.