Geoffrey Hinton was in high school when a friend convinced him that the brain worked like a hologram.

To create one of those 3-D holographic images, you record how countless beams of light bounce off an object and then you store these little bits of information across a vast database. While still in high school, back in 1960s Britain, Hinton was fascinated by the idea that the brain stores memories in much the same way. Rather than keeping them in a single location, it spreads them across its enormous network of neurons.

This may seem like a small revelation, but it was a key moment for Hinton – "I got very excited about that idea," he remembers. "That was the first time I got really into how the brain might work" – and it would have enormous consequences. Inspired by that high school conversation, Hinton went on to explore neural networks at Cambridge and the University of Edinburgh in Scotland, and by the early '80s, he helped launch a wildly ambitious crusade to mimic the brain using computer hardware and software, to create a purer form of artificial intelligence we now call "deep learning."

>'I get very excited when we discover a way of making neural networks better – and when that’s closely related to how the brain works.' Geoffrey Hinton

For a good three decades, the deep learning movement was an outlier in the world of academia. But now, Hinton and his small group of deep learning colleagues, including NYU's Yann LeCun and the University of Montreal's Yoshua Bengio, have the attention of the biggest names on the internet. After honing his ideas as a professor and researcher at the University of Toronto in Canada, Hinton works part-time for Google, where he's using deep learning techniques to improve voice recognition, image tagging, and countless other online tools. LeCun is at Facebook, doing similar work. And artificial intelligence is suddenly all the rage at Microsoft, IBM, Chinese search giant Baidu, and many others.

While studying psychology as an undergrad at Cambridge, Hinton was further inspired by the realization that scientists didn't really understand the brain. They couldn't quite grasp how interactions among billions of neurons gave rise to intelligence. They could explain how electrical signals traveled down an axon – the cable-like protrusion that connects one neuron to another – but they couldn’t explain how these neurons learned or computed. For Hinton, those were the big questions – and the answers could ultimately allow us to realize the dreams of AI researchers dating back to the 1950s.

He doesn't have all the answers yet. But he's much closer to finding them, fashioning artificial neural networks that mimic at least certain aspects of the brain. "I get very excited when we discover a way of making neural networks better – and when that’s closely related to how the brain works," he says with youthful enthusiasm.

In Hinton's world, a neural network is essentially software that operates at multiple levels. He and his cohorts build these networks from interconnected layers of software "neurons," modeled after the columns of neurons you find in the brain's cortex – the part of the brain that deals with complex tasks like vision and language.
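To make that layered idea concrete, here's a hypothetical sketch in plain Python – an illustration of the general technique, not code from Hinton's lab. Each software "neuron" takes a weighted sum of the layer below and squashes it through a sigmoid, and a full pass just repeats that layer by layer:

```python
import math

def forward(layers, x):
    """Feed an input vector upward through successive layers of 'neurons'.

    Each layer is a list of neurons; each neuron is a (weights, bias) pair.
    A neuron's output is a squashed (sigmoid) weighted sum of the layer
    below -- a crude software analogue of a neuron firing on its inputs.
    """
    for layer in layers:
        x = [
            1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(weights, x)) + bias)))
            for weights, bias in layer
        ]
    return x

# A tiny two-layer net: 2 inputs -> 2 hidden neurons -> 1 output neuron.
# These weights are arbitrary, chosen only to show the shape of the data.
net = [
    [([0.5, -0.4], 0.1), ([0.3, 0.8], -0.2)],  # hidden layer
    [([1.2, -0.7], 0.0)],                      # output layer
]
print(forward(net, [1.0, 0.0]))  # a single value between 0 and 1
```

Real systems stack many more layers and neurons, but the principle is the same: each layer builds a slightly more abstract representation of the layer beneath it.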

These artificial neural nets can gather information, and they can react to it. They can build up an understanding of what something looks or sounds like. They're getting better at determining what a group of words means when you put them together. And they can do all that without asking a human to provide labels for objects and ideas and words, as is often the case with traditional machine learning tools.

As far as artificial intelligence goes, these neural nets are fast, nimble, and efficient. They scale extremely well across a growing number of machines, able to tackle more and more complex tasks as time goes on. And they're about 30 years in the making.

The Lunatic Core
----------------

Back in the early '80s, when Hinton and his colleagues first started work on this idea, computers weren’t fast or powerful enough to process the enormous collections of data that neural nets require. Their success was limited, and the AI community turned its back on them, working to find shortcuts to brain-like behavior rather than trying to mimic the operation of the brain.

But a few resolute researchers carried on. According to Hinton and LeCun, it was rough going. Even as late as 2004 – more than 20 years after Hinton and LeCun first developed the "back-propagation" algorithms that seeded their work on neural networks – the rest of the academic world was largely uninterested.
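Back-propagation, the algorithm at the heart of that early work, pushes the network's output error backwards through the chain rule, so each weight learns how much it contributed to the mistake. Below is a minimal, hypothetical sketch in plain Python – a tiny 2-2-1 sigmoid network trained on logical OR – meant only to illustrate the idea, not Hinton and LeCun's original formulation:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(data, steps=20000, lr=0.5, seed=0):
    """Train a tiny 2-2-1 sigmoid network with plain back-propagation."""
    rng = random.Random(seed)
    # Hidden layer: two neurons, each with 2 weights plus a bias (last slot).
    W1 = [[rng.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
    # Output neuron: 2 weights plus a bias.
    W2 = [rng.uniform(-1, 1) for _ in range(3)]
    for _ in range(steps):
        x, t = rng.choice(data)
        # Forward pass.
        h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
        y = sigmoid(W2[0] * h[0] + W2[1] * h[1] + W2[2])
        # Backward pass: chain rule from the squared error back to each weight.
        dy = (y - t) * y * (1 - y)                               # error signal at the output
        dh = [dy * W2[j] * h[j] * (1 - h[j]) for j in range(2)]  # propagated to hidden layer
        for j in range(2):
            W2[j] -= lr * dy * h[j]
            W1[j][0] -= lr * dh[j] * x[0]
            W1[j][1] -= lr * dh[j] * x[1]
            W1[j][2] -= lr * dh[j]
        W2[2] -= lr * dy
    return W1, W2

def predict(W1, W2, x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in W1]
    return sigmoid(W2[0] * h[0] + W2[1] * h[1] + W2[2])

# Teach the net logical OR from its four examples.
OR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
W1, W2 = train(OR)
```

In typical runs, the trained net's outputs settle above 0.5 for the true cases and below for the false one. The crucial point is what made the algorithm historic: the same error-propagation step works no matter how many layers you stack.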

>The AI community turned its back on them, working to find shortcuts to brain-like behavior rather than actually trying to mimic the operation of the brain.

But that year, with a small amount of funding from the Canadian Institute for Advanced Research (CIFAR) and the backing of LeCun and Bengio, Hinton founded the Neural Computation and Adaptive Perception (NCAP) program, an invite-only group of computer scientists, biologists, electrical engineers, neuroscientists, physicists, and psychologists.

Hand-picking these researchers, Hinton aimed to create a team of world-class thinkers dedicated to creating computing systems that mimic organic intelligence – or at least what we know of it: how the brain sifts through a wealth of visual, auditory, and written cues to understand and respond to its environment. Hinton believed creating such a group would spur innovation in AI and maybe even change the way the rest of the world treated this kind of work.

He was right.

Yoshua Bengio, an AI researcher and professor at the University of Montreal. Photo: Josh Valcarcel/WIRED

By the mid-aughts, they had the computing power they needed to realize many of their earlier ideas. As they came together for regular workshops, their research accelerated. They built more powerful deep learning algorithms that operated on much larger datasets. By the middle of the decade, they were winning global AI competitions. And by the beginning of the current decade, the giants of the web began to notice.

In 2011, an NCAP researcher and Stanford professor named Andrew Ng founded a deep learning project at Google, and today, the company is using neural networks to help recognize voice commands on Android phones and tag images on the Google+ social network. Last year, Hinton joined the company, alongside other researchers from the University of Toronto. The aim is to take this work even further.

Meanwhile, Baidu has followed suit with new AI labs in China and Silicon Valley. Microsoft is adding deep learning techniques to its own voice recognition research. And in hiring LeCun, Facebook is exploring new ways of targeting ads and identifying faces and objects in photos and videos.

Yann LeCun, an NYU artificial intelligence researcher who now works for Facebook. Photo: Josh Valcarcel/WIRED

Another NCAP researcher, Terry Sejnowski, is helping shape President Obama’s $100-million BRAIN Initiative, a project to develop a wide range of new tools for mapping neural circuits. Working alongside Hinton, Sejnowski invented the Boltzmann machine, one of the earliest neural nets, back in the early 1980s.

>'We ceased to be the lunatic fringe. We’re now the lunatic core.' Geoff Hinton

With just a half-a-million-dollar-a-year investment from CIFAR, Hinton's consortium of free thinkers is set to feed countless dollars back into the economy. It's already happening at Google. The return on investment for both Canada and the rest of the world has been tremendous, says Denis Therien, CIFAR’s vice president for research and partnerships.

In the process, Hinton and NCAP have changed the face of the community that once spurned them. Students at universities are turning away from more traditional machine learning projects to work on deep learning, says Max Welling, a computer scientist at the University of Amsterdam. "This information has trickled down all the way to the students who are sitting in the Netherlands, far away from where all this happens. They have all picked up on it. They all know about it," he says. "That to me is the ultimate evidence that this has propagated everywhere."

In other words, deep learning is now mainstream. "We ceased to be the lunatic fringe," Hinton says. "We’re now the lunatic core."

Hinton to the Future
--------------------

This fall, the NCAP met up at the Sir Francis Drake Hotel in downtown San Francisco. The group does this once a year, convening for two days of workshops prior to the larger Neural Information Processing Systems, or NIPS, conference, the centerpiece of the AI year. The workshops explored a broad range of tasks that benefit from the marriage of neuroscience and machine learning, including computational graphic design, facial recognition, and motion detection.

During the presentations, Hinton stood quietly near the front of the room. Mostly, he listened, but occasionally, he would interrupt with a pointed question or encourage members of his brain trust to ask questions and prompt discussion. His quiet, humble, and fair leadership, NCAP members say, has created an open and collaborative atmosphere that has directly accelerated the world's AI work, redoubling their resolve to change the field.

NCAP’s December workshop at San Francisco’s Sir Francis Drake hotel. Photo: Josh Valcarcel/WIRED

>'We have never seen machine learning or artificial intelligence technologies so quickly make an impact in industry.' Kai Yu

The deep learning revolution was inevitable, they say, but developments like the speech recognition and artificial vision systems adopted by Microsoft, Google, Yahoo, and other giants of the web came sooner because of the NCAP – and Hinton in particular. "I think that’s what keeps the energy so positive – that Geoff is so engaged, and everyone looks up to Geoff," says Bruno Olshausen, the director of the Redwood Center for Theoretical Neuroscience at the University of California, Berkeley, and an NCAP member. "You think: 'If Geoff is listening, I’ve got to listen too.'"

Others outside the group agree. "Over the last 20 to 30 years, he has been pushing forward the frontier of neural networks and deep learning," says Kai Yu, the director of Baidu’s Institute of Deep Learning. "We have never seen machine learning or artificial intelligence technologies so quickly make an impact in industry. It's very impressive."

Hinton also travels the world giving talks on deep learning, and he mentors graduate students at the University of Toronto and beyond. Welling says that Hinton has a habit of suddenly yelling: "I understand how the brain works now!" It's an infectious thing. "He would do this every week," Welling says. "That's hard to match."

Through NCAP and CIFAR, Hinton runs a summer school for students to learn from NCAP members, working to foster the next generation of AI researchers. With so many commercial companies moving into the field, that is more important than ever. It's not just the tech giants who are joining the movement. We've seen a slew of deep learning startups, including companies like Ersatz, Expect Labs, and Declara.

Where will this next generation of researchers take the deep learning movement? The big potential lies in deciphering the words we post to the web – the status updates and the tweets and instant messages and the comments – and there’s enough of that to keep companies like Facebook, Google, and Yahoo busy for an awfully long time. The aim is to give these services the power to actually understand what their users are saying – without help from other humans. "We want to take AI and CIFAR to wonderful new places,” Hinton says, "where no person, no student, no program has gone before."