Yann LeCun is among those bringing a new level of artificial intelligence to popular internet services from the likes of Facebook, Google, and Microsoft.

As the head of AI research at Facebook, LeCun oversees the creation of vast "neural networks" that can recognize photos and respond to everyday human language. And similar work is driving speech recognition on Google's Android phones, instant language translation on Microsoft's Skype service, and so many other online tools that can "learn" over time. Using vast networks of computer processors, these systems approximate the networks of neurons inside the human brain, and in some ways, they can outperform humans themselves.

This week in the scientific journal Nature, LeCun—also a professor of computer science at New York University—details the current state of this "deep learning" technology in a paper penned alongside the two other academics most responsible for this movement: University of Toronto professor Geoff Hinton, who's now at Google, and the University of Montreal's Yoshua Bengio. The paper details the widespread progress of deep learning in recent years, showing the wider scientific community how this technology is reshaping our internet services—and how it will continue to reshape them in the years to come.

But as LeCun tells WIRED, deep learning will also extend beyond the internet, pushing into devices that can operate here in the physical world—things like robots and self-driving cars. Just last week, researchers at the University of California at Berkeley revealed a robotic system that uses deep learning tech to teach itself how to screw a cap onto a bottle. Early this year, big-name chip maker Nvidia and an Israeli company called Mobileye revealed that they were developing deep learning systems that can help power self-driving cars.

LeCun has been exploring similar types of "robotic perception" for over a decade, publishing his first paper on the subject in 2003. The idea was to use deep learning algorithms as a way for robots to identify and avoid obstacles as they moved through the world—something not unlike what's needed with self-driving cars. "It's now a very hot topic," he says.

Yes, Google and many others have already demonstrated self-driving cars. But according to researchers, including LeCun, deep learning can advance the state of the art—just as it has vastly improved technologies such as image recognition and speech recognition. Deep learning algorithms date back to the 1980s, but now that they can tap the enormously powerful networks of machines available to today's companies and researchers, they provide a viable way for systems to teach themselves tasks by analyzing enormous amounts of data.

"This is a chance for us to change the model of learning from very shallow, very confined statistics to something extremely open-ended," Sebastian Thrun, who helped launch the Google self-driving car project, said of deep learning in an interview this past fall.

Thrun has left Google, but odds are, the company is already exploring the use of deep learning techniques with its autonomous cars (the first of which are set to hit the road this summer). According to Google research fellow Jeff Dean, the company is now using these techniques across dozens of services, and self-driving cars, which depend so heavily on image recognition, are one of the more obvious applications.

Trevor Darrell, one of the researchers working on deep learning robots at Berkeley, says his team is also exploring the use of the technology in autonomous automobiles. "From a researcher's perspective, there are many commonalities in what it takes to move an arm to insert a peg into a hole and what it takes to navigate a car or a flying vehicle through an obstacle course," he says.

Deep learning is particularly interesting, he says, because it has transformed so many different areas of research. In the past, he says, researchers used very separate techniques for speech recognition, image recognition, translation, and robotics. But now this one set of techniques—though a rather broad set—can serve all these fields.

The result: all of these fields are suddenly evolving at a much faster rate. Face recognition has hit the mainstream. So has speech recognition. And the sort of autonomous machines his team is working on, Darrell says, could reach the commercial market within the next five years. AI is here. But it will soon arrive in a much bigger way.