Everything we’re injecting artificial intelligence into—self-driving vehicles, robot doctors, the social-credit scores of more than a billion Chinese citizens and more—hinges on a debate about how to make AI do things it currently can’t. What was once a merely academic concern now has consequences for billions of dollars’ worth of talent and infrastructure and, you know, the future of the human race.

That debate comes down to whether the current approaches to building AI are enough. With a few tweaks and the application of enough brute computational force, will the technology we have now be capable of true “intelligence,” in the sense we imagine it exists in an animal or a human?

On one side of this debate are the proponents of “deep learning”—an approach that, since a landmark paper in 2012 by a trio of researchers at the University of Toronto, has exploded in popularity. While far from the only approach to artificial intelligence, it has demonstrated abilities beyond what previous AI tech could accomplish.

The “deep” in “deep learning” refers to the number of layers of artificial neurons in a network of them. As in their biological equivalents, artificial nervous systems with more layers of neurons are capable of more sophisticated kinds of learning.
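To make “depth” concrete, here is a minimal sketch in plain Python. Every name and number in it is invented for illustration: the point is only that a deep network is layers applied one after another, each transforming the output of the layer before it.

```python
# A toy illustration of "depth": each layer transforms the output of the
# previous one, so a deeper network composes more transformations.

def neuron_layer(inputs, weights, biases):
    """One layer: weighted sums of inputs, passed through a simple
    nonlinearity (here, clipping negatives to zero)."""
    return [max(0.0, sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

def deep_network(x, layers):
    """A 'deep' network is just layers applied in sequence."""
    for weights, biases in layers:
        x = neuron_layer(x, weights, biases)
    return x

# Two stacked layers with made-up weights:
layers = [
    ([[1.0, 1.0]], [0.0]),  # layer 1: two inputs -> one neuron
    ([[2.0]], [1.0]),       # layer 2: one input  -> one neuron
]
print(deep_network([1.0, 2.0], layers))  # → [7.0]
```

Real systems use many layers with thousands of neurons each, but the structure is the same: depth is simply how many of these transformations are stacked.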

To understand artificial neural networks, picture a bunch of points in space connected to one another like the neurons in our brains. Adjusting the strength of the connections between these points is a rough analog for what happens when a brain learns. The result is a neural wiring diagram, with favorable pathways to desired results, such as correctly identifying an image.
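The “adjusting the strength of the connections” described above can be sketched in a few lines of NumPy. This is a toy, not any production system: a single layer of weights is nudged, step by step, until the network’s outputs match a desired target (here, the logical OR of two inputs). All names and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: inputs and the desired outputs (logical OR).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 1.0])

# The "connections": one layer of weights plus a bias, randomly initialized.
w = rng.normal(size=2)
b = 0.0

def sigmoid(z):
    # Squashes a weighted sum into a value between 0 and 1.
    return 1.0 / (1.0 + np.exp(-z))

# "Learning": repeatedly nudge each connection strength in the direction
# that reduces the error, carving favorable pathways to the right answer.
for _ in range(2000):
    pred = sigmoid(X @ w + b)
    err = pred - y
    w -= 0.5 * (X.T @ err) / len(y)
    b -= 0.5 * err.mean()

print(np.round(sigmoid(X @ w + b)))  # → [0. 1. 1. 1.]
```

After training, the tuned weights route each input to the correct output, which is the single-layer version of the “neural wiring diagram” the passage describes; recognizing an image works the same way, just with vastly more connections.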