As powerful as reinforcement learning is, Dr. LeCun believes other forms of machine learning are more critical to general intelligence.

“My money is on self-supervised learning,” he said, referring to computer systems that ingest huge amounts of unlabeled data and make sense of it all without supervision or reward. He is working on models that learn by observation, accumulating enough background knowledge that some sort of common sense can emerge.

“Imagine that you give the machine a piece of input, a video clip, for example, and ask it to predict what happens next,” Dr. LeCun said in his office at New York University, decorated with stills from the movie “2001: A Space Odyssey.” “For the machine to train itself to do this, it has to develop some representation of the data. It has to understand that there are objects that are animate and others that are inanimate. The inanimate objects have predictable trajectories, the other ones don’t.”

After a self-supervised computer system “watches” millions of YouTube videos, he said, it will distill some representation of the world from them. Then, when the system is asked to perform a particular task, it can draw on that representation — in other words, it can teach itself.
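The prediction task Dr. LeCun describes can be sketched as a toy exercise. The code below (an illustration, not anything from his lab) treats an unlabeled sequence of frames as its own teacher: the model is trained to predict frame t+1 from frame t, and whatever internal structure it learns along the way is its "representation" of how this tiny world moves. All names and the simple linear setup are assumptions made for the sketch.

```python
import numpy as np

# Toy self-supervised setup: a "video" is a sequence of 1-D frames
# produced by a simple deterministic motion. No labels are provided.
rng = np.random.default_rng(0)

def make_clip(n_frames=20, dim=8):
    """An unlabeled 'clip': each frame is the previous one, shifted."""
    frame = rng.normal(size=dim)
    frames = [frame]
    for _ in range(n_frames - 1):
        frame = np.roll(frame, 1)  # the 'motion' of this toy world
        frames.append(frame)
    return np.stack(frames)

# The supervisory signal comes from the data itself:
# given frame t, predict frame t + 1.
clips = [make_clip() for _ in range(50)]
X = np.concatenate([c[:-1] for c in clips])  # inputs: current frames
Y = np.concatenate([c[1:] for c in clips])   # targets: next frames

# Fit a linear predictor W so that X @ W approximates Y (least squares).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# W now encodes the 'physics' of the toy world (a shift), learned
# without anyone ever labeling what motion is.
test_clip = make_clip()
pred = test_clip[:-1] @ W
err = np.mean((pred - test_clip[1:]) ** 2)
print("mean prediction error:", err)
```

A real system replaces the linear map with a deep network and the shifted vectors with raw video, but the principle is the same: the prediction objective forces the model to build an internal picture of how the world behaves.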

Dr. Cox at the MIT-IBM Watson AI Lab is pursuing a similar goal, but by combining more traditional forms of artificial intelligence with deep networks in what his lab calls neuro-symbolic A.I. The aim, he says, is to build A.I. systems that can acquire a baseline level of common-sense knowledge similar to that of humans.