In 2012, a group of scientists from the University of Toronto made an image-classification breakthrough.

At the ImageNet Large Scale Visual Recognition Challenge, an annual artificial intelligence (AI) competition in which contestants vie to create the most accurate image-classification algorithm, the Toronto team debuted AlexNet, which "beat the field by a whopping 10.8 percentage point margin... 41 percent better than the next best," according to Quartz.

Deep learning, the method used by the team, was a radical improvement over previous approaches to AI and ushered in a new era of innovation. It has since found its way into education, healthcare, cybersecurity, board games, and translation, and has picked up billions of dollars in Silicon Valley investments.

Many have praised deep learning and its superset, machine learning, as the general-purpose technology of our era, one more profound than electricity or fire. Others, though, warn that deep learning will eventually best humans at every task and become the ultimate job killer. And the explosion of applications and services powered by deep learning has reignited fears of an AI apocalypse, in which super-intelligent computers conquer the planet and drive humans into slavery or extinction.

But despite the hype, deep learning has some flaws that may prevent it from realizing some of its promise—both positive and negative.

Deep Learning Relies Too Much on Data

Deep learning and deep neural networks, the layered structures that underpin it, are often compared to the human brain. But while our minds can learn concepts and make decisions from very little data, deep learning requires huge numbers of samples to perform even the simplest task.

At its core, deep learning is a complex technique that maps inputs to outputs by finding common patterns in labeled data, then uses that knowledge to categorize new data samples. For instance, give a deep-learning application enough pictures of cats, and it will be able to detect whether a photo contains a cat. Likewise, when a deep-learning algorithm ingests enough sound samples of different words and phrases, it can recognize and transcribe speech.
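
To make that mapping concrete, here is a minimal sketch of the same pattern-finding idea, using a small neural network from scikit-learn on synthetic stand-in data (the features and labeling rule below are arbitrary illustrations, not real cat photos). Training the same model on plenty of labeled samples and then on a mere handful also hints at how strongly accuracy depends on data volume:

```python
# A toy supervised-learning sketch: find patterns in labeled samples,
# then categorize unseen ones. The data here is a synthetic stand-in.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 64))          # 2,000 samples, 64 features each
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # the hidden rule the model must learn

X_train, y_train = X[:1600], y[:1600]
X_test, y_test = X[1600:], y[1600:]

# Train on plenty of data, then on almost none, to show how accuracy
# depends on sample count.
for n in (1600, 20):
    model = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=500, random_state=0)
    model.fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>4} samples -> test accuracy {model.score(X_test, y_test):.2f}")
```

Nothing about the model changes between the two runs; only the amount of labeled data does, which is exactly the dependency described below.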

But this approach is effective only when you have a lot of quality data to feed your algorithms. Otherwise, deep-learning algorithms can make wild mistakes (like mistaking a rifle for a helicopter). When their data is not inclusive and diverse, deep-learning algorithms have even displayed racist and sexist behavior.

Reliance on data also causes a centralization problem. Because they have access to vast amounts of data, companies such as Google and Amazon are in a better position to develop highly efficient deep-learning applications than startups with fewer resources. The centralization of AI in a few companies could hamper innovation and give those companies too much sway over their users.

Deep Learning Isn't Flexible

Humans can learn abstract concepts and apply them to a variety of situations. We do this all the time. For instance, when you play a computer game such as Super Mario Bros. for the first time, you can immediately draw on real-world knowledge, such as the need to jump over pits or dodge fireballs. You can subsequently apply your knowledge of the game to other versions of Mario, such as Super Mario Odyssey, or to other games with similar mechanics, such as Donkey Kong Country and Crash Bandicoot.

AI applications, however, must learn everything from scratch. A look at how a deep-learning algorithm learns to play Mario shows how different an AI's learning process is from a human's. The algorithm starts knowing nothing about its environment and gradually learns, through trial and error, to interact with its different elements. But the knowledge it obtains serves only the narrow domain of that single game and isn't transferable to other games, even other Mario games.
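
The trial-and-error loop at the heart of that process can be sketched in a few lines. The toy "level" below is a hypothetical stand-in for Mario, not an actual game environment, and the algorithm shown is tabular Q-learning, one of the simplest reinforcement-learning methods:

```python
# A minimal sketch of learning a game from scratch by trial and error
# (tabular Q-learning on a toy one-dimensional "level").
import random

N, GOAL = 10, 9                        # positions 0..9; position 9 ends the level
ACTIONS = (-1, +1)                     # step left or step right
Q = {(s, a): 0.0 for s in range(N) for a in ACTIONS}

def pick(s, eps=0.2):
    """Epsilon-greedy: explore sometimes, otherwise exploit; break ties randomly."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    best = max(Q[(s, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(s, a)] == best])

for episode in range(500):
    s = 0                              # the agent begins each run at the start
    for _ in range(1000):              # safety cap on episode length
        a = pick(s)
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == GOAL else 0.0
        # Standard Q-learning update from this single experience.
        Q[(s, a)] += 0.5 * (r + 0.9 * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2
        if s == GOAL:
            break

print(pick(0, eps=0.0))                # after training, the best first move is +1
```

Everything the agent learns lives in the Q table for this one level; change the layout and the table is worthless, which is the transfer problem in miniature.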

This lack of conceptual and abstract understanding keeps deep-learning applications focused on limited tasks and prevents the development of general artificial intelligence, the kind of AI that can make intellectual decisions like humans do. That is not necessarily a weakness; some experts argue that creating general AI is a pointless goal. But it certainly is a limitation when compared with the human brain.

Deep Learning Is Opaque

Unlike traditional software, for which programmers define the rules, deep-learning applications derive their own rules by processing and analyzing training data. Consequently, no one really knows exactly how they reach their conclusions and decisions. Even the developers of deep-learning algorithms often find themselves perplexed by the results of their creations.
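
One way to see why is to look at what a trained model actually consists of. In the sketch below (again on arbitrary synthetic data), everything the network "knows" after training is a set of weight matrices: hundreds of floating-point numbers that encode its decision rules but explain nothing on their own:

```python
# A minimal sketch of model opacity: the learned "rules" are just
# arrays of weights with no human-readable rationale attached.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = (X[:, 3] * X[:, 7] > 0).astype(int)   # a rule the model must infer itself

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000).fit(X, y)

# The model's entire "knowledge" is stored in these weight matrices.
for layer, w in enumerate(model.coefs_):
    print(f"layer {layer}: weight matrix of shape {w.shape}")
```

Tools exist to probe such models after the fact, but the weights themselves carry no explanation, which is the transparency problem the next paragraph describes.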

This lack of transparency could be a major hurdle for AI and deep learning, as the technology tries to find its place in sensitive domains such as patient treatment, law enforcement, and self-driving cars. Deep-learning algorithms might be less prone to making errors than humans, but when they do make mistakes, the reasons behind those mistakes should be explainable. If we can't understand how our AI applications work, we won't be able to trust them with critical tasks.

Deep Learning Could Get Overhyped

Deep learning has already proven its worth in many fields and will continue to transform the way we do things. Despite its flaws and limitations, deep learning hasn't failed us. But we have to adjust our expectations.

As AI scholar Gary Marcus warns, overhyping the technology might lead to another "AI winter," a period in which inflated expectations and underwhelming performance lead to general disappointment and a loss of interest.

Marcus suggests that deep learning is not "a universal solvent but one tool among many," which means that while we continue to explore the possibilities that deep learning provides, we should also look at other, fundamentally different approaches to creating AI applications.

Even Professor Geoffrey Hinton, who pioneered the work that led to the deep-learning revolution, believes that entirely new methods will probably have to be invented. "The future depends on some graduate student who is deeply suspicious of everything I have said," he told Axios.
