Not every problem has a thousand solutions, but finding one with only a single solution is rare. That's why, when AI researchers set out to solve a problem, they use a method called "optimization" to find not just a solution, but the best solution to that problem.

Imagine the problem as a mountain. Every time researchers run an algorithm to seek a solution, they're trying to climb to its peak.

Finding that peak means running the algorithm over and over. Like climbers working their way up the mountain, the algorithm proposes candidate solutions, compares them against each other, and keeps the best one, moving uphill step by step.
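To make the climbing metaphor concrete, here's a minimal hill-climbing sketch in Python. Everything in it, the objective function, the step size, the iteration count, is an illustrative assumption rather than any particular research system.

```python
import math
import random

def f(x):
    # An illustrative objective with several peaks: the "mountain range".
    # This function is an assumption chosen only for the sketch.
    return math.sin(3 * x) + 0.5 * math.cos(7 * x)

def hill_climb(x, step=0.05, iters=1000):
    """Propose nearby solutions; keep each one only if it climbs higher."""
    best = f(x)
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)  # test a nearby solution
        value = f(candidate)
        if value > best:  # compare: did we move uphill?
            x, best = candidate, value
    return x, best

peak_x, height = hill_climb(0.0)  # a single, fixed starting point
print(f"climbed to x={peak_x:.3f}, height={height:.3f}")
```

Started from a single point like this, the climb can only ever find the peak nearest to where it began.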

However, if every climb starts from the same point, the climbers can only ever scale one mountain. What if it turns out they're standing in a mountain range, and a better solution, a taller peak, can only be reached from a different starting point?

To avoid getting stuck on a minor peak, researchers use diverse techniques and random restarts while optimizing. This injects an element of chance into the search and ensures they don't settle for a poor local optimum. It's a foundational principle of machine learning, but it's one researchers seem to have forgotten when looking at the field of artificial intelligence itself. Creating true general artificial intelligence is a problem too, and lately we're exploring only one path to its solution.
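Random restarts amount to one extra loop around the sketch above: launch climbs from many randomly chosen base camps and keep the tallest summit found. The search interval here is again an assumed placeholder.

```python
def hill_climb_with_restarts(restarts=20, lo=-3.0, hi=3.0):
    """Launch climbs from many random base camps; keep the tallest peak."""
    best_x, best_val = None, float("-inf")
    for _ in range(restarts):
        start = random.uniform(lo, hi)  # a fresh, random starting point
        x, val = hill_climb(start)      # reuses hill_climb from the sketch above
        if val > best_val:              # keep the tallest peak found so far
            best_x, best_val = x, val
    return best_x, best_val

print(hill_climb_with_restarts())
```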

Researchers have begun to focus almost exclusively on deep learning, to the potential detriment of that broader search. No one knows whether deep learning will turn out to be a local optimum, or the peak we've been searching for all along.