Neural networks are made up of many simple, interconnected information processors. Typically, these networks learn to perform tasks by analyzing huge sets of training data and then applying what they've learned to new inputs. They're used for now-typical things like speech recognition and photo manipulation, as well as more novel tasks, like reproducing what your brain actually sees, creating quirky pickup lines and naming craft beers.

The problem is that neural nets are big, and the computations they run through are power-intensive. The ones in your phone tend to be tiny for that reason, which limits their ultimate practicality. Beyond cutting power consumption, the new MIT chip increases the computation speed of neural networks by three to seven times over earlier iterations. The researchers were able to simplify the machine-learning computations in neural networks to a single operation, called a dot product. The dot product captures the combined effect of a node's inputs and weights, so the chip no longer needs to pass intermediate results back and forth to memory, as earlier designs did. The new chip can calculate dot products for multiple nodes (16 nodes in the prototype) in one step instead of moving the raw results of every computation between the processor and memory.
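To see why the dot product is the workhorse here, consider a minimal sketch (illustrative only, not the MIT chip's actual design): each node's core computation reduces to a dot product of its inputs and weights, so a layer of 16 nodes can be evaluated as one batch of dot products rather than shuttling each partial sum out to memory.

```python
def dot(weights, inputs):
    # One node's core computation: a single dot product
    # (multiply each input by its weight, then sum).
    return sum(w * x for w, x in zip(weights, inputs))

def layer(weight_rows, inputs):
    # Evaluate all nodes' dot products in one pass over the
    # inputs -- analogous to the prototype handling 16 nodes
    # in a single step.
    return [dot(row, inputs) for row in weight_rows]

inputs = [0.5, -1.0, 2.0]
# Hypothetical weights for a 16-node layer, purely for illustration.
weight_rows = [[0.1 * (i + 1), 0.2, -0.3] for i in range(16)]

outputs = layer(weight_rows, inputs)
print(len(outputs))  # one output value per node
```

In software this is just a loop; the chip's advantage is performing the multiply-and-accumulate inside the memory array itself, avoiding the round trips this sketch glosses over.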

IBM's vice president of AI Dario Gil thinks this is a huge step forward. "The results show impressive specifications for the energy-efficient implementation of convolution operations with memory arrays," he said in a statement. "It certainly will open the possibility to employ more complex convolutional neural networks for image and video classifications in IoT in the future."