"Deep learning" refers to a complex neural network -- a computer program that mimics the human brain. Neural networks have been the driving force behind rapidly improving image and speech recognition tools, but mostly these tasks require an internet connection. When you talk to your Android phone, for example, your voice is actually processed by a server farm somewhere, which does all the heavy lifting.

More recently, Google managed to cram a network into its Translate app, allowing users to translate the text in images on the fly. SwiftKey also runs a small-scale network for word predictions in its SwiftKey Neural application. But all these applications require a large amount of processing power for what are relatively mundane tasks. That's where Movidius' chip comes in.

The Myriad 2 MA2450 is referred to as a "vision processing unit." It really has a single purpose: image recognition. The architecture has very little in common with a traditional CPU; it's designed specifically to handle the myriad (get it?) simultaneous processes involved in neural networks. As such, its power draw when recognizing a face or an image, for example, is much, much lower than that of a Snapdragon processor doing the same task. As for how exactly Google will utilize the chips, that's something we're unlikely to know until it's ready to announce devices.