Today, Microsoft announced that the next generation of its mixed reality HoloLens headset will incorporate an AI chip. This custom silicon — a “coprocessor” designed but not manufactured by Microsoft — will be used to analyze visual data directly on the device, saving time by not uploading it to the cloud. The result, says Microsoft, will be quicker performance on the HoloLens 2, while keeping the device as mobile as possible.

The announcement follows a trend among Silicon Valley’s biggest tech companies, which are now scrambling to meet the computational demands of contemporary AI. Today’s mobile devices, where AI will increasingly run, simply aren’t built to handle these sorts of programs, and when asked to do so, the result is usually slower performance, a drained battery, or both.

But getting AI to run directly on devices like phones or AR headsets has a number of advantages. As Microsoft says, quicker performance is one of them, as devices don’t have to upload data to remote servers. This also makes the devices more user-friendly, as they don’t have to maintain a continuous internet connection. And this sort of processing is more secure, as users’ data never leaves the device.

There are two main ways to facilitate this sort of on-device AI. The first is by building special lightweight neural networks that don’t require as much processing power. (Both Facebook and Google are working on this.) The second is by creating custom AI processors, architectures, and software, which is what companies like ARM and Qualcomm are doing. It’s rumored that Apple is also building its own AI processor for the iPhone — a so-called “Apple Neural Engine” — and now, Microsoft is doing the same for the HoloLens.
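To give a sense of what “lightweight neural networks” means in practice, one common technique is post-training quantization: storing a model’s weights as 8-bit integers instead of 32-bit floats, cutting memory and bandwidth to a quarter at the cost of a small approximation error. The sketch below is illustrative only, with made-up weights; it is not drawn from Facebook’s or Google’s actual systems.

```python
import numpy as np

def quantize(weights: np.ndarray):
    """Map float32 weights onto int8 codes plus a single scale factor."""
    scale = np.abs(weights).max() / 127.0
    codes = np.round(weights / scale).astype(np.int8)
    return codes, scale

def dequantize(codes: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 weights from the int8 codes."""
    return codes.astype(np.float32) * scale

# A fake weight matrix standing in for one layer of a network.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

codes, scale = quantize(w)
w_approx = dequantize(codes, scale)

# int8 storage is a quarter the size of float32...
print(codes.nbytes / w.nbytes)  # 0.25
# ...and the worst-case reconstruction error stays below one scale step.
print(float(np.abs(w - w_approx).max()) <= scale)  # True
```

Real mobile inference stacks combine tricks like this with smaller architectures and hardware support, but the basic trade of precision for footprint is the same.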

This race to build AI processors for mobile devices is running alongside work to create specialized AI chips for servers. Intel, Nvidia, Google, and Microsoft are all working on their own projects in this department. This sort of AI cloud power will serve different needs than the new mobile processors (it’ll primarily be sold directly to businesses), but from the viewpoint of silicon design, the two goals are likely to be complementary.

Speaking to Bloomberg, Microsoft Research engineer Doug Burger said the company was taking the challenge of creating AI processors for servers “very seriously,” adding: “Our aspiration is to be the number one AI cloud.” Building out the HoloLens’ on-device AI capabilities could help with this goal, if only by focusing the company’s expertise on chip architectures needed to handle neural networks.

For the second-generation HoloLens, the AI coprocessor will be built into its “Holographic Processing Unit,” or HPU — Microsoft’s name for its central vision-processing chip. This handles data from all the device’s on-board sensors, including the head-tracking unit and infrared cameras. The AI coprocessor will be used to analyze this data using deep neural networks, one of the principal tools of contemporary AI. There’s still no release date for the HoloLens 2, but it’s reportedly arriving in 2019. When it lands, AI will be even more central to everyday computing, and that specialized silicon will likely be in high demand.
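The core workload such a coprocessor accelerates is running neural-network layers — above all convolutions — over a stream of sensor frames. Microsoft hasn’t published the HPU’s pipeline, so the sketch below is only a toy illustration of the operation involved: a naive 2D convolution applying an edge-detecting filter to a small synthetic frame.

```python
import numpy as np

def conv2d(frame: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Naive valid-mode 2D convolution, as one neural-network layer might apply."""
    kh, kw = kernel.shape
    oh = frame.shape[0] - kh + 1
    ow = frame.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=np.float32)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(frame[i:i + kh, j:j + kw] * kernel)
    return out

# A made-up 8x8 "sensor frame" containing a vertical edge.
frame = np.zeros((8, 8), dtype=np.float32)
frame[:, 4:] = 1.0

# A 3x3 vertical-edge filter; trained networks learn many such filters.
kernel = np.array([[-1, 0, 1],
                   [-1, 0, 1],
                   [-1, 0, 1]], dtype=np.float32)

edges = conv2d(frame, kernel)
print(edges.shape)         # (6, 6)
print(float(edges.max()))  # 3.0 — the strongest response sits on the edge
```

A dedicated chip runs millions of these multiply-accumulate operations in parallel, which is why custom silicon beats a general-purpose mobile CPU at this job.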