While HoloLens 2 is undoubtedly the aspirational star of Microsoft's augmented-reality (AR) offerings, the company isn't putting all its eggs in that particular basket. Alongside the new HoloLens headset, the company also announced the Azure Kinect development kit: a new version of the Kinect sensor technology.

Though Kinect first shipped as a gaming peripheral, the accessory immediately piqued the interest of researchers and engineers attracted by its affordable depth sensing and skeleton tracking. While Kinect is no longer a going concern on the Xbox, the same technology gives HoloLens its view of the world, and it's now available as a device purpose-built for developing scientific and industrial applications. The Azure Kinect Developer Kit (DK) bumps the specs up substantially compared with the old Xbox accessory: it includes a 12MP RGB camera, a 1MP depth camera, and a 360-degree array of seven microphones, along with an accelerometer and gyroscope. Early customers are using it for applications like detecting when hospital patients fall, automating the unloading of pallets in warehouses, and building software that compares CAD models against the physical parts built from them.
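What makes depth sensing so useful is that every pixel carries a distance, so a single frame can be turned into a 3D point cloud with nothing more than the camera's intrinsics. Here's a minimal sketch of that back-projection using the standard pinhole camera model; the function name and the intrinsics in the example are made up for illustration (on real hardware they come from the device's calibration data):

```python
def depth_to_points(depth, width, height, fx, fy, cx, cy):
    """Back-project a depth image (millimeters, row-major list) into
    camera-space 3D points using the pinhole model."""
    points = []
    for v in range(height):
        for u in range(width):
            z = depth[v * width + u]
            if z == 0:  # zero marks an invalid/unmeasured pixel
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# Toy 2x2 depth frame, 1000 mm everywhere, with invented intrinsics.
pts = depth_to_points([1000] * 4, 2, 2, fx=500.0, fy=500.0, cx=0.5, cy=0.5)
```

Point clouds like this are the raw material for the higher-level tricks the article mentions, from skeleton tracking to comparing a scanned part against its CAD model.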

The original Kinect used only local compute power; indeed, one common criticism was that the Xbox had to reserve a slice of its processing power for handling Kinect data. The new hardware still depends on local computation, with tasks like skeleton tracking handled in software on the PC the device is connected to. But the Azure name isn't just for show: Microsoft views Kinect as a natural counterpart to many Azure Cognitive Services, with the sensor providing data that's then used to train machine-learning systems, feed image- and text-recognition services, perform speech recognition, and so on.

To the cloud!

Microsoft is also building new services explicitly for mixed-reality applications. Azure Spatial Anchors is a service that allows multiple AR platforms (including ARKit on iOS, ARCore on Android, Unity, and of course HoloLens and the Universal Windows Platform) to share maps and waypoints between devices so that many people can move around in and interact with a consistent, shared 3D model of the world. Someone on an iPhone could annotate a piece of a building—highlighting a defect that needs repair, laying down the furniture they want to install, pinning up a picture they like—and other people will be able to see that annotation in exactly the place it's meant to be.
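Conceptually, a spatial anchor is a pose that every device can locate; shared content is stored relative to the anchor rather than to any one device's arbitrary world origin. The toy 2D sketch below illustrates that idea only; it is not the Azure Spatial Anchors API, and the function names, positions, and frames are invented:

```python
import math

def to_anchor_frame(point, anchor_pos, anchor_yaw):
    """Express a world-space 2D point in an anchor's local frame
    (translate by the anchor position, then undo its rotation)."""
    dx = point[0] - anchor_pos[0]
    dy = point[1] - anchor_pos[1]
    c, s = math.cos(-anchor_yaw), math.sin(-anchor_yaw)
    return (dx * c - dy * s, dx * s + dy * c)

def to_world_frame(local, anchor_pos, anchor_yaw):
    """Inverse: rotate by the anchor yaw, then translate."""
    c, s = math.cos(anchor_yaw), math.sin(anchor_yaw)
    x = local[0] * c - local[1] * s
    y = local[0] * s + local[1] * c
    return (x + anchor_pos[0], y + anchor_pos[1])

# Device A locates the anchor at (2, 1) with yaw 0; Device B, whose world
# origin differs, locates the same anchor at (10, -3) rotated by pi/2.
annotation_a = (3.0, 1.0)  # a point Device A annotates in its own frame
local = to_anchor_frame(annotation_a, (2, 1), 0.0)
annotation_b = to_world_frame(local, (10, -3), math.pi / 2)
```

Because both devices agree on where the anchor is, the annotation lands in the same physical spot for each of them, regardless of where each device booted up or which AR platform it runs.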

Azure Remote Rendering is a service that provides, as the name might imply, 3D rendering. While end-user hardware all has some 3D graphics acceleration, there are limits to the size and complexity of the models it can handle. For the complex CAD models used in engineering and architecture, more GPU power is needed. Azure Remote Rendering is Microsoft's solution: instead of rendering simplified models on the device, you can render full-fidelity models in the cloud and have the output streamed to the device in real time.

Details on the service are currently scarce; it has been announced, and not much more. But we wonder if this might be a second area where gaming technology has made its way into the enterprise. The forthcoming xCloud game-streaming service needs low-latency, remotely rendered 3D graphics, which is precisely the problem Remote Rendering has to solve. It wouldn't be too surprising to discover that the two share a common heritage.