While the next-generation HoloLens does not have a launch date yet, we now have a better idea of how big a leap the device will take in terms of depth sensor performance.

At the recent Conference on Computer Vision and Pattern Recognition, held in Salt Lake City, Utah, in June, Microsoft researchers gave a tutorial showing off the new HoloLens Research Mode, which gives developers access to the device's sensor data.

During the tutorial, the researchers showed the audience a preview of the depth sensor feed from the Project Kinect for Azure, which Microsoft unveiled earlier this year as the sensor for the next version of HoloLens.

Images by CVPR/YouTube

Video from that presentation has now been made public. The footage shows the level of detail that the Kinect sensor is capable of achieving in rendering a point cloud, with even lanyards and wrinkles in clothing visible in the data feed.
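For context on what that point cloud represents: each pixel of a depth frame can be back-projected into a 3D point using a pinhole camera model and the sensor's intrinsics. The sketch below illustrates the idea with made-up intrinsic values, not the actual Kinect for Azure calibration:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into a 3D point cloud
    using the pinhole camera model. fx, fy, cx, cy are placeholder
    intrinsics for illustration only."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx  # horizontal offset from principal point
    y = (v - cy) * z / fy  # vertical offset from principal point
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Toy example: a flat 4x4 depth frame, every pixel 2 m from the camera
cloud = depth_to_point_cloud(np.full((4, 4), 2.0), fx=500, fy=500, cx=2, cy=2)
print(cloud.shape)  # (16, 3)
```

The density and accuracy of the resulting cloud depend directly on the depth sensor's resolution and noise, which is why fine details like lanyards and clothing wrinkles are a meaningful benchmark.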

The sensor's higher frame rate at long range is also on display: it captures audience members as far as eight rows back, while the point cloud (below right) shows details of chairs and people.


Compare this to the Research Mode footage from the current-generation HoloLens in the same presentation (at the 18:00 mark in the presentation video), or in the video embedded at the bottom of the page, and the improvement is clear.

According to reports, HoloLens 2.0 is expected to arrive sometime next year. Based on the beefed-up capabilities shown in this early preview, it'll be worth the wait.