Posey's Tips & Tricks

HoloLens 2 Wish List: What Microsoft Could (And Should) Improve

Once a milestone of modern technology, Microsoft's original mixed-reality device is now starting to show its age.

When Microsoft first announced HoloLens back in 2015, the device seemed (at least to me) like an exponential leap forward in technological capability.

The futuristic HoloLens device was capable of things that hardly seemed possible. In fact, I vividly remember showing some of the demo videos to a few people who bluntly stated that those demos could not possibly be real.

In the years since the HoloLens was released, I have had numerous opportunities to use the device, most recently in zero gravity. While I will be the first to tell you that HoloLens is indeed every bit as amazing as the 2015 demo videos made it out to be, I will also be the first to tell you that it is starting to show its age.

In 2015, the HoloLens was nothing short of a miracle of modern technology. Today, however, the device seems kind of bulky, and its computing power does not measure up to current-generation mobile computing devices.

Thankfully, Microsoft has been hard at work on a second-generation HoloLens device. Some reports indicate that Microsoft will formally announce the device before the end of 2018, while other sources state that the announcement will be pushed to the first quarter of 2019. My guess is that Microsoft will probably make the announcement at the Consumer Electronics Show (CES) in Las Vegas this coming January. After all, the show is known first and foremost for showcasing flashy -- dare I say drool-worthy -- tech gadgets.

Microsoft has not yet given us much information about the next HoloLens. We do know, however, that the device will be based on the ARM architecture and will feature a next-generation holographic processor and an artificial intelligence (AI) co-processor.

So with that said, let me tell you what we can probably expect from the second-generation HoloLens device.

The one thing that I think Microsoft absolutely has to do is to improve the device's display. My guess is that Microsoft will focus most of its marketing efforts on the capabilities that are unleashed by the new processor and co-processor (which I will talk about more in a moment), but it will ultimately be the display that really gets people's attention.

The original HoloLens has a notoriously narrow field of view, especially vertically. The display's colors also sometimes appear washed out, especially when using the device in brightly lit areas. Personally, I would be shocked if Microsoft did not improve the device's field of view and the display's color, brightness and resolution. In fact, those improvements alone would be enough to entice me into buying a HoloLens 2.

But what about all of the spiffy new processing hardware that is going to be embedded in the new device? I haven't seen anything definitive about the new capabilities that will be unleashed by the new hardware, but there are certainly a few things that come to mind.

Right now, when you use a HoloLens device, one of the first things it has to do is map your surroundings. Depending on what type of environment you are in and what application you are running, this process can take some time. The new holographic processor will presumably make this mapping process faster. Even so, I think that faster spatial mapping isn't the big story.

As previously mentioned, the new HoloLens will include a built-in AI co-processor. While this co-processor will probably help the device run AI-related apps, there are two more fundamental things that I expect it to do. First, the AI co-processor may help the HoloLens recognize areas that it has already mapped. If you routinely use the device in the same room, there is no need to remap that room from scratch every session. Instead, the AI co-processor may help the device realize that it is operating in a known space. As such, it might do a light scan just to see if anything has moved.
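To make the idea concrete, here is a purely hypothetical sketch of how a device might decide between a full remap and a light rescan. None of this is real HoloLens code; the function names, the signature scheme and the 25 cm grid size are all my own illustrative assumptions.

```python
# Hypothetical sketch: recognizing a known space from a stored
# spatial-map signature, then choosing a light rescan over a full remap.

def room_signature(scan_points):
    """Reduce a spatial scan to a coarse, order-independent signature."""
    # Snap each point to a 25 cm grid so minor sensor noise
    # does not change the signature.
    return frozenset((round(x * 4), round(y * 4), round(z * 4))
                     for x, y, z in scan_points)

def plan_scan(quick_scan, known_rooms, match_threshold=0.8):
    """Compare a quick scan against saved rooms and pick a scan strategy."""
    sig = room_signature(quick_scan)
    for name, saved_sig in known_rooms.items():
        overlap = len(sig & saved_sig) / max(len(saved_sig), 1)
        if overlap >= match_threshold:
            # Known space: just check whether anything has moved.
            return ("light_rescan", name)
    # Unfamiliar space: map it from scratch.
    return ("full_remap", None)

# Example: a previously mapped office.
office = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 2.5)]
known = {"office": room_signature(office)}

print(plan_scan(office, known))             # recognized room
print(plan_scan([(5.0, 5.0, 5.0)], known))  # unknown space
```

The point of the sketch is simply that a cheap recognition pass up front lets the expensive mapping work be skipped whenever the answer is "we've been here before."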

This brings me to the next native capability that I expect to see exposed through the AI co-processor. My guess is that the AI co-processor will help with object recognition. It's one thing for the HoloLens to realize that there is a physical object located at a specific place in the room. It's quite another thing for HoloLens to identify that object as a chair.

Think about it for a moment. If HoloLens is able to identify common objects such as furniture items, it may help to expedite the mapping process, because HoloLens can figure out which items are likely to move and which are likely to remain stationary. Furthermore, object recognition may make it possible to build apps that interact with those objects that have been identified.
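Here is a minimal, made-up illustration of that idea. The labels, the movability table and the data structure are all assumptions of mine, not anything Microsoft has described; the sketch just shows how object recognition could let the mapper rescan only the things that tend to move.

```python
# Illustrative sketch (not a real HoloLens API): once objects have been
# labeled, the mapper could rescan likely-movable items and skip the rest.

# Assumed movability priors for a few common labels.
MOVABLE = {
    "chair": True,
    "laptop": True,
    "sofa": False,
    "wall": False,
    "bookshelf": False,
}

def rescan_targets(labeled_objects):
    """Return only the objects worth rescanning at the start of a session."""
    return [obj for obj in labeled_objects
            if MOVABLE.get(obj["label"], True)]  # unknown labels: rescan to be safe

room = [
    {"label": "wall",  "position": (0.0, 0.0, 0.0)},
    {"label": "chair", "position": (1.2, 0.0, 0.5)},
    {"label": "plant", "position": (2.0, 0.0, 1.0)},  # label not in the table
]

print([obj["label"] for obj in rescan_targets(room)])  # chair and plant only
```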

One last capability that I really hope to see in the next HoloLens is an easier way to create shared experiences, so that everyone in the room can see and interact with the same holograms. This can be done today using the first-generation HoloLens, but creating a shared experience is a surprisingly complex operation and requires a back-end server. It would be nice to be able to create a collaborative space on the fly, without the need for any special hardware or code.