I had a few ideas for projects on Microsoft HoloLens, but for each project a few things were missing. At least thinking about them resulted in a list of features and hardware improvements that I hope to see within the next few years (are they all realistic?):

Electrochromic Visor — I understand that the visor currently has to be darkened in order to see holograms at all in bright environments, but electrochromic glass can darken when an electric current is applied, so it could adjust automatically to ambient conditions. Even a photochromic polycarbonate would be good to have (some of us like the “light-adaptive lenses” on our eyeglasses that darken according to the ambient light). This way we could also write code through a clear visor, which brings me to the next point:
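As a rough sketch of how such an auto-tint loop might behave (the lux thresholds and the logarithmic mapping are my own illustrative assumptions, not anything from a real headset):

```python
import math

def tint_level(ambient_lux, min_lux=50.0, max_lux=10_000.0):
    """Map an ambient-light reading (lux) to a visor tint in [0, 1],
    where 0 is fully clear and 1 is fully dark."""
    if ambient_lux <= min_lux:
        return 0.0
    if ambient_lux >= max_lux:
        return 1.0
    # Brightness perception is roughly logarithmic, so interpolate in
    # log space rather than linearly.
    return math.log(ambient_lux / min_lux) / math.log(max_lux / min_lux)
```

An office at a few hundred lux would get a light tint for coding through a mostly clear visor, while direct sunlight would drive it fully dark.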

Smoother visor — currently there are some horizontal imperfections resulting in lines that distort the real view. These distortions are usually imperceptible, but they make it hard to write code and read text while wearing HoloLens. I do not know what material the visor is made of, but I would vote for some type of polycarbonate. It has OK optical properties, but it is shatter-proof, so I hope we could even use it safely while driving. I say “OK” optical properties because it is used for eyeglasses (in the USA, not in Europe) and because it has a low Abbe value (visible color separation at steeper angles), but since light passes through the visor at almost 90°, near-normal incidence, that may not be an issue.

Hand tracking — something like the Leap Motion on the Vive, with gesture recognition at least as capable as the Orion SDK. The API should be able to produce events like PalmUp, FingerBent…
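A minimal sketch of what such an event-producing API could look like from the app side (the event names and this dispatcher are hypothetical, not the real HoloLens or Orion SDK surface):

```python
from collections import defaultdict

class GestureEvents:
    """Toy event dispatcher: the tracking layer calls emit() whenever a
    gesture classifier fires, and apps subscribe with on()."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def on(self, event, handler):
        self._handlers[event].append(handler)

    def emit(self, event, **data):
        for handler in self._handlers[event]:
            handler(**data)

events = GestureEvents()
events.on("PalmUp", lambda hand: print(f"{hand} palm up"))
events.on("FingerBent", lambda hand, finger: print(f"{hand} {finger} bent"))

# The tracking loop would call these when it recognises a pose:
events.emit("PalmUp", hand="left")
events.emit("FingerBent", hand="right", finger="index")
```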

Per-pixel occlusion — for real-time occlusion of moving objects (for example hands, related to the point above).
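The idea reduces to a per-pixel depth test between the sensed real scene and the rendered hologram. A minimal NumPy sketch (the array shapes and the binary visibility test are simplifying assumptions, not HoloLens's actual pipeline):

```python
import numpy as np

def composite_with_occlusion(real_rgb, real_depth, holo_rgb, holo_depth, holo_alpha):
    """Show a hologram pixel only where it is closer to the viewer than
    the real surface seen by the depth camera.

    real_rgb / holo_rgb: HxWx3 color arrays; real_depth / holo_depth /
    holo_alpha: HxW arrays, depths in meters.
    """
    visible = (holo_depth < real_depth) & (holo_alpha > 0)
    out = real_rgb.copy()
    out[visible] = holo_rgb[visible]
    return out
```

A hand crossing in front of a hologram would then correctly cut a hole in it, frame by frame, as long as the depth map keeps up with the motion.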

Precise 6-axis controller — with an additional button to invoke Cortana / voice commands.

Direct connection to a PC with a faster GPU — streaming games from Xbox to a Windows 10 PC already feels like magic with near-zero lag, so it may be possible to stream to HoloLens in a similar way. Even better, the 3D geometry could perhaps be rendered by the PC’s GPU as a volumetric video, leaving HoloLens to do the spatial matrix transformation between streamed frames according to its spatial awareness.
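That per-frame pose correction is essentially the late-stage reprojection trick used by streaming VR systems: the frame arrives rendered under an old head pose, and the headset re-warps it under the current one. A sketch of just the pose algebra (ignoring the 2D image warp with depth that a real system would perform):

```python
import numpy as np

def reprojection_correction(view_at_render, view_at_display):
    """A streamed frame was rendered with one head pose but is displayed
    a few dozen milliseconds later under another. The correction maps
    render-camera space into display-camera space:

        p_display = V_display @ inv(V_render) @ p_render

    Both inputs are 4x4 view (world -> camera) matrices.
    """
    return view_at_display @ np.linalg.inv(view_at_render)
```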

Second RGB camera — using a second RGB camera and photogrammetric stereo 3D reconstruction in addition to the existing time-of-flight camera would solve the issue of “seeing through” glossy or dark objects. For example, HoloLens does not see most of my black IKEA furniture, especially the table legs; with stereo reconstruction at least the edges would be resolved in 3D space, and it could then detect the black areas and artificially fill in the missing information. The RGB cameras should also get a bigger sensor, as the current one has to use a longer exposure time (motion blur), which would limit its use for the application mentioned above.
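The edge depths would come from the classic pinhole stereo relation, which works wherever both RGB cameras can match a feature, regardless of how the surface reflects infrared (the numbers in the comment are illustrative, not HoloLens specs):

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Classic pinhole stereo relation: depth = f * B / d.

    disparity_px: horizontal pixel shift of a matched feature between
    the two RGB cameras; focal_px: focal length in pixels; baseline_m:
    camera separation in meters. An edge visible to both cameras gets a
    depth even where a time-of-flight sensor returns nothing, e.g. on
    black, light-absorbing surfaces.
    """
    if disparity_px <= 0:
        raise ValueError("feature must be matched with positive disparity")
    return focal_px * baseline_m / disparity_px

# e.g. f = 1000 px, baseline = 10 cm, disparity = 100 px  ->  1.0 m
```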

API for tracking moving objects — that would probably have to work with the RGB and ToF cameras combined.

API to automatically get a textured spatial mesh — in order to use it as a reflection map for holograms. That would probably mean the spatial mesh should be mostly static instead of remeshing every second like now. It would also mean we would need to abandon Unity, as its reflections are just a bad improvisation. This would also let us capture the lighting of our environment, so hologram shading and cast shadows would make holograms fit perfectly into the real environment.
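Once the textured mesh (or an environment map rendered from it) is available, a hologram shader would sample it along the standard reflection direction. The formula itself is simple; this is a generic illustration, not tied to any particular engine:

```python
import numpy as np

def reflect(incident, normal):
    """Reflection direction for an environment-map lookup:
    R = I - 2 (N . I) N, where `incident` points toward the surface.
    `normal` is normalised here, so it need not be unit length."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    i = np.asarray(incident, dtype=float)
    return i - 2.0 * np.dot(n, i) * n
```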

Replacing the micro-USB port — with a securely connected cable (maybe a magnetic connector, for safety reasons). In my case, gravity alone is almost enough for the cable to slip out. The location of the port also has to change, as the cable ends up between my head and the headband “loop”.

Eye tracking — controlling a cursor with our necks and forcing them to do fine positioning was never something biology intended for us; the atlanto-occipital joint and neck muscles are not designed for that. Dual-eye tracking with vergence measurement would give a pretty accurate 3D cursor.
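Given one gaze ray per eye, the fixation point is where the rays (nearly) cross; the usual closed form takes the midpoint of the shortest segment between the two rays. A geometry sketch, with made-up eye positions in the test rather than real interpupillary data:

```python
import numpy as np

def gaze_intersection(origin_l, dir_l, origin_r, dir_r):
    """Estimate the 3D fixation point from two gaze rays (one per eye)
    as the midpoint of the shortest segment between the rays.

    Standard closest-points-of-two-lines solution: minimise
    |(o_l + t*d_l) - (o_r + s*d_r)| over t, s.
    """
    d_l = dir_l / np.linalg.norm(dir_l)
    d_r = dir_r / np.linalg.norm(dir_r)
    w = origin_l - origin_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w, d_r @ w
    denom = a * c - b * b
    if abs(denom) < 1e-12:       # parallel rays: no vergence signal
        return None
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return (origin_l + t * d_l + origin_r + s * d_r) / 2.0
```

The farther away the fixation point, the closer the rays are to parallel, so accuracy would degrade with distance; at arm's length, where most hologram interaction happens, the vergence angle is still large enough to be useful.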

Field of View improvements — but this is the obvious one. I think a breakthrough innovation here is up to the manufacturer of the waveguides, so it is probably out of Microsoft’s control. Still, I am pretty sure the FOV could grow a few degrees in both directions even with current technology: hologram fidelity at the edges of the FOV is not degraded, so it could be pushed a bit further.

All of these are current technologies that just have to find their way onto the HoloLens. I hope I did not ruin the day of someone on the HoloLens team with this complicated feature list, should they happen to find this post. But if I am looking ahead, I want to look far!