At today’s annual Microsoft Build conference we got a new glimpse at the vaunted Microsoft HoloLens and its Windows Holographic platform, complete with a cool live demonstration as well as a number of interesting details about the project.

As Darren (our first exhibitor) walked out on stage, Alex Kipman – one of the main developers behind HoloLens – explained that Darren was seeing an augmented projection of the globe in his view. As viewers, we were able to see what he was seeing thanks to a “custom camera rig” that lets the camera see the same holograms as the HoloLens. The setup appeared to be built around a Red camera.

“This mixed reality grants us permission to reinvent productivity by creating experiences not possible on any other device or any other platform,” said Kipman triumphantly during the presentation. While there is no way of knowing whether we were actually seeing exactly what the exhibitors saw, it was an incredibly impressive demonstration that really showcased the device’s potential and gave us some further insight into it.

The first thing we got a glimpse of was the new platform, Windows Holographic, which runs on Windows 10. The interface showed off a lot of functionality. Screens were projected onto the walls of the mock living room, locked in space (as the camera movement was meant to demonstrate). On them ran a number of “Universal Windows applications,” which work natively across Windows 10 and Windows Holographic. But it was more than just screens: 3D shapes were displayed in the living room environment, like the fishbowl beach scene that showed the weather, or the robot on the coffee table (whom we would be introduced to later). All of these holograms truly appeared locked in space – which, if they have accomplished it to this level, with this kind of detail, is an incredible technological feat.
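Conceptually, “locked in space” means each hologram keeps a fixed pose in room coordinates, and the headset re-expresses that pose relative to the wearer’s tracked head position every frame. Here is a minimal 2D sketch of that idea; the function name and the flat-plane simplification are ours for illustration, not anything from the HoloLens SDK:

```python
# Illustrative sketch of world locking: a hologram keeps a fixed position in
# room coordinates, and each frame we re-express it in the headset's frame,
# so it appears pinned in place no matter how the wearer moves.
import math

def world_to_view(point, head_pos, head_yaw):
    """Transform a world-space 2D point into the headset's view frame."""
    dx = point[0] - head_pos[0]
    dy = point[1] - head_pos[1]
    cos_y, sin_y = math.cos(-head_yaw), math.sin(-head_yaw)
    # Rotate by the inverse of the head's orientation.
    return (dx * cos_y - dy * sin_y, dx * sin_y + dy * cos_y)

# A "screen" anchored 2 m in front of the room origin.
anchor = (0.0, 2.0)

# Wearer at the origin looking straight ahead: hologram dead centre, 2 m away.
print(world_to_view(anchor, (0.0, 0.0), 0.0))   # (0.0, 2.0)

# Wearer steps 1 m to the right: the hologram shifts left in view,
# but its world position never changed -- it stays "locked in space".
print(world_to_view(anchor, (1.0, 0.0), 0.0))   # (-1.0, 2.0)
```

The real device does this in 3D with full rotation and a continuously refined spatial map, but the principle is the same: the hologram’s world pose is the constant, and the view transform is what changes.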

But the demonstration was just beginning, as Darren fired up the UI next. In terms of design, the UI appears to fall in line with the rest of Windows: a flat, gridded look that uses simple, bold colors. Extending his hand in a palm-up gesture, Darren opened a flat, floating menu with nine icons. As he moved his finger through the air, a cursor could be seen floating in the augmented environment in front of the icons, complete with a wide, ghosted square outline – which would appear to aid interaction by making it easier to tell precisely where you are pointing. A quick tap motion with his finger (imagine the motion you use to take a picture) selected the icon. I tend to believe this motion, rather than a simple push-button motion, is purposeful and a nicely nuanced step for gesture-based input. Rather than pushing into air, the ‘camera click’ motion provides some sense of natural tactile feedback that assists with interactivity (it’s hard to push something that doesn’t push back). Based on the frequency of its use during the presentation, gesture-based input seems to be a big part of what HoloLens is trying to accomplish.
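The interaction loop we watched – a cursor hovering over a grid of icons, with a highlight that snaps to the nearest target and a tap that commits the selection – can be sketched in a few lines. Everything here (the grid layout, function names, the hit-test) is our own guess at the pattern, not the actual HoloLens input API:

```python
# Hedged sketch of an air-tap cursor: the pointing position is hit-tested
# against a 3x3 icon grid, the nearest icon gets the "ghosted square"
# highlight, and a quick tap gesture commits the selection.

ICONS = [(col, row) for row in range(3) for col in range(3)]  # nine icons

def hover(cursor):
    """Return the icon under the floating cursor (nearest grid cell)."""
    return min(ICONS, key=lambda icon: (icon[0] - cursor[0]) ** 2 +
                                       (icon[1] - cursor[1]) ** 2)

def air_tap(cursor, tapped):
    """Select the highlighted icon only when the tap gesture fires."""
    return hover(cursor) if tapped else None

# The cursor drifts near the centre icon; the highlight snaps to it...
print(hover((1.2, 0.9)))          # (1, 1)
# ...and the 'camera click' tap commits the selection.
print(air_tap((1.2, 0.9), True))  # (1, 1)
```

Snapping the highlight to the nearest target is what makes imprecise mid-air pointing tolerable – the ghosted outline tells you what will be selected before you commit.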

Gesture input is only part of the mix; we saw a couple of other forms of input demonstrated with the device. One that got the crowd’s attention was using voice commands to have content follow Darren as he walked around the room. By simply saying “follow me” and walking, the screen he had been watching began to travel with him, no longer anchored in space. It was really cool, and it has a number of great potential use cases (beyond not taking your eye off the game when you get up to grab a beer). Microsoft also showed off using a mouse with the HoloLens during an architectural demonstration by Trimble, where the cursor moved through 3D space – which suggests that peripheral devices can be attached to the HoloLens.
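The “follow me” behavior amounts to a small state change: a screen that is normally anchored in world space switches into a mode where it eases toward a spot in front of the wearer each frame. A toy sketch of that state machine, with all class and method names invented by us for illustration:

```python
# Hypothetical sketch of "follow me": a floating screen is world-anchored by
# default, but a voice command flips it into follow mode, where each frame
# it eases toward a point a fixed distance ahead of the wearer.

class FloatingScreen:
    def __init__(self, anchor):
        self.pos = list(anchor)   # world-space position
        self.following = False

    def on_voice_command(self, phrase):
        if phrase == "follow me":
            self.following = True

    def update(self, user_pos, offset=2.0, ease=0.5):
        if not self.following:
            return                # anchored: position never changes
        target = (user_pos[0], user_pos[1] + offset)  # ~2 m ahead of user
        self.pos[0] += (target[0] - self.pos[0]) * ease
        self.pos[1] += (target[1] - self.pos[1]) * ease

screen = FloatingScreen(anchor=(0.0, 2.0))
screen.update(user_pos=(5.0, 0.0))
print(screen.pos)                 # unchanged: [0.0, 2.0]

screen.on_voice_command("follow me")
for _ in range(20):               # eases toward a point ahead of the wearer
    screen.update(user_pos=(5.0, 0.0))
print(screen.pos)                 # now roughly [5.0, 2.0]
```

The easing (rather than rigidly gluing the screen to the head) is what makes followed content feel like it travels with you instead of being bolted to your face.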

Microsoft’s approach to the next generation of computing is all about connecting the digital world and the real one. As Kipman said, the HoloLens “amplifies what is human about each of us.” That is the transformative power of AR and VR. Each of the remaining demos sought to show off that transformative power, demonstrating how the technology will affect more than just one industry – it will shape the future of human-computer interaction in every facet of our lives, enabling an entirely new plane of productivity.

“This mixed reality grants us permission to reinvent productivity” – Alex Kipman

In the construction and architecture industries, Microsoft has been working with Trimble on a number of cool projects, ranging from 3D modeling in AR to socially connected workplaces that allow foremen to virtually check in on different parts of a construction site and help solve problems using advanced overlays. The video demonstration, while rendered, showcased an incredible image of a virtual foreman next to one of the workers as they discussed the placement of a support beam (I think – I’m not versed in the construction realm). In the demonstration the virtual foreman and the worker were able to collaborate live on a model projected into the real world, planning the best course of action.

Another demo showcased AR’s use in the medical and educational fields with a really cool anatomy demonstration. A human body was projected in the room, which was then split into copies, each showing a different layer: bones, vessels, nerves, muscles, and so on. It was also paired with a social interface that, in the demonstration, allowed a doctor to chime in and ask for a consultation on a fracture – a very practical use for AR indeed.

The last demo may have actually been the coolest, showing how AR and the real world can interact using robotics. We were introduced to ‘Miko,’ a unique telepresence robot that looks fairly unassuming – until you put on the HoloLens and it really comes to life. Floating above the robot was a friendly-looking avatar of Miko, which chirped and buzzed to life in a cute and comical fashion before really showing off some cool stuff with the HoloLens.

One of the features that makes the HoloLens work is its ability to spatially scan an environment, much like Google’s Project Tango. But this data can be used for more than just placing screens and images in 3D space: the headset can also transmit the environment scan to a robot, allowing you to control it using gestures in AR. The demonstration showed the exhibitor creating a path for the robot to follow, mapping from point A to point B. From there, they showed off obstacle avoidance using the live scanning by having Kipman stand directly in the robot’s path. The robot quickly used the scan data to remap its route to the same point.
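We don’t know what planner the demo actually ran, but the behavior on stage – plan A to B, someone steps into the path, mark those cells blocked, replan to the same goal – is the classic grid-replanning pattern. A toy sketch, assuming the spatial scan has been reduced to an occupancy grid:

```python
# Toy sketch of the robot demo's replanning: plan A-to-B with breadth-first
# search on an occupancy grid, then when a person steps into the path, mark
# those cells as blocked ('#') and replan to the same goal.
from collections import deque

def plan(grid, start, goal):
    """Shortest path on a 4-connected grid; '#' cells are obstacles."""
    rows, cols = len(grid), len(grid[0])
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:          # walk back to the start
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != "#" and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cur
                frontier.append((nr, nc))
    return None                             # goal unreachable

grid = [list("...."), list("...."), list("....")]
first = plan(grid, (0, 0), (0, 3))       # straight across the top row

grid[0][2] = "#"                          # someone steps into the path
grid[1][2] = "#"
rerouted = plan(grid, (0, 0), (0, 3))     # detours around, same endpoint
print(first)                              # [(0, 0), (0, 1), (0, 2), (0, 3)]
print(rerouted)
```

The interesting part of the real system isn’t the search (BFS or A* is routine) – it’s that the occupancy grid comes from the headset’s live spatial scan, so the map updates the moment a person walks into it.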

Also of interest in the robot demo was the “Universal Spatial UI” system, which operated very similarly to the gesture-based UI demonstrated earlier in the presentation, but had a very different look. This UI system would be something developers could integrate into their own apps as well, providing at least a stepping stone toward standards for UI design in AR.

Microsoft wouldn’t be able to do all the amazing things it showed off on stage today without collaboration with a number of partners, some of which it highlighted during the keynote. Among the biggest names on the list are Disney, NASA, and Unity – but it also includes some interesting entries like Legendary Pictures. With so much VR advertising content for movies already, it will be really interesting to see how AR movie ads take shape as well. Absent from the list, other than Unity, are big-name game developers. After Microsoft showcased Minecraft on the HoloLens in its first promotional video, that absence seems conspicuous – so we may hear more about this at E3.

When the device was first announced, the specs we had on it were extremely limited. The device has an “HD” display (a term that could be as broad as I have been told the FOV is narrow), and also has 3D audio support and some advanced onboard sensors – which likely help stabilize images and track the environment.

What we learned today didn’t add much to the raw specs, but it did give a clearer picture of the kind of device we can expect. It will be fully self-contained, not needing to connect to a phone or a PC. It will be completely wireless, with an onboard battery. It will use no external cameras and no markers, which likely means the positional tracking is, in fact, entirely internal – quite an accomplishment, if it is the case, and one we hope will be open sourced so that it can make its way into mobile VR headsets soon. The device’s enclosure “wraps around the users head and provides great weight distribution,” helping to make it more comfortable.

The device also has between five and seven cameras (depending on what the ones at the bridge of the nose are), facing out from the front of the device and reading the world in real time “in a very power efficient way.” Interestingly, the device also has four silver reflective surfaces; these may be protecting some kind of infrared camera, like the ones on the DK2’s depth sensor, but it is too early to say what they might be exactly.

Microsoft brought “hundreds” of HoloLenses to Build this year, which suggests manufacturing has already started on at least the development kits, if not the consumer product (Microsoft didn’t clarify whether the units being shown were consumer versions). This is a change from the prototype versions shown in January to a select group of journalists.

The HoloLens may be the first true piece of AR hardware when it hits the shelves potentially this year. This is a brave new world, and we are just now augmenting it.

We will continue to follow the progress of HoloLens, and provide any updates as they become available.