We had a blast at MR Dev Days at the Microsoft campus in Redmond, WA. This post is a recap of that experience.

TL;DR summary:

Super awesome event for meeting fellow MR developers and learning more about the latest technology.

HoloLens 2 is very advanced in hand tracking and eye tracking, but the display and OS need more work.

Azure Kinect shows great promise but needs streaming.

Event:

1. The community has grown! This was the largest gathering of HoloLens developers ever: 400 attendees, a 4x increase over the roughly 100 attendees at MR JAM around the same time last year. We met lots of new friends from around the world and caught up with many familiar faces. We attribute much of this success to our community champion, Jesse McCulloch.

2. Microsoft did an amazing job this year putting together a stellar program for the two-day conference. It was packed with useful information covering HoloLens 2 hardware, hand tracking, eye tracking, and Azure Kinect. Many key people were at the event. Alex Kipman appeared via a mixed-reality-captured speech in the HoloLens 2 demos, a neat way to compensate for his absence.

3. The Hands On Lab sessions ran for almost the entire length of the two-day conference. Although the number of HoloLens 2 devices was limited, many developers had their first experience deploying to HoloLens 2 with its brand new modes of interaction. It was also a great place for developers to sit together and collaborate.

HoloLens 2:

HoloLens 2 shows amazing progress on hand tracking, eye tracking, and comfort, which we go over in more detail below. Please keep in mind that the devices we tried were still work-in-progress engineering versions, so we expect the quality of the final product to improve further.

1. Hand tracking: many hand gestures are now available to make the experience feel more natural. For example, you can simply grab a holographic object and rotate it with your wrist. Buttons are now meant to be pushed or poked with your fingers under the near-interaction paradigm. The classic air tap and pinch are still available for backward compatibility, except that air tap is now mostly reserved for far interactions. Pinch and drag has become much faster and more accurate than on the original HoloLens, thanks to the new DNN chip.

We performed several speed tests related to hand tracking:

We attempted to play a simple song on the sample piano in the MRTK. The speed, unfortunately, was still a bit too slow for anything faster than quarter notes.

We attempted to type on the keyboard with two index fingers. The speed was okay on distinct keys, but slower when repeating a key (e.g. pressing the “u” key twice).

We tried to draw in mid-air with HoloLens 2. The more responsive hand tracking gave us more accurate drawings, but we could not write legible letters smaller than 5 inches in diameter due to tracking inertia.

We held out our hands, opened and closed the gaps between our fingers, and observed the holographic finger meshes follow our fingers very closely, with roughly a 0.3-second delay.

2. Eye tracking: eye tracking feels truly effortless and magical when implemented right. For example, you can zoom into a map and scroll in any direction just by looking. While the Microsoft team warned us about the accuracy limits of eye-tracking “saccades”, we found it quite accurate when looking at objects larger than 5 inches in diameter. We did notice a consistent tracking offset when looking at the heatmap: the center of the heat spot appeared 3 inches below where we were focusing.
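To make the look-to-scroll interaction concrete, here is a minimal sketch of the kind of logic involved. This is our own illustration in plain Python, not the actual MRTK or HoloLens 2 eye-tracking API; the function name, thresholds, and coordinate convention are all assumptions.

```python
# Illustrative gaze-driven scrolling (our own sketch, not the MRTK API):
# when the gaze point drifts away from the viewport center, scroll in that
# direction, with a central dead zone so steady fixation doesn't move the map.

def gaze_scroll_velocity(gaze, center, dead_zone=0.15, gain=2.0):
    """gaze and center are (x, y) in normalized viewport coords in [0, 1]."""
    dx, dy = gaze[0] - center[0], gaze[1] - center[1]

    def axis(d):
        if abs(d) <= dead_zone:
            return 0.0  # fixating near the center: no scrolling
        # scale the overshoot beyond the dead zone into a scroll speed
        return gain * (abs(d) - dead_zone) * (1 if d > 0 else -1)

    return axis(dx), axis(dy)
```

Looking at the center yields no motion; looking near the right edge yields a positive horizontal scroll velocity. The dead zone matters because eye tracking is noisy around a fixation point, as the saccade warning above suggests.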

3. Display quality: we were shown an engineering version of HoloLens 2. The field of view spanned almost all of the physical glass, which is amazing in itself. However, the display quality was disappointing in resolution and color accuracy, noticeably worse than the original HoloLens. Several Microsoft staff assured us that the display quality of the final version would be on par with the original HoloLens. Our observations:

The holographic slates of standard Edge and Settings apps appeared pixelated.

When we filled our field of view with a white holographic object, the color appeared pink within a quarter of the horizontal FOV from the edge.

After struggling with the display quality for a day in the Hands On Lab, when we finally put the original HoloLens back on, we felt a wow effect and a sense of admiration…

4. OS and MRTK: we had the opportunity to deploy some example scenes from the MRTK release candidate onto HoloLens 2. Most interactions worked smoothly and brilliantly, but we do have several pieces of critical feedback for the Microsoft OS team:

Bloom gesture: please bring it back, or replace it with another unique one-handed gesture. Replacing the already-learned one-handed bloom gesture with a two-handed pulse tap means more user effort. In addition, HoloLens 2 failed to recognize my wrist many times when I tried to bring up the menu, possibly because of my dark jacket sleeve.

Fingertips: we wanted to try using both the index and middle fingers for typing on a keyboard, but couldn’t access the code or prefab in the MRTK to do this. What’s the point of showing fully articulated hand tracking if we can use only two fingers for interactions?

Dead spots between near and far interactions: for example, we poked the buttons on the Settings slate several times. The button frame showed an animated lighting effect, but the window didn’t switch to the intended tab. We encountered this issue several times. In the end, we learned to step back a little and use air tap under the far-interaction model to click buttons, which was much more responsive.
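The dead spot above is essentially a mode-selection problem: the system has to decide, from the hand’s distance to a target, whether to engage the near (poke) or far (ray + air tap) interaction model. One common fix is hysteresis around the switching threshold, sketched below. This is our own illustration, not the actual MRTK logic; the threshold values are made up.

```python
# Illustrative near/far interaction mode selector with hysteresis
# (our own sketch, not the actual MRTK implementation). Using two
# thresholds instead of one avoids a "dead spot" at the boundary
# where the mode flickers and neither interaction engages reliably.

NEAR_ENTER = 0.30  # meters: switch to near interaction when closer than this
NEAR_EXIT = 0.45   # meters: switch back to far interaction beyond this

def update_mode(current_mode, hand_distance_m):
    if current_mode == "far" and hand_distance_m < NEAR_ENTER:
        return "near"
    if current_mode == "near" and hand_distance_m > NEAR_EXIT:
        return "far"
    return current_mode  # inside the hysteresis band: keep the current mode
```

With a single threshold, a hand hovering right at the boundary would rapidly toggle between modes; the band between the two thresholds keeps whichever mode is already active.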

Input confusion: sometimes people can be less happy with more choices. For example, when a user tries to reposition a Settings slate, they may not remember which gesture to use or where to touch. To move the slate, we tried grabbing it from the top and from the right; in the end, we found that pinching the top bar worked best. With multiple possible interactions, first-time users need tips to remind them what to do.

Keyboard: the transparent sprite keyboard worked okay, but it appeared too close to our faces and rendered on top of our hands. We would much prefer the hand mesh to occlude part of the keyboard when typing, to give a better sense of where our holographic fingers are.

5. Launch title apps: None. We didn’t see HoloStudio, RoboRaid or Fragments on HoloLens 2. Given that the OS shell interactions are still being refined, we weren’t surprised. Luckily, most existing apps could still be run with air tap and gaze.

6. Developer edition: Don Box announced the developer edition at $99/user/month pricing, but with no date. Also included is a $500 Azure credit. This is a very nice gesture from Microsoft to the developer community, many of whom felt left out after the MWC HoloLens 2 announcement targeted only large enterprises.

7. Timing: following the MWC announcement, we had been hopeful that HoloLens 2 would ship in June 2019. Based on points 3 to 6, our educated guess is that shipment to the general public won’t happen until October 2019. That would still be faster than the original HoloLens timeline (announced in January 2015, shipped in April 2016).

8. Easter egg: among many other hardware breakthroughs, Charlie Han revealed a very cool feature of HoloLens 2: the almost invisible microphones at the bottom of the visor glass. These help pick up speech commands in noisy environments. With over 50 developers in the Hands On Lab, we were still able to use voice commands to navigate the Settings menu. We quickly took this for granted, but we definitely remember the stressful times when the original HoloLens failed to respond to our voice commands at conferences and trade shows.

9. Deployment: we were able to deploy our MRRox app to HoloLens 2 with ease. Although we had to switch the build from x86 to ARM, this didn’t cause any significant errors. The app was backward compatible with air tap and gaze, so we could use it even before updating it to the new HoloLens 2 interaction models. Most functions worked properly, as before.

Azure Kinect:

Azure Kinect was the briefly mentioned child at MWC, but it shone like a star at MR Dev Days. Its only Achilles’ heel is the lack of streaming.

1. Hardware: Azure Kinect is a major upgrade from the previous Kinect, with much higher resolution in both the 3D depth sensor and the RGB camera. The whole device felt very polished.

2. SDK: We tried the viewer app built from the open source SDK, and it worked beautifully to demo the core features and settings of Azure Kinect.

3. Body tracking: this stole the show on Day 2. We were blown away by the capabilities. Body tracking provides 3D bones, with a position and rotation for each joint. The presenters comfortably performed a live demo on stage without fail: the presenter was captured in 3D and his animated avatar was rendered from four perspectives simultaneously. The model fitting appeared very accurate, and the model also handled unobservable joints with predictions very nicely. During the entire demo, we didn’t notice any jittery or oscillating animations, meaning the discontinuities and boundary conditions were well handled.
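To make “a position and rotation for each joint” concrete, here is a minimal sketch of consuming such a skeleton to derive bone lengths. This is plain Python with made-up values, not the actual Azure Kinect Body Tracking SDK (whose real API is C); the joint names follow the Azure Kinect naming convention, but the data and the `bone_length` helper are our own illustration.

```python
import math

# Hypothetical skeleton frame: joint name -> (position in meters,
# orientation quaternion as (w, x, y, z)). Values are made up.
skeleton = {
    "PELVIS":      ((0.00, 1.00, 2.00), (1.0, 0.0, 0.0, 0.0)),
    "SPINE_NAVEL": ((0.00, 1.20, 2.00), (1.0, 0.0, 0.0, 0.0)),
    "SPINE_CHEST": ((0.00, 1.45, 2.00), (1.0, 0.0, 0.0, 0.0)),
}

def bone_length(skel, parent, child):
    """Euclidean distance between two joint positions, in meters."""
    parent_pos, _ = skel[parent]
    child_pos, _ = skel[child]
    return math.dist(parent_pos, child_pos)

# e.g. bone_length(skeleton, "PELVIS", "SPINE_NAVEL") -> 0.2 meters
```

Consistency of these derived bone lengths across frames is one simple way to sanity-check tracking stability, which is why the jitter-free demo above was so convincing.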

4. Streaming: Azure Kinect has no Wi-Fi and no onboard CPU. As a result, it cannot stream its sensor data directly to the cloud, which means it falls just short of its namesake ambition. While the device itself has a very nice form factor, it remains impractical to tether each one to a high-end GPU laptop for wall-mounted use cases. Covering several rooms with multiple devices would require fixing wires to the walls and connecting them to a beefy central server.
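A back-of-envelope calculation shows how demanding raw streaming would be even if the hardware supported it. The resolutions below match the Azure Kinect NFOV unbinned depth mode and a 1080p color stream; the 30 fps rate and the uncompressed assumption are our simplifications.

```python
# Back-of-envelope bandwidth for streaming raw Azure Kinect sensor data.
# Depth (NFOV unbinned): 640x576 @ 30 fps, 16 bits per pixel.
# RGB: 1920x1080 @ 30 fps, 24 bits per pixel, uncompressed (a simplification).

def mbits_per_sec(width, height, fps, bits_per_pixel):
    """Raw stream bandwidth in megabits per second."""
    return width * height * fps * bits_per_pixel / 1e6

depth_mbps = mbits_per_sec(640, 576, 30, 16)     # ~177 Mbit/s
rgb_mbps = mbits_per_sec(1920, 1080, 30, 24)     # ~1493 Mbit/s
```

Well over a gigabit per second of raw data per device helps explain the tethered, wired design, though on-device compression or encoding would change the picture considerably.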
