First Impressions of a Gesture-Based Interface

As I put on the HoloLens, a 3D spacecraft appears before me. I walk around the semi-transparent structure and peer into its internal mechanisms. Seeing first-hand how ProtoSpace brings data into the physical world (and away from the 2D screen) revealed to me the vast possibilities of immersive technology. I could envision mixed reality transforming the way we work, particularly in domains like engineering, architecture, and scientific research.

I was delighted to be able to walk around the object and manipulate it directly with gesture control. The small set of gestures let me learn the interface quickly and explore the system's functionality.

However, I noticed some limitations to gesture interactions:

1. Demanding precision from users. The air tap gesture demanded precision and fine motor control, which can be exclusionary. For this reason, it doesn’t seem ideal to offer gesture control as the sole interface.

2. Lack of feedback. Gesture controls lack physical affordances, so the interface itself needs to communicate clear and immediate feedback. Without it, users may not realize they’ve activated a certain state (intentionally or unintentionally).

3. User fatigue. HoloLens gestures require users to hold a hand raised at roughly chest height. This becomes fatiguing when combined with active discussion and long periods of standing, as was the case with ProtoSpace.

4. False positives. Hand gestures that occur naturally in conversation were sometimes misinterpreted as system controls. False positives can’t be eliminated entirely, but they can be minimized through design (see the sketch after this list).

5. Inability to support complex interactions. I found the core gestures easy to remember but had trouble with more complex interactions. For tasks that required multiple steps (e.g., navigating through submenus) or precise input (e.g., rotating an object exactly 35 degrees), gestures were not ideal. A hand moving freely through 3D space is simply not very precise.
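One common way to reduce accidental activations is to require a short, deliberate dwell before a gesture commits. The sketch below is a minimal illustration of that idea in Python; the class name, the 0.4-second threshold, and the timing values are all hypothetical and are not taken from ProtoSpace or the HoloLens SDK.

```python
import time
from typing import Optional


class DwellFilter:
    """Commit a gesture only after it has been held for a minimum dwell time.

    A minimal sketch of one way to filter out accidental activations; the
    0.4-second default is an arbitrary example, not a ProtoSpace value.
    """

    def __init__(self, dwell_seconds: float = 0.4) -> None:
        self.dwell_seconds = dwell_seconds
        self._started_at: Optional[float] = None

    def update(self, gesture_detected: bool, now: Optional[float] = None) -> bool:
        """Return True only once the gesture has been held long enough."""
        now = time.monotonic() if now is None else now
        if not gesture_detected:
            self._started_at = None           # hand relaxed: reset the timer
            return False
        if self._started_at is None:
            self._started_at = now            # gesture just started
            return False
        return (now - self._started_at) >= self.dwell_seconds


if __name__ == "__main__":
    f = DwellFilter(dwell_seconds=0.4)
    # A brief, conversational flick of the hand (0.1 s) never commits...
    print(f.update(True, now=0.0), f.update(True, now=0.1))   # False False
    f.update(False, now=0.2)
    # ...while a deliberate, sustained gesture does.
    f.update(True, now=1.0)
    print(f.update(True, now=1.5))                            # True
```

The trade-off is latency: a longer dwell filters out more conversational hand movement, but it also makes intentional selections feel slower.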

My team and I concluded that gesture controls alone were not enough to support the needs of NASA’s engineering teams. We considered voice input as a secondary interface, but the disruption it would cause in a group setting ruled out this option.

We discovered that combining gestures with a physical controller could help engineers carry out more complex tasks. The controller minimized some of the problems we observed with gestures — for example, users could access submenus with the controller rather than having to navigate through the menu using gestures. For ProtoSpace, our team recommended using the Nintendo Joy-Con, which can support a wide range of interactions and more precise ways to rotate and scale 3D objects.
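To make the rotation example concrete, here is a minimal sketch of how a controller's analog stick could drive rotation in fixed increments. The function name, the dead-zone value, and the 5-degree step are illustrative assumptions, not details of ProtoSpace or the Joy-Con API; the point is that discrete steps make a target like 35 degrees exactly reachable in a way a freehand drag through 3D space is not.

```python
def snapped_rotation(current_deg: float, stick_x: float, step_deg: float = 5.0) -> float:
    """Advance an object's rotation in fixed increments from stick input.

    A hypothetical mapping: pushing the stick past a dead zone rotates the
    object by exactly one step, so precise targets are reachable by counting
    flicks rather than by eyeballing a freehand gesture.
    """
    dead_zone = 0.5                     # ignore small, unintentional stick drift
    if stick_x > dead_zone:
        return (current_deg + step_deg) % 360
    if stick_x < -dead_zone:
        return (current_deg - step_deg) % 360
    return current_deg


if __name__ == "__main__":
    angle = 0.0
    for _ in range(7):                  # seven flicks of +5 degrees each
        angle = snapped_rotation(angle, stick_x=1.0)
    print(angle)                        # 35.0
```

In this division of labor, gestures still handle coarse, spatial tasks like walking around and placing the model, while the controller handles the precise, repeatable adjustments that hands in mid-air struggle with.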