Interactions in mixed reality should not only be intuitive and fast, but also fun! How magical can an operating system be? And how do you even launch an app?

Note: This is a translation of my original post, which I wrote in German.

In everything I’ve read about Magic Leap, it is always emphasized how important the magic is in the experience. I took a look at the Magic Leap videos and public patents and tried to find out what we can expect from Magic Leap here.

Logging in to the operating system

The glasses should be able to realize that they are sitting on my nose and that I want to use them now. I should then be welcomed with a start screen where I can identify myself. This start screen should not be a flat “screen”, but could float freely in the room; my eyes could trace a pattern along a grid to unlock the device.

Actually, it should work even more easily. Since eye movement is necessarily already tracked, the pattern of the retina could also be used to identify the user and log them in immediately.
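The idea above, matching a scanned retina pattern against enrolled users, could be sketched roughly like this. Everything here is invented for illustration: no real Magic Leap API is known, and real biometric matching is far more sophisticated than this toy similarity function.

```python
# Hypothetical sketch: log the wearer in by matching a retina scan against
# enrolled templates. Class names and the matching logic are made up for
# illustration; real biometric systems use robust feature matching.

class RetinaLogin:
    def __init__(self, enrolled_patterns):
        # enrolled_patterns: dict mapping user id -> stored retina template
        self.enrolled = enrolled_patterns

    def match(self, scanned_template, threshold=0.9):
        """Return the best-matching user, or None if nobody clears the threshold."""
        best_user, best_score = None, 0.0
        for user, template in self.enrolled.items():
            score = self.similarity(scanned_template, template)
            if score > best_score:
                best_user, best_score = user, score
        return best_user if best_score >= threshold else None

    @staticmethod
    def similarity(a, b):
        # Toy similarity: fraction of matching features.
        matches = sum(1 for x, y in zip(a, b) if x == y)
        return matches / max(len(a), len(b))
```

With a high threshold, an unknown wearer is simply rejected and could fall back to the grid-unlock screen.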

In the best case, I only put on my glasses and am in my mixed reality environment.

How do I start an app now?

I see a classic menu in a white window and think “D’oh”!

Really! My gut tells me that this is not only the less cool, but also the less intuitive way to do it in MR.

Microsoft has made a first attempt at a main-menu UI for the HoloLens. I think it really is only a first attempt.

There is a gesture to call up the main menu, the “bloom” gesture: the user turns his palm upwards and opens his fingers like a blossom. This is good and easy to learn. The main menu itself, however, is a normal Windows menu with tiles. Totally boring.

The HoloLens is a developer device, not yet a polished end-user version. The good thing is that Microsoft can gain a lot of experience and get feedback from thousands of developers.

(Un)touchable Controls

I’ve been through the Magic Leap patents and found some very cool ideas. Two of them particularly appeal to me as a main menu or app drawer.

The user looks at the palm of his hand, and several small app icons appear on his fingers. Each finger represents a major category. The thumb, or a finger of the other hand, selects an icon by touch. The nice thing is that there is actual touch feedback, since we are touching our own hand.

This concept works best with a small number of apps. With more options, the good old iPod control could be used: instead of a click wheel, you simply use the palm of your hand. The icons scroll past your fingertips, and you can again select by touch.
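The palm drawer described above can be sketched as a small data model: five finger slots showing a window into the full app list, with a scroll offset driven by the iPod-style circular swipe. The names and structure are my own invention, not anything from the patents, and gesture recognition itself is not shown.

```python
# Sketch of the palm "app drawer": five fingertip slots, a scrollable window
# over the app list, and selection by touching a fingertip. Illustration only.

FINGERS = ["thumb", "index", "middle", "ring", "pinky"]

class PalmDrawer:
    def __init__(self, apps):
        self.apps = apps          # full, ordered list of app names
        self.offset = 0           # index of the first visible app

    def visible(self):
        """The (up to) five apps currently shown on the fingertips."""
        window = self.apps[self.offset:self.offset + len(FINGERS)]
        return dict(zip(FINGERS, window))

    def scroll(self, steps):
        """Advance the window, clamped to the list bounds."""
        max_offset = max(0, len(self.apps) - len(FINGERS))
        self.offset = min(max(self.offset + steps, 0), max_offset)

    def select(self, finger):
        """Touching a fingertip launches the app shown there."""
        return self.visible().get(finger)
```

The touch feedback comes for free, as the text notes: selecting means physically touching your own hand.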

In its first video, however, the company from Florida seems to have implemented at least one other concept: a carousel navigation that is swiped with a hand gesture, rolling the different app categories past. This kind of control strikes me as rather imprecise and is probably still an early prototype. (Maybe they are just mock controls made up for the video.)

Mockup of a User Interaction from Magic Leap

I do not want to wave wildly in the air while squeezed into the subway. The other gestures can be used much more subtly.

Unfortunately, I do not know how difficult it is to recognize gestures performed on the hands. A lot of information is displayed in a small space, and the hand is typically quite close to the body.

The HoloLens seems to have problems with this; there, gestures are always executed quite far out in the room in front of the user’s eyes.

With a sensor system such as the Leap Motion, however, this problem should be solvable.

Totems

There might be additional ways to start an app. Magic Leap has filed several patents for the use of totems: small items that you simply hold in your hand and which contain no technology at all.

A few examples from the patents:

Key fobs with Twitter or Facebook icons depicted on them. Looking at one opens the corresponding app.

At first I found the idea impractical, but on closer consideration the spatial reference is actually quite useful. People think spatially and can remember things in space particularly well. Admittedly, key fobs are still not the most practical totems, since they are usually in my pocket.

There were also bracelets and a ring. Looking at the ring could be connected to my WhatsApp and show the latest messages.

Above the bracelet, the current score and a few statistics of a football match could be displayed during the game.

In principle, you could connect any object to an app and use it as a trigger for additional information.
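In software terms, the totem concept boils down to a registry mapping recognized physical objects to apps, roughly as sketched below. Object recognition itself is assumed, and all labels and names are invented for illustration; the patents describe no such API.

```python
# Sketch of the totem idea: bind recognizable physical objects to apps, and
# launch the bound app when the user looks at the object. Illustration only.

class TotemRegistry:
    def __init__(self):
        self.bindings = {}

    def bind(self, object_label, app, context=None):
        """Associate a recognizable object with an app and an optional context."""
        self.bindings[object_label] = (app, context)

    def on_gaze(self, object_label):
        """Called when the user looks at a recognized object.

        Returns (app, context) to launch, or None for unbound objects.
        """
        return self.bindings.get(object_label)

# The examples from the patents, expressed as bindings:
totems = TotemRegistry()
totems.bind("twitter_keyfob", "Twitter")
totems.bind("ring", "WhatsApp", context="latest_messages")
totems.bind("bracelet", "Sports", context="live_score")
```

An unbound object simply does nothing, which matters: most things you look at should not trigger anything.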

Hey Magic Leap!

Then, of course, there is voice control. Launching apps by voice command works quite well on our smartphones. For this, Magic Leap naturally needs well-functioning speech recognition, but I guess that should no longer be a big problem. On many occasions, however, you would not want to talk into the room. (Even if a lot of people in the subway do not seem to have any problem with that.) There are already technical possibilities to transmit speech silently, but I do not think such a technique will be part of the Magic Leap glasses. For the future, though, it would be very desirable, and for text input it would certainly be very useful as well.

Gaze Control

In combination with an action gesture, I can imagine this as a good addition. Instead of directly touching an icon on my fingertips, I just look at the icon and then execute a click gesture in free space. This roughly corresponds to the abstraction introduced by a computer mouse.

I don’t know whether this will be accepted by users who, in recent years, have become accustomed to working directly on the display without any additional abstraction layer.

Gaze control without an additional click gesture, however, I find too impractical. Either “look at an icon for two seconds and the app starts automatically”, or a sub-menu pops up in which you immediately have to confirm or cancel. Either way, there is the constant anxiety of triggering an unwanted action.

There may be situations where gaze control is helpful. For starting apps, though, the annoyance factor is probably too high.
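The dwell-time variant criticized above is simple to state as an algorithm, which also makes its weakness visible: any sufficiently long glance fires. This is a minimal sketch with invented names; timestamps are passed in explicitly so the logic is easy to follow.

```python
# Minimal sketch of dwell-time gaze selection: stare at a target for
# DWELL_SECONDS and it activates. Exactly the "unwanted action" risk
# described in the text: there is no separate confirmation step.

DWELL_SECONDS = 2.0

class DwellSelector:
    def __init__(self):
        self.target = None   # what the eyes currently rest on
        self.since = None    # when they started resting there

    def update(self, target, now):
        """Feed the current gaze target; returns the target once dwell completes."""
        if target != self.target:
            # Gaze moved to something else (or to nothing): restart the timer.
            self.target, self.since = target, now
            return None
        if target is not None and now - self.since >= DWELL_SECONDS:
            self.since = now   # fire once, then re-arm
            return target
        return None
```

Adding a click gesture would replace the timer check with an explicit confirmation event, trading speed for the certainty the text asks for.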

In all their videos, Magic Leap does not show at all how they interact with these apps. (I’m not counting the first video from Weta.) Apps open and close without any indication of how it was done. Maybe it is simply a prerecorded 3D demo. Too bad that Magic Leap is not giving us any hints here.

Text input

THE big question, however, is how text can be entered effectively without annoying the user. Text input is incredibly important, and at the same time incredibly difficult to implement when there is no haptic feedback. This was initially a problem with smartphones, too: a phone without a physical keyboard was hardly imaginable. Apple found the right spin to make typing reliable and fast. Perhaps we will initially use our mobile phone for text input in mixed reality, at least while on the go.

It would also be possible to display a keyboard on any surface. That is probably not so bad to use, but it needs an area to project onto, which is not necessarily available in the subway. Unless you use your own arm for it, like NEC suggests in this demo.

Otherwise, the keyboard would have to float in the air. This is probably a slow and imprecise form of text input, but for short commands it may be feasible.

Perhaps tapping in the air could also be combined with the gaze position. This might make the detection much more accurate.
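One simple way to combine the two signals, sketched below, is to score each key by its distance to both the (imprecise) tap position and the gaze point, and pick the best. The layout, weighting, and coordinates are all invented for illustration; nothing here comes from Magic Leap.

```python
# Sketch of gaze-assisted mid-air typing: the tap position is noisy, so the
# gaze point is used as a second, weighted vote on which key was meant.

import math

def pick_key(keys, tap, gaze, gaze_weight=0.5):
    """keys: dict mapping key label -> (x, y) center.

    tap and gaze are (x, y) position estimates; lower combined
    distance wins. gaze_weight=0 reduces to tap-only selection.
    """
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def score(center):
        return dist(center, tap) + gaze_weight * dist(center, gaze)

    return min(keys, key=lambda k: score(keys[k]))
```

For example, a tap landing between two keys while the eyes rest on the intended one resolves to the gazed-at key, whereas the tap alone would pick its nearest neighbor.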

There is probably not just one possibility for text input; the operating system keyboard has to be customizable for different use cases.

The advantage of the three-dimensional world is that the size of the keyboard is not limited by the size of a mobile phone display.

To enter a URL, a larger keyboard could be displayed in which all special characters have a visible place of their own. This might make input as fast as on a 5-inch screen. I am really curious how Magic Leap tackles these problems.

So far, the HoloLens only offers pairing with a Bluetooth keyboard. This should definitely be possible in Magic Leap’s system as well, along with pairing with a smartphone for text input.

A lot of nice pictures, but no control?

So much for the main menu and the launching of apps. A lot of possibilities, and probably a few technical impossibilities as well. All these patents do not mean that Magic Leap is actually implementing any of them.

In user interface design for VR and MR apps, we are probably at the stage PCs were at in the 80s, when the command line was the generally accepted form of interaction.

It looks to me a bit as if Magic Leap avoids showing the whole UX side because it is too early to have great ways of doing it.

In the first video there was still a kind of gesture control, but it seemed to me rather staged and pretentious. In the recent videos, only impressive pictures are shown, while the control of the apps is left out. Yet this point is enormously important, and perhaps Microsoft with the HoloLens is already at the limit of what is technically feasible at this time.

There are a lot of possibilities, but the one the customer will accept has perhaps not even been invented yet.

Two or three years ago, I would not have imagined that young people would hold their smartphone horizontally at chin level and speak into it. Yet this has become the socially accepted way to record a voice message or give a voice command on a mobile phone.

Who knows, perhaps it will be even easier for us to talk to our device in public when it takes the form of a small Tinkerbell hovering in front of our eyes. Or maybe that would be even stranger.

Hopefully we will see which approaches Magic Leap is taking in a few more months.

What kind of interaction would you like Magic Leap to implement? Big gestures like in Minority Report, or more subtle ones?