I’ve been prototyping and developing VR experiences since we received our Vive in early August. I’m a long-time web and mobile developer, but this was my first time really getting my hands dirty in Unity 3D.

I’m currently working on building a space strategy game, and many of these lessons come from that work. See some images from my game here: http://www.twitter.com/starlight2249/media

VR is challenging

As any VR developer can tell you, one of the hardest things about designing VR applications is the lack of standards and best practices. With so few VR apps out today, there’s a good chance your app is breaking new ground and you will have to spend a lot of time thinking through basic interaction methods. This includes things like: “How will my users pick up and use objects?” and “How will they move around?”

In much the same way that multitouch interfaces created new ways to interact with our computers, VR will enable entirely new types of applications. However, unlike multitouch, VR shares very little in common with existing mouse-based interaction methods. So things like buttons and menus may need to be completely rethought (or even abandoned) depending on the kind of experience you’re looking to create.

VR also introduces the concept of immersion. It’s one of the first mediums that can make you feel like you’ve been transported to another place. But that immersion is easily destroyed by poor UIs or clunky control schemes.

I wanted to share a few learnings, observations and maybe even some emerging best practices. Here are a few of the things I’ve learned through trial and error, or that I’ve noticed in other VR applications.

Locomotion is not yet (and may never be) a solved problem

The Vive allows for ‘room-scale’ experiences, meaning you can walk around within a 2-meter area. Traditional game worlds are much larger than that. To solve that problem, people have tried many things, like letting the user control their movement with a joystick. What they’ve found is that any attempt to artificially move the world around you can cause motion sickness in some people. To imagine what that feels like, picture someone spinning the room you’re currently sitting in while you remain stationary. The only technique known to work for everyone without causing motion sickness is to move the person to another place instantly (which we call ‘teleporting’). The downside is that it isn’t very realistic and can break immersion.
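A basic teleport can be sketched in a few lines of Unity C#. This is just an illustrative sketch under some assumptions: `rigRoot` is assumed to be the parent transform of your camera rig, `pointer` a tracked controller transform, and the `Input.GetButtonDown("Fire1")` check stands in for whatever trigger-press API your VR SDK provides.

```csharp
using UnityEngine;

// Hypothetical sketch: instantly moves the play-area rig to wherever
// the controller is pointing when the trigger is pressed.
public class SimpleTeleporter : MonoBehaviour
{
    public Transform rigRoot;      // camera rig root (assumption about your setup)
    public Transform pointer;      // tracked controller transform (assumption)
    public float maxDistance = 20f;

    void Update()
    {
        // Replace with your SDK's trigger-press check.
        if (!Input.GetButtonDown("Fire1")) return;

        RaycastHit hit;
        if (Physics.Raycast(pointer.position, pointer.forward, out hit, maxDistance))
        {
            // Snap the whole rig in one frame. Deliberately no interpolation:
            // smoothly sliding the world past a stationary user is exactly
            // what tends to cause motion sickness.
            rigRoot.position = new Vector3(hit.point.x, rigRoot.position.y, hit.point.z);
        }
    }
}
```

The key design choice is moving the rig’s root rather than the camera itself, so the user’s physical position within the play area is preserved after the jump.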

Another option is to have no locomotion. Games like Job Simulator just keep you within a box and bring the world to you (in the form of characters that come up to you asking for different things).

Over time we’ll see more consensus about what types of locomotion and experiences people prefer but it doesn’t seem like there’s going to be any magic bullet that solves motion sickness any time soon.

Sketching: Forget the rectangle

Typically, if I want to sketch some ideas down, I start by drawing a rectangle and use that to help me figure out what goes on a given screen in my application and how big it will be. In VR, there is no rectangle. A user can look anywhere they want in a full 3D scene; there is no way to contain that in a box. Instead, I find it more helpful to start by drawing a stick figure of the person doing some action. In traditional 2D apps, everything is relative to the size of the screen. In VR, everything is relative to the viewer. So it helps to start by thinking about what the user is doing and experiencing.

UI: Put controls where the user needs them, when they need them

A definite trend so far is to make interfaces spatial, three-dimensional objects, as opposed to flat billboards that you interact with. If a user needs to perform some actions on an object, have those options appear around the object when the user gets near it or touches it. Fixed UIs and HUDs attached to the user’s head just don’t work well in VR: they fly around as the user tries to interact with them, and they quickly become annoying and distracting.
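The “appear when the user gets near it” behavior can be sketched with a simple distance check in Unity C#. This is a minimal illustration, not a definitive pattern: `hand` is assumed to be a tracked controller transform, and `optionsRoot` a child object holding the 3D option buttons, both hypothetical names.

```csharp
using UnityEngine;

// Hypothetical sketch: shows contextual options around an object only
// while the user's hand is within reach, and hides them otherwise.
public class ProximityOptions : MonoBehaviour
{
    public Transform hand;          // tracked controller (assumption)
    public GameObject optionsRoot;  // 3D buttons arranged around this object (assumption)
    public float showDistance = 0.4f; // metres; tune by feel

    void Update()
    {
        bool near = Vector3.Distance(hand.position, transform.position) < showDistance;
        if (optionsRoot.activeSelf != near)
            optionsRoot.SetActive(near);
    }
}
```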

UI elements can be attached to the controllers or to a virtual ‘belt’ on your body, but keep in mind that the user has to look away from the action to access these, so it may not always be appropriate.

UI elements that appear in front of you but stay fixed in the world seem to work well for things like notifications. Some other things that work: Meta representations like coloring the user’s view to indicate the user’s status or placing outlines around objects you can interact with.

A bit of balancing is necessary, since we don’t want to fill the 3D space with UI elements all the time. It tends to distract and break immersion. Most experiences so far have taken a fairly minimal approach to UI.

Gestures can be cool but they aren’t obvious

Flinging an object away to delete it is cool and simple once you know about it, but don’t count on your users figuring that out without some explicit instructions. Provide alternate ways of doing things (like a close button) for people who haven’t learned the gestures yet. Simple is often better than novel, especially for your novice users.
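A fling gesture usually comes down to checking the object’s velocity at the moment of release. Here’s a rough sketch of that idea; the method name and threshold are my own inventions, and how you obtain the release velocity depends on your SDK (SteamVR’s controller devices expose one), so it’s simply passed in here.

```csharp
using UnityEngine;

// Hypothetical sketch: treats an object as "flung away" (and deletes it)
// if it is released moving faster than a threshold speed.
public class FlingToDelete : MonoBehaviour
{
    public float flingSpeed = 2.5f; // metres/second; tune by feel (assumption)

    // Call this from your grab/release code with the controller's
    // velocity at the moment the grip is released.
    public void OnReleased(Vector3 releaseVelocity)
    {
        if (releaseVelocity.magnitude > flingSpeed)
            Destroy(gameObject);
    }
}
```

Whatever the threshold, the point above stands: pair the gesture with a visible alternative like a close button so novice users aren’t stuck.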

Traditional UIs still work fine

You can still do a flat canvas where the user makes selections by treating their controller like a laser pointer. You should take care not to put UIs too far away or make them too small — people’s hands naturally shake a bit (try it with a real laser pointer), and that can make clicking on small UI elements tough.
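The laser-pointer interaction is essentially a raycast from the controller plus a visible line. A minimal sketch in Unity C#, assuming the script sits on the controller, a `LineRenderer` is configured in the editor, and `Input.GetButtonDown("Fire1")` stands in for your SDK’s trigger check:

```csharp
using UnityEngine;

// Hypothetical sketch: draws a laser from the controller and forwards
// a "click" to whatever the laser is hitting when the trigger is pressed.
public class LaserPointer : MonoBehaviour
{
    public LineRenderer line;       // assumed set up with 2 positions in the editor
    public float maxDistance = 5f;  // keep canvases close -- hands shake

    void Update()
    {
        RaycastHit hit;
        bool didHit = Physics.Raycast(transform.position, transform.forward,
                                      out hit, maxDistance);
        Vector3 end = didHit ? hit.point
                             : transform.position + transform.forward * maxDistance;

        line.SetPosition(0, transform.position);
        line.SetPosition(1, end);

        // Replace with your SDK's trigger-press check.
        if (didHit && Input.GetButtonDown("Fire1"))
        {
            // "OnLaserClick" is a hypothetical handler on the hit object.
            hit.collider.SendMessage("OnLaserClick",
                                     SendMessageOptions.DontRequireReceiver);
        }
    }
}
```

Keeping `maxDistance` short and targets large is the practical mitigation for hand tremor mentioned above.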

Almost every game with a ‘settings’ screen still defaults to this kind of 2D UI.

We need new ways to direct the user’s attention

Oftentimes the user will be looking somewhere and will miss cues that they can take an action somewhere else. Sounds, arrows, lines, and lights can help direct the user’s attention, though it seems this is an area where solutions are still evolving.

Don’t show what you can’t track

VR developers seem to agree that you shouldn’t show the user’s body or arms, since only the controllers and head are tracked. People are conscious of their own body position and it breaks immersion if you see your elbow pointed left when it’s actually pointed right. A best practice is just to show the user’s hands or items they are holding. It’s ok to show bodies of other players; that won’t break immersion since you have no idea how they are posed in real life. Hopefully, future VR systems will provide a greater degree of body tracking, since it will help create much richer social experiences and immersion.

Testing is cumbersome

Currently, I write code without wearing the VR headset, but whenever I want to test something I need to put the headset on to try it out. This leads to a lot of awkward fumbling, which definitely eats into productivity. I’ve created a Unity script that allows you to work within the Virtual Desktop application, so you don’t have to keep taking off your headset when you want to write some code and test it (get it here).

Representing your work is difficult

Screenshots do VR experiences very little justice because they don’t convey a sense of scale or presence (two of the most compelling features of VR). Mixed reality videos, which feature live video of people superimposed over the virtual world, can help a great deal, but this also means a lot more work and planning needs to go into putting together promotional materials.