VR! The kids are INTO IT! We’ve been so focused on VR over here at Filament Games I can’t be 100% certain I’m not typing this with goggles on right now.

It’s been really exciting to cut our teeth on our first round of VR projects, and it’s led to a lot of interesting rumination on how to approach the design of a learning game in VR with the ever-shifting hardware capabilities across the entire spectrum of VR platforms.

I thought it would be nice to do a quick overview of how we compare devices in terms of commonalities and contrasts, and then move on to overall design philosophy.

So let’s do that!

How do the capabilities vary?

There are three main axes of VR capability that are most critical for structuring your design approach: computing power, hand capabilities, and spatial capabilities. I’ll dig a little bit into each.

Computing Power

Are you running a Rift or Vive on a two-or-three-grand PC rig? Well then you can get away with more complex environments, characters, and interactions. You still need to be respectful of frame rate and make sure you can deploy on lower-end PCs, but developing for core gaming machines is a different beast than figuring out how to squeeze performance out of mobile-phone-based computing like Google Cardboard or Daydream.

Computing power is of course an ongoing dialogue, with every new device released bringing either new capabilities or new demands. But it’s easiest to draw a line in your thinking between the Rift and Vive in the higher-power camp and mobile devices in the lower-power camp.

If you’re sure you’re deploying on a lower-power device, try to think about how simpler environments and simpler interactions can be benefits rather than drawbacks. Simpler doesn’t necessarily mean worse; it might also mean more universal, more usable, and more approachable for new users!

Hand Capabilities

Both the Vive and Rift offer hand devices that emulate, in differing ways, the element of touch and hand control. The competing mobile platforms instead rely either on “gaze” mechanics (where staring at an object in the environment can activate it), a button on the goggles/phone itself, or even the use of a paired video game controller.

These differences matter a TON! If you’re unsure which device you’re going to target, you definitely need to try all of these out. Particularly important are the differences between the Vive and Rift controllers: they may seem roughly identical, but at least at this moment the Rift controller offers (what I think to be) superior tactile interaction yet inferior hand tracking, with the sensors losing your position more often than the default Vive setup. So a game where you’re trying to eat spaghetti from a plate that just sits in front of you might be better on Rift, while a game that involves waving sparklers in the air to spell words might be a better Vive experience.

This is definitely a moving target. Ideas about how controllers paired with mobile devices can afford compelling THIRD-person VR experiences are exciting, and gaze might not be just a placeholder, but a subtle interaction threaded throughout ALL VR in the coming years of development. Exciting times!
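To make the “gaze” mechanic mentioned above a bit more concrete: it’s commonly implemented as a dwell timer, where an object activates after the player’s reticle rests on it for a moment. Here’s a minimal conceptual sketch (plain Python, not engine code; the class, names, and the two-second threshold are all illustrative assumptions, not any particular SDK’s API):

```python
# Conceptual sketch of a dwell-timer "gaze" activation, the common
# fallback input on controller-less mobile VR headsets.
DWELL_SECONDS = 2.0  # hypothetical activation threshold


class GazeTarget:
    """An interactive object that activates after sustained gaze."""

    def __init__(self, name):
        self.name = name
        self.gaze_time = 0.0
        self.activated = False

    def update(self, is_gazed_at, dt):
        # Accumulate gaze time while the reticle rests on the object;
        # reset the timer the moment the player looks away.
        if is_gazed_at:
            self.gaze_time += dt
            if self.gaze_time >= DWELL_SECONDS:
                self.activated = True
        else:
            self.gaze_time = 0.0


# Simulate 30 frames (~0.1 s each) of the player staring at a door.
door = GazeTarget("door")
for _ in range(30):
    door.update(True, dt=0.1)
# After ~3 seconds of sustained gaze, the door has activated.
```

The reset-on-look-away behavior is the important design choice here: without it, a player who glances across several objects would accidentally trigger all of them over time.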

Spatial Capabilities

I touched (ha!) on this in the section above, but the Rift and Vive offer an understanding of where you are in a space, with sensors tracking your location. The mobile solutions place you in a more stationary setup, where you can look around from your “fixed” location. That location can of course move, but via in-game controls or controller-driven action, not by just waggling your face around as you can with the Rift or Vive.

Spatial capabilities really impact your sense of immersion, but you should think about your specific game features to see if spatial capabilities are necessary for your mechanics. A game involving ducking, crawling, or bobbing and weaving might *need* this, while other games might not.

How are they similar?

Regardless of the contrasts between different VR platforms, they still have a lot in common. Dan White wrote a great piece about how the overall platform of VR impacts gaming and learning, and that analysis holds for all devices. Simply put, all of these devices offer novelty, immersion, and deep first-person experiences that impact people as authentic memories. Crazy stuff!

Design Strategies: simple to robust

So, our current device-agnostic methodology is to conceive of each game on the simplest target device, and then consider how to add interactions beyond that core experience if you wind up working on a higher-fidelity platform. This may sound like you’ll never build a core mechanic that exclusively exploits a high-fidelity feature, but because all of these platforms share the core sensation of reality, introducing an interactive mechanic on top of the assumed core isn’t a compromise. In fact, it’s likely how you’d approach those mechanics anyway, with the team laying down the core environment features before focusing on the custom interactions.

For example, let’s say you want to make a game about the Ice Age. You might decide it will put you in the role of a Woolly Mammoth. Whoa. No matter what device you are building for, you are going to need to spend time on environments, experiment with scale, and pick a style that matches your vision but can be built for VR. If you pivot to a high-performance VR platform like the Vive, you can then think about how the touch controllers might provide things like realistic trunk control, or perhaps a trampling minigame! If you instead stay with Daydream, you can still focus on the immersive spaces and narrative.

It’s exciting times for VR. New capabilities, experiences, and challenges are emerging every day. I can’t wait to see the hardware advancements in this coming year, and the new experiences that break ground over, and over, and over. Thanks for reading, we’ll keep you updated!

Have VR thoughts you’d like to share with us? Send ’em over on our Facebook or Twitter!