There has been a sense for a while now that Oculus might release some sort of input device, especially after the company acquired Nimble Sense in December. Rumors had begun to swirl around the Oculus subreddit that the company might reveal one at GDC next month. Well, pretty much all hopes of that happening were laid to rest by a comment from Palmer Luckey:

“Don’t get too hyped on the possibility of seeing anything at GDC. VR input is hard – in some ways, tracking hands well enough to maintain a sense of proprioceptive presence is even more technically challenging than getting perfect head tracking. We will show something if and when we get it working well, but we have to avoid showing off prototypes that are not on a clear path to being shipped at the same or higher quality level. Throwing together very expensive or impossible to manufacture prototypes for internal R&D is one thing, using them to publicly set expectations around the near future is another. Not naming anything specific here, but the history of technology is littered with the corpses of companies that overpromised and underdelivered by shipping real products with real limitations that were glossed over in promotional materials. Oculus can’t afford to do that.”

There are a couple of takeaways from this comment. First, we probably aren’t going to see anything input-related from Oculus at GDC. Second, Oculus is remaining steadfast in its ‘we get one shot at this’ mentality, which is a good thing. The mass public can be a fast and harsh critic, and a flop on any front would be a blow to the progress of the industry as a whole, so it is important that Oculus gets it right despite cries for a release. (That being said, I’ll take a Crescent Bay any day.)

The input question is one that a large number of people are trying to solve. Sixense’s STEM system, a Kickstarter-backed input solution, is one such effort. The company will be showing off the latest iteration of the STEM, a full-body tracking system created with VR in mind, at GDC in March. This will be the company’s first time exhibiting on a show floor, and eager backers are still waiting to receive their kits.

Sixense won’t be the only ones showing off a solution at GDC; Perception Neuron will also be debuting its solution at an invite-only event. PrioVR and ControlVR are similar solutions that also had successful Kickstarter runs. YEI Technologies, the company behind PrioVR, will have a presence at GDC but has not confirmed that the PrioVR will be there as well. These solutions all offer full-body tracking, but they also require the user to strap on a number of peripherals before going in, which is not something the everyday casual user will likely want to do.

Other solutions have also been tried, like Leap Motion, which works on technology similar to Microsoft’s Kinect, and TrinityVR, which uses a more PlayStation Move-like approach, but neither of those is complete. Even systems like the Cyberith Virtualizer and Virtuix Omni, which track omnidirectional walking, or WorldViz’s multiple-camera approach that allows for walking, don’t really put all the pieces together to achieve a 1:1, presence-inducing input experience.

One obvious answer people point to when asking why VR input is not yet perfect is the lack of haptic feedback. Companies like Tactical Haptics, Miraisens, and Ultrahaptics have explored different ways to deliver that kind of feedback in a control device, and peripherals like the KOR-FX vest have also been developed to help add that sense to the experience. These solutions each have their merits (although having the Ultrahaptics system on in your house would significantly limit your ability to speak into any sort of microphone), but none of them truly lets you feel, for example, a virtual apple in your hand. Haptics as a technology still has a way to go, and it is a field that will be closely tied to VR in the years to come, but it is not the thing holding up the development of the “perfect VR input system.”

What is truly holding up the creation of that system is that it might not inherently exist. In the real world, we interact with so many different objects in so many different ways. Therein lies the issue at the heart of any single input device: there simply isn’t one that works like a skeleton key for every situation. Each device has its situational merits, and some come close to having nearly universal appeal, but none of them truly creates what University of Barcelona researcher Mel Slater calls the “plausibility illusion.”

According to Slater, there are two levels of immersion in VR. The first is that sense of ‘presence’: the feeling that you are in a virtual world. This is the level of immersion we achieve in the best VR experiences out there, where the virtual world takes over your real one. But there is a level beyond this, one that researchers and developers are striving to achieve universally: what Slater calls the plausibility illusion, or what might more colloquially be referred to as ‘true presence’. This is when you have the impression that the situation is actually happening, so that when you pick up that virtual scalpel to cut into the virtual patient, you feel that a true scenario is taking place.

This is a level of immersion that can only be achieved when a convincing visual is paired with the right input solution for that experience. For example, in a study conducted at Stanford’s Virtual Human Interaction Lab, researchers used an input device that felt like a chainsaw as they instructed participants to cut down trees in a virtual rainforest. The heightened immersion led participants who went through the fully immersive experience to use 20% less paper over the period of observation.

In order to achieve a more fully immersive experience, we will have to pick the input device that is right for the situation, at least until someone solves the monumental task of creating some sort of universal input. To me, the most likely candidate to do this is one that is both far off and still living in the realm of sci-fi (but perhaps not for much longer): brain-machine interfacing. Reading theoretical physicist Michio Kaku’s book The Future of the Mind, one can see that there is a bright future for brain technology. It is not out of the realm of possibility that within the next decade we will be able to interface with VR using our minds, and that within the next quarter century that interface could provide feedback as well (i.e. it sends a signal to the part of your brain that tells you there’s an apple in your hand when you are holding one in VR, and tells your brain that it is feeling its weight). It’s a far-off future, but being able to plug into a virtual world, Matrix-style, is not out of the realm of possibility, and it may be the only way to truly create that plausibility illusion (let’s just hope it doesn’t involve a giant needle to the brain).

There actually are already some experiments being done in this space, and even some VR demos that use brain-machine interfacing. I had a chance to try one such demo at an event in San Francisco. Using a Muse EEG headset and an Oculus Rift DK2, I was able to “calm my mind” to lift a set of rocks in a virtual world. When my mind “became more cluttered,” swirling winds and other cues would signal that my attention was wandering. I put these phrases in quotation marks because it is genuinely difficult to tell how well the current technology was working, and how much of it was… in my mind (sorry). Either way, it is a glimpse of what is to come, because it definitely did have some effect on the experience; it just isn’t yet at the point where it can serve as a reliable control input.
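For the curious, the control loop in a demo like that can be surprisingly simple. The sketch below is purely illustrative, not the demo’s actual code: it assumes a normalized “calm” score between 0.0 (cluttered) and 1.0 (calm), smooths it so momentary EEG spikes don’t make the rocks jitter, and maps it to a rock height once calm crosses a threshold. The function names and thresholds are my own inventions; a real setup would read band-power data streamed from the headset.

```python
# Hypothetical sketch of a calm-score-to-rock-height control mapping.
# The calm readings here are simulated, not real EEG data.

def smooth(prev, new, alpha=0.5):
    """Exponential moving average to damp noisy EEG readings."""
    return (1 - alpha) * prev + alpha * new

def rock_height(calm, max_height=2.0, threshold=0.5):
    """Rocks stay grounded until calm exceeds a threshold, then rise."""
    if calm <= threshold:
        return 0.0
    return max_height * (calm - threshold) / (1 - threshold)

# Simulated readings: a mind gradually calming over six samples.
readings = [0.2, 0.4, 0.6, 0.8, 0.9, 0.95]
calm = 0.0
for r in readings:
    calm = smooth(calm, r)
    height = rock_height(calm)
```

The threshold-plus-smoothing approach matters for exactly the reason described above: raw EEG-derived scores fluctuate constantly, so without damping, the “effect on the experience” would feel random rather than responsive.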

That is ‘years, not months’ away, and with consumer VR’s debut ‘months, not years’ away, we likely won’t see it arrive with a single ideal input solution. Until one exists, consumers will face a tough decision about how much money they wish to invest in their VR setups. There will ultimately be a small section of the consumer base that combines everything, using a STEM, an Omni, and other peripherals to build a deeply immersive setup that works well for a specific segment of experiences. But the majority of consumers will settle on one or two input devices that work, and we will likely see a lot of traditional controls adopted in VR initially (as we have already seen with the Xbox 360 controller). The perfect solution is still a ways off, but what we have now is a decent start.