Full Report

Yes, this is what the place actually looked like. Photo from Gamecrate.com.

Recently, I wrote a short article giving a taste of the things I experienced at SIGGRAPH 2015. This is the full version. That article was written to be reader- and newbie-friendly. This one is not.

I wrote a bunch of journals detailing the various things I experienced and learned at SIGGRAPH. So this write-up was largely written for myself, and then lightly edited for a narrow enthusiast audience. In other words, if you have not been following VR news religiously for the past year, there may be many things here that don’t make complete sense. In that case, please Google it or try finding out for yourself what you’re missing, before making a post about it. Another thing is that I sometimes reference my past experiences with hardware such as the Vive, which I’ve written up my full impressions of (https://www.reddit.com/r/oculus/comments/3d3ngd/revived_at_comic_con_a_full_writeup/).

Note that I also talk about object presence, body presence, limb presence, environment presence and perhaps other types of presence. Some of this is my own terminology, which should be understood intuitively. Environment presence, for example, should mean that the simulated environment feels like a real one that is physically grounded, whereas object presence would be more about the discrete entities placed closer to you that you might interact with.

Also, a link to the list of what was shown at the “VR Village” portion of the floor, for easy access: http://s2015.siggraph.org/attendees/vr-village

Note that booths mentioned without a link in their paragraph are probably part of the VR Village page linked above.

Sunday, August 9, 2015

DAY ONE

The first thing I did as soon as they opened the floodgates was rush over to WEVR because they were running a Vive demo. I’ve tried it before of course, but more experience is always welcome. People were supposed to book appointments, though, which I and a lot of others didn’t even find out about until we got to the booths (I also found out there would be different demos and exhibits on different days). When I got there, apparently the person who had booked the first appointment didn’t show up, so I got to be the first person.

As far as my impressions of it go, there wasn’t any change in them. It’s still how I remember it, and there’s nothing new to point out. Actually, I tried the Vive twice today, once at the beginning, as the early bird I was, and then 3 hours later, as a regular person waiting in line. One time, I just told myself to enjoy the demo. The other time, I specifically told myself to investigate the display, comfort, and tracking very closely, while mostly ignoring the content. Again, nothing really changed for me, and I could reaffirm what I had experienced in the past. One thing I did do this time that I hadn’t thought of trying previously, though, was step off the edge. As expected, going by my previous Vive and DK2 experiences, I was able to do so with no problem whatsoever. Of course, I’m extremely afraid of heights in real life, having skipped heartbeats many times just at the realization that a long fall was somewhere close by. The moment I get that feeling in VR for the first time, I will cry, in both joy and fear.

Oh, yeah, and WEVR doesn’t have a CB. They’re also doing a promotional experience for the John Wick VR game by Starbreeze, which is not actually a part of the game but a standalone promotional experience, and apparently more “passive” relative to the game (all kind of obvious). It’s being developed separately from Starbreeze. With that said, they said they were not sure if it was going to be included with the game or how it was going to be released.

In any case, the Vive wasn’t the most interesting thing there; it was the mid-air haptics and light field display, in my opinion. But before that, I’ll say that one demo really surprised me, and perhaps won the show (on the first day) for me. It was the Neuro experience created for GE by Kite and Lightning.

If you thought each K&L experience steps it up above the last, this one did not disappoint. In fact, it probably gave me the most presence of any demo ever (this gets topped later in the week), possibly even more than any of the demos on the Vive, and it ran on a DK2. Well, OK, that last part might be a bit exaggerated, and the high presence only lasted for a moment. It was when I was being shrunk down, while everything else seemed to be getting huge, just before entering the mind of Reuban. I think something about the scale and intricate detail added something significant in my brain that tipped it over that level, somehow. Of course, a combination of great positional audio, staging, and interaction added to the experience. The presence I felt was more environment presence than any other kind.

That part was my favorite, of course, but almost as impressive was another segment, where you explore an impressively scaled, interconnected weave of neurons that fire off signals like fireworks. It is really one of the most beautiful experiences I’ve seen in VR. My guide said it looked even more stunning on their Crescent Bay headset. The future is looking bright for K&L now that they’re not limited to producing contract work. And if their having a CB says anything about their connections with Oculus, it may be a very exciting possibility for what they will do with motion controllers like Touch.

Now onto the other fun things. Air haptics (through ultrasonics: http://s2015.siggraph.org/attendees/emerging-technologies/events/midair-touch-display) was something I was very interested to experience. As others have described, it feels most like a pinpoint of gently blown air. However, I also think it feels partly like an electrical jolt just at the surface of your skin. It’s surprisingly effective at conveying a sense of having touched something, though not completely, of course. Another thing I saw, but didn’t try because they were having bugs, was what looked like an illusion of haptics through visual cues. It was basically AR hand tracking, but when you tried grasping an object, your virtual hand would not visibly go through it, even if your real hand did. The idea here is probably that, because the convincing tracking is never visibly violated, your brain accepts the contact. I suppose, if this concept could be combined with air haptics, it could be a very effective stop-gap for the kind of haptics that trackers like Leap Motion lack (gloves are a different story). The problem is that air haptics aren’t realistic for this yet, because they require a huge array of transducers for the ultrasonics.

And finally, we have the light field display that has been clickbait-articled to death as solving nausea. I talked to the people running the booth, and of course it’s not what those articles make the purpose seem. They are targeting the fatigue you get when you look at things close up in traditional VR displays, which necessarily implies they don’t think it’s a huge problem as long as we all limit ourselves to experiences that keep things at a distance, but that would be no fun. So how was the display? It worked — kind of. It isn’t a fully accurate light field. It’s a computationally expensive generation of a light field, translated to two flat displays as an approximation. I think they said there were about 25 views rendered per eye to do this. You can learn more about it by reading their documentation. Other people have said similar things and have already talked about the possible limitations of such a display, so I’ll only give my subjective impressions of what it looked like. Note that I’m slightly nearsighted and did not wear glasses. They had an orientation-tracking HMD (which would freeze at seemingly random times). The majority of the projected image was sort of blurry, but I did notice that when I focused on things farther or closer, the other objects would get slightly more blurry, so it definitely worked. Otherwise, I think there was a problem with the calibration. The headset just did not sit well on my face. It could also be that, for the demo, their light field approximation on the double-LCD display system was not using the right calibrations for the calculation.

In the demo, you could turn light field mode on and off to see the difference it makes. With it on, it was as I described, but with it off, the scene would run at a much higher FPS (still terrible, like worse-than-DK1 terrible), and you would see noticeable aliasing; the blur from the light field probably masked the aliasing and hid the low resolution, even though stacked LCDs should give you higher resolution by default. I’m not sure if it was just the difference between low resolution plus aliasing on one hand and blur on the other, but it seemed to me that with light field mode on, objects just seemed more believable and realistic. To be fair, though, when I manually defocus my eyes to blur the image on an HMD like the DK2, it feels much more like I’m looking at something real anyway. This could be because my brain knows generally, but not accurately, what a defocused scene looks like (or perhaps blur isn’t linearly perceived; excuse me, as I have no expertise in this subject), so when I do that with a flat image, my brain thinks the image is behaving like a real one, thereby convincing itself that what I’m seeing is real. Still, I’m wondering just how much accurate vergence–accommodation linking could contribute to presence. I think we can only find out as the technology progresses further, or as reliable studies get done, though that may require the technology to reach a certain level first. And, again, it only really applies to vision at relatively near distances.
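To build some intuition for why many views can reproduce focus cues at all, here’s a toy synthetic-aperture refocus: average a set of viewpoint renders, each shifted by the disparity its camera offset produces at the chosen focus depth. This is only an illustration of the multi-view principle, not NVIDIA’s actual two-layer factorization; the function and all numbers are invented for the example.

```python
import numpy as np

def refocus(views, baselines, focus_depth):
    """Average a horizontal row of viewpoint renders, shifting each by
    the disparity its camera offset produces at focus_depth. Content at
    that depth aligns across views and stays sharp; everything else
    smears into blur, mimicking accommodation."""
    out = np.zeros_like(views[0], dtype=float)
    for img, b in zip(views, baselines):
        shift = int(round(b / focus_depth))  # disparity ~ baseline / depth
        out += np.roll(img, shift, axis=1)
    return out / len(views)
```

Refocusing at the depth of a feature recovers it sharply; refocusing elsewhere smears it across the averaged views, which is exactly the depth-dependent blur I saw in the scope.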

I was also surprised at how fun very simple forms of face tracking can be. There was a demo (http://s2015.siggraph.org/attendees/emerging-technologies/events/affectivewear-toward-recognizing-facial-expression) of upper-face tracking, with a sensor array that works either inside a DK2 or mounted on a pair of glasses, and that also approximates lower facial expressions to produce discrete emotional representations — e.g. happy, sad, laughing, and angry. There were detection and calibration issues, but when it worked, it was a fun little thing, and I could imagine, if nothing more advanced is available, that it would enhance social VR applications. The VR demo was basically Unity-chan trying to mirror your face. If it worked much more reliably, and came in a user-friendly package you could just stick into your DK2, I’d buy it in a heartbeat, at the right price. Remember, though, that it’s not really tracking subtle facial movements accurately. It only tells when you are laughing, smiling, etc. — not sophisticated, but still fun, for me.

Another demo I tried was a Meme-made wireless electrooculography glasses frame (http://s2015.siggraph.org/attendees/emerging-technologies/events/meme-%E2%80%93-smart-glasses-promote-healthy-habits-knowledge-workers). It didn’t detect too well for me either, but apparently their newer version is better. It had some interesting applications, like automatically blurring your monitor after you haven’t blinked for a certain amount of time, to encourage healthy eyes. The tracking is quite low fidelity, as far as I understand, which is why they said it can’t be compared to the likes of Tobii’s eye tracker, but it may yet have some use if it improves enough. Current eye trackers still need to improve how fast they track your eyes, especially for saccades, and electrooculography should be very fast since it uses electrical signals. Therefore, some clever sensor fusion and prediction could possibly make for a very VR-friendly eye tracking system. Or perhaps it doesn’t work like that and I have no idea what I’m talking about (this is probably likely)!
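The kind of fusion I’m imagining would look like a complementary filter: integrate the fast but drift-prone EOG velocity signal between the slower, drift-free camera samples. A sketch under my own assumptions (the function name, signals, and alpha value are all made up for illustration):

```python
def fuse_gaze(camera_gaze, eog_velocity, dt, alpha=0.95):
    """Complementary filter: each step integrates the fast EOG eye
    velocity, then pulls the estimate toward the latest (laggier but
    drift-free) camera gaze sample. alpha sets how much to trust the
    integrated EOG path versus the camera."""
    estimate = camera_gaze[0]
    fused = []
    for cam, vel in zip(camera_gaze, eog_velocity):
        estimate = alpha * (estimate + vel * dt) + (1 - alpha) * cam
        fused.append(estimate)
    return fused
```

In a real system the two streams would arrive at different rates and the camera term would correct accumulated EOG drift; this sketch just shows the blending idea on aligned samples.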

There are a few other things I got to check out, but nothing that really caught my attention as much as these. Some others, like the redirected walking demo, had bugs that couldn’t be addressed in time, so I couldn’t try them.

DAY TWO

The first thing I tried was a Morpheus demo (http://s2015.siggraph.org/attendees/events/mighty-morphenaut-multiplayer-collaboration-vr), which was apparently only showing on this day. It was really quite fun, and demonstrated a telepresence application with a robot in space. It had multiplayer, but I don’t think it was meant to demonstrate social presence as much as to show that you could cooperate with others simultaneously in that environment and situation. It didn’t really give me any sort of presence, but it was still fun. I find almost any demo with physics interactions using good motion controllers fun. In the demo, we would do things like press buttons just by moving our robot hands, grab vials by pressing the controller triggers — which made the robot hand clench — pass those vials to each other, put them in narrow compartments and juggle them around in zero gravity, and finally try holding up a long piece of some sort of spaceship part together. That last part went OK, but would be vastly improved if there were force, or at least torsion, feedback (I’m looking at you, Tactical Haptics).

After that, you could try a more realistic situation, in which time lag would be simulated. This revealed how challenging living with lag could be: you find yourself moving very slowly, learning never to take any action that risks reacting too slowly (objects would float away to their literally unreachable doom). For example, I could no longer reliably toss one of the canisters to my other hand. You had to keep holding onto it, position your other hand over it, grasp it, and then release the first hand’s grip, all as separate motions. A quality-of-life feature that helped in this context was being able to see a ghost representation of your robot’s hands moving without latency, so you would see the actual hands being dragged along a second later.
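That ghost representation is easy to picture as nothing more than a fixed-length pose buffer; this is my own guess at the mechanism, with hypothetical names:

```python
from collections import deque

class DelayedHand:
    """Render the user's pose immediately as a 'ghost', while the
    robot's actual hand replays the same poses delay_frames later."""
    def __init__(self, delay_frames):
        self.buffer = deque()
        self.delay = delay_frames

    def update(self, pose):
        self.buffer.append(pose)
        ghost = pose  # drawn with no latency
        if len(self.buffer) > self.delay:
            actual = self.buffer.popleft()  # the lagged robot hand
        else:
            actual = self.buffer[0]  # still warming up
        return ghost, actual
```

With, say, a 2-frame delay, the actual hand trails the ghost by two updates once the buffer fills, which is the dragging-behind effect I saw.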

The next demo was redirected walking (http://s2015.siggraph.org/attendees/emerging-technologies/events/making-small-spaces-feel-large-infinite-walking-virtual). One of the demonstrators said the space was apparently 7x7 m, but it seemed they were tracking an area slightly smaller than that. My experience with it wasn’t perfect, but in the end, probably not bad at all. At times, I did notice that I was being turned in a harsh way: when walking in a direction, I would feel myself walking straight but getting turned in VR. Maybe a third of the time I spent walking had this noticeable redirection. The rest of the time, it wasn’t noticeable. When it was noticeable, though, it didn’t really bother me, I think. I didn’t feel any hint of nausea from it, nor did I mind consciously correcting for the harsh turning. Factors like visual and vestibular noise could explain this. In all honesty, I feel that I could live with it, especially given the benefit of actually being able to walk in VR without needing a football field of space. The real question is whether the conscious correction, for when you’re noticeably being redirected, can become subconscious, and then whether that truly adds to presence.
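Mechanically, redirection of this sort boils down to injecting a small extra virtual rotation while you walk physically straight, steering you along an arc so a large virtual space folds into a small physical one. A minimal sketch; the step length and gain are purely illustrative numbers, not anything the demonstrators stated:

```python
import math

def redirect_yaw(virtual_yaw, step_length, curvature_gain):
    """Add a small extra virtual rotation proportional to distance
    walked, so the user physically curves while feeling straight."""
    return virtual_yaw + step_length * curvature_gain

# Illustrative numbers: 20 steps of 0.5 m with a gentle 3-degrees-per-meter gain.
yaw = 0.0
for _ in range(20):
    yaw = redirect_yaw(yaw, 0.5, math.radians(3.0))
print(math.degrees(yaw))  # total virtual rotation injected over a 10 m walk
```

When the gain is small enough, the injected turn stays below what you consciously notice; the harsh turns I felt would correspond to the system cranking the gain up to keep me inside the tracked area.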

And then I finally went to Holojam’s station. I actually tried it twice that day, mostly because the first time, I had a lot of demo-breaking tracking and connectivity bugs, and finally an overheated phone, so I wanted to give it another chance. That other chance turned out great, though not perfect like the Holojam video demonstrates. It still had very noticeable tracking latency and judder, as well as limbs popping in and out and flying all over the place, but that didn’t happen too much anymore. I was able to draw all around in 3D space, and see my feet, my hands, the controller, and other people in the same space. The body parts weren’t exactly where they were in real life, because they weren’t calibrated. I confirmed this by touching and being touched by a fellow Holojammer. Still, our hands were close enough to their real positions that we could sort of clumsily high-five each other. I also traded and dual-wielded the paint brushes with him. I tried playing hopscotch with my drawings (which didn’t work because my virtual feet, again, weren’t calibrated to the same positions as my real ones). We created practically a room full of spiraling clutter, which was fun to look around in. Perhaps one problem with the social experience was that hearing people was hard. For me at least, it was hard to link what I was hearing in real life with the avatar in VR. We weren’t wearing headphones, but somehow it felt like my hearing of other people was being muffled or confused. Perhaps it kind of was, though, since the HMD might contribute to a subtly off HRTF. I don’t think I got any presence from this demo — maybe the slightest bit of social presence, but that’s it. The tracking was really just not up to snuff for the demo.

So that’s the really new and notable stuff for today, but I found out a few things that I didn’t on the first day. I went back to the light field display booth and hung around a bit. I found out that transparent OLED displays may exist and could be used for it, but it sounded like the person didn’t know for sure. I also looked a bit more at the demo displays and found that I sometimes had a hard time focusing on a certain depth in the light field ’scope. Actually, it seems to me that each LCD has its own screen door, and this makes your eyes want to focus at the screen door’s depth more than at depths slightly beyond it. In addition, the screen door now looks like it is attached to objects at the same depth, as if it were physically there in the world rather than a general visual filter over your vision. This could be due to the low-resolution panels being used, and the effect might go away with higher resolutions. It’s a subtle effect.

I went back to the haptic-touch-through-visual-constraint demo and found out that I was wrong about what they’re trying to do. They were actually just visually restraining your grip and letting the pinch of your own two fingers act as the haptic feedback. The idea here is that the object is small enough that, when your fingers are visually constrained, they do not feel like they are moving further than the constrained position. I tried the demo, but I couldn’t identify any strong feelings. Unfortunately, they were using one of those Sony HMZ HMDs, so there wasn’t much presence in the first place. They really need a modern HMD and a Leap Motion or something. At least they acknowledged that further investigation would indeed use those technologies.
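The visual constraint itself is essentially a clamp: the rendered finger gap never drops below the object’s width, however hard your real fingers squeeze, so the pressure between your own fingertips stands in for haptic feedback. A trivial sketch with hypothetical names and units:

```python
def constrained_pinch(real_finger_gap, object_width):
    """Clamp the rendered gap between fingertips so the virtual
    fingers visibly stop at the object's surface, even when the
    real fingers keep closing past it."""
    return max(real_finger_gap, object_width)
```

E.g. squeezing down to a real gap of 0 around a 2 cm object still renders a 2 cm gap, while any grip wider than the object passes through unchanged.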

Actually, I was conflicted on whether or not I should report on their exhibit, because like many of the other booths at SIGGRAPH, they just weren’t doing stuff with really cutting edge HMD technology that could leave an impression. I decided that the idea of visual haptics was worth discussing.

Anyway, I think I learned something that I suspected but hadn’t looked into, which is that a wide, full field of view is pretty important for the feeling of presence. The periphery of the eye is much more easily tricked into accepting that what it’s seeing could be real than the more discriminating fovea is. More periphery probably comes with more presence. I say this because there was a dome projection art display that I decided to check out randomly, and it was incredibly immersive, occupying nearly my whole periphery when I looked at the center. I have yet to try something like StarVR or other wider-FOV HMDs, though, so perhaps it doesn’t translate over as well as I would hope, so I’m withholding judgement. Someone should do a study on how much presence you get when you wear a pair of regular goggles in real life, once we can measure presence in humans, and do it easily.

Finally, I’d like to emphasize that I think social VR experiences with highly interactive elements through accurate motion controls can be the funnest thing ever. To me, it brings out child-like creativity and experimentation. I would love to see Tiltbrush multiplayer. Or Holojam with the Vive. Even if I get little presence, social or otherwise, the experiences are still extremely fun to mess around in, simply due to the freedom that is given by the headset and controllers. (be aware that this is not a paragraph added in after revision of my text; this was here when I wrote my journal on the day of)

DAY THREE

Another interesting day for me. First thing I went to was the tightrope walking simulator using the Morpheus. I hoped to maybe get the deep feeling in my gut that tells me I’m somewhere high. Unfortunately, I still didn’t get any presence, or feel scared from the height, or anything like that. The visuals were pretty though, and I got a great sense of scale looking down and to the horizon of the cityscape.

I went back to the light field display booth again because the Nvidia guy who really knew about the research was finally back, so I asked him about using anything other than LCDs, and his response was that there was effectively nothing in the near term. I confirmed with him that they really are only contributing research for now.

The major Exhibition event started today, so I went to check out the things there, leaving the VR village and other stuff in that Hall behind for another time.

I took note of SMI, and tried their eye tracking for the DK2 as well as on AR glasses (by Epson, also at the show; I didn’t find their tech worth reporting on). There was noticeable latency between when my eye moved and when the gaze cursor moved. They said it was 50–80 ms of latency, running at 60 Hz. Their internal prototypes are apparently capable of 2–3 ms latency end to end. They state that they will keep improving the technology for VR-specific applications (foveated rendering mainly), and that they have as-yet-unannounced partners who will work with them. The tracker they were showing is not targeted at consumers. I forgot to ask if they would be revealing anything soon, but probably not, anyway. Another thing I noticed about the tracking was that if your eye wasn’t centered very well, it would get jittery and less accurate at the periphery, so there’s a sweet spot for eye tracking. They said it has to do with the lens distortion. It’ll be interesting to see how this changes with the different lenses in CV1 and the Vive. Otherwise, the tracking just worked. You didn’t need to calibrate it, although calibration gave you slightly better accuracy. For anything but foveated rendering, the technology works effectively, and I could see myself badly wanting one of their DK2 eye trackers if only it didn’t have the “professional” cost.

So after that I checked out some other stuff. I found out a few things I didn’t know before about a 3D audio plugin (which worked pretty well), and Stingray, at Intel’s booths. But before that, I was surprised to find out that CB was being demonstrated at the show, by AMD. They were running the Back to Dinosaur Island 2 demo and the Wright Brothers first flight experience by Zypre (https://community.amd.com/community/amd-corporate/blog/2015/06/17/the-wright-brothers-from-kitty-hawk-to-virtual-realty). What was surprising to me was that there was absolutely no line to speak of, and I eventually just straight-up tried CB 3 times in a row. The first time in Dino-land, my guide asked me to let go of the handles 2–4 minutes in, and I dropped. Fun experience. Then I went through the whole demo, and after that I tried the Wright brothers demo.

My overall impressions are that I like this headset better than the Vive kit, though my observations may not all be accurate because I didn’t try the headsets side by side. I felt that it had a similar, or slightly larger, FOV than the Vive. Actually, more than one being “bigger” than the other, it’s that they’re shaped differently. CB is less circular and seems to utilize the corners of the display more, while magnifying the screen a tiny bit less. The end result was that I perceived CB as having a tiny bit more FOV. The SDE was a bit less bothersome on CB, but the optics were also smoother, which might explain that. CB’s display generally felt clearer to me, which might be due to the slightly lower magnification of the lenses, making the perceived pixel density higher.

There were some light ray artifacts like in the Vive, except they were much smoother and didn’t show (to my perception) the ridges of the Fresnel component the way the Vive’s lenses do. The rays look more like a blurring effect than the Vive’s harsher streaks, but I can’t say I prefer either, as both bothered me about the same amount. Both are only really noticeable in scenes with very high contrast, like white against a black background.

I tested the tracking area, and it seemed to me like the FOV of the old CB camera was around 120 degrees and able to track at a depth of at least 6 feet (I didn’t try to walk farther than that, as it seemed like I could bump into someone). The headset is the lightest and most comfortable one I have tried so far. It disappears much better. Audio seemed great to me, even though there was the usual show floor noise — the bass felt quite visceral and at the right level, in particular when I tried out Crytek’s dino demo. The head strap system was really easy and intuitive for taking the headset on and off. I can confirm it’s close to baseball-cap levels, both in comfort and ease of wearing, especially now that I’ve gotten home and compared how comfortable a baseball cap really is.

As for the actual content and what I felt during it, it was nothing short of pure awesome sauce and revelation. Back to Dinosaur Island 2 has the most beautiful landscape I’ve seen and provides a great sense of scale. It might have been the best demo I’ve tried in general, as I got the closest I’ve ever been to being scared of heights in VR: walking back to the cliff I came from and peering over it, and, for a moment, forgetting there was a floor below me, my body compelled to reach out and grab onto the rope for dear life. I had a lot of presence, relatively, for what I’m used to, and if I were to compare it to my experience of getting presence with the Vive, I’d say they provide similar amounts, but of different types. With the Vive, I felt more object presence (obviously). In CB, I felt a more general sense of space and environment presence, which was already quite great on the Vive. There’s probably something, or several subtle things, I’m missing that could explain this difference. It may be that my memory is just being terrible, and the content is affecting it as well. There are too many ambiguous variables to make a solid evaluation. The Wright brothers demo I actually just spent testing out and investigating the hardware, not paying that much attention to the content. Though at one point, the plane came flying towards me, and my body just naturally told me to dodge it by getting down low — and that really made me take note of what was happening in the demo. I actually got some object presence right there. Good stuff.

The Neuro experience, Back to Dinosaur Island 2 demo, and Holojam, are currently my favorite experiences here.

After all of that, there was only one other thing I tried before I rushed back home to take care of something: a tracking technology and peripheral package by Ximmerse. It is indeed similar to the PS Move. It wasn’t bad for a new player in the industry, but they need to improve the precision. The demo they showed on the DK2 was a zombie-killing game with a few different weapons. I could use handguns, grab clips and reload them like in the London Heist demo, use throwing knives, katanas, and an assault rifle. The technology and software honestly weren’t very good, but they were good enough that I could have a lot of fun with the interactions. I was surprised that my throwing knife could stick into zombies, and that I could then pull it out and use it again, even if it didn’t work every time due to some oddities in the zombies’ animations. I can’t emphasize enough how much that little thing stuck with me. I also had fun dual-wielding weapons. So yes, I don’t think it was even sub-millimeter tracking as far as I could tell, but it was 1:1, which seems to me much more important for such non-head tracking. I’m guessing, from what I experienced, that the rate at which the data was being reported was lower than 60 Hz.

After all these demos from the first day to this one, I think I’m absolutely convinced that free, believable interaction using accurate motion controllers is the funnest thing in VR at the moment.

I’m going to use the final days to experience some new ones that weren’t at the venue yet, like Real Virtuality, and also going to try various already tried experiences, just to be sure of my subjective impressions of them, and see if I learn anything more. (and again, these last few lines were originally in my journal almost exactly as they are, except for the next one)

Already from trying some of these things more than one time, you can get things you missed, or come across new observations, something especially true and important for VR hardware.

DAY FOUR

It’s become harder for me to write about my day like this. My mind and body have been absolutely put through the VR masher over the past few days. The show runs for many hours each day, after all — and then I have to spend an additional 2 hours or so by bus and on foot.

This time I didn’t get to the show before opening. I actually got there 10 minutes after it started, and already there was a relatively long line formed over at the Dreamworks booth in the VR Village. Naturally, I went to try that first because the line would obviously get a lot longer. You were put on a saddle, riding Toothless from the How to Train Your Dragon movie. It wasn’t a great experience, but it was fun for a short period. The lesson here is that people should learn to stop using accelerating artificial yaw movement already.

Next was Lamper. Nice little (flying) runner type, arcade-y game. Could see myself wasting time with it, but I rarely have time for that these days. Still has some of that artificial movement. I think this is another nice little piece of content people will want to play for those short periods.

Then I went over to this “cure Fred” VR thing. All that really needs to be said about that experience is that teaching kids certain things can be very effective using VR and clever game design. I could probably write a lot more about this and some of the other stuff I glossed over quickly, but it’s not really anything new. I’m including these because it’s nice to get a brief feel for all the content at the show.

After that I basically just explored the rest of the exhibition, trying CB a few more times at the AMD booth. One new thing I noticed was that in the Wright brothers demo, there were people who talked and clapped (apparently they raised the volume after noticing it was too low the day before), and for a second, I believed it was someone doing those things in real life, before I looked over and realized it was from VR. At that moment, a small switch in my brain flipped, and I got fleeting presence of the people who were clapping and running beside me. It just felt like, well, people were clapping and running beside me. I hadn’t felt that before in VR.

I also noticed more subtle details in the demo, like the rippling of the wings from the wind, and the incredibly believable metallic screech of the propellers getting started. A lot of care was put into this scene, but the demo had you sort of on rails with artificial movement. I was told you could control the movement and the method, but for this demo, they just wanted to get people to see everything quickly.

In Crytek’s demo, a new subtlety I noticed was that when you lean close enough for your VR hands to touch the wall or other objects, they actually touch instead of clipping through, kicking up dirt particles as they move across the cliff. It’s a highly, perhaps even overly, polished demo. Those were some good floating hands feeling up the earth.

Someone else trying the demo accidentally discovered that if you let go of the handles at a certain point, you drop onto a ledge of the cliff and survive, instead of falling straight down to your death. Afterwards I talked a bit with the AMD representative. He said Crytek really wanted to show that games with these kinds of interactive elements can still be really fun even without motion controllers, though it’s sort of assumed motion controllers would make the experience better. I’m inclined to agree. However, the demo was still a mostly passive experience, so those claims should be taken with a grain of salt.

As for the headset itself, I fondled it quite a bit and had gotten so used to operating it that I could slip it on like a baseball cap. There’s almost no friction for me now. The headphones were easy to adjust and could really disappear from the experience completely, because you can lift them slightly off your ears so they don’t even touch them. This helped aural presence a lot (and yes, that’s possible purely as its own self-contained presence, a viewpoint I don’t remember anyone else in the VR sphere expressing). However, this might not be the intended way, or even an easy way, to wear them, as the earpiece positioning system is spring loaded, so the earpieces have a sort of bounce-snap to them. I also forgot to mention that it feels lighter in the hand than both the Vive and Morpheus, and you can feel that it distributes weight over the straps better. The three straps are really easy to adjust and make comfortable once you get used to them: you separate the velcro parts, put on the headset, pull on the straps until the tightness feels right, then lock everything in place with the velcro again. Imagine small pulley systems and you have the idea.

I spent some more time today just talking with other people exploring the exhibition. I got to know Bruce Wright, Disney’s VR guy, which was an awesome encounter. I also got to hear about some interesting experiences from him, and his unique perspectives on them, like Henry (specifically, he said that as a filmmaker, he felt the climax of the narrative came in the middle, and the ending just left you wanting or expecting more; that was his only caveat). We hung around a bit and checked out some booths together. Google Cardboard’s was pretty nice — the Jump content was surprisingly good, and Disney may come to use Google’s services for getting their own content into VR. OK, maybe not, but there’s always a chance something will happen.

Then at some point in the day, I had an appointment with Real Virtuality. This was one of my favorite things, because I got my first really high level of social presence, and presence in general. Unlike most of the other demos with such expensive tracking solutions, this one didn’t have much wonkiness or many bugs in tracking (they were using Vicon, if that matters), while also being quite low in latency. Actually, I didn’t notice any latency at all, though I wasn’t looking for it. The backpack rig was also really light, easy to wear, and dare I say even aesthetically well designed, with a sleek fractal pattern, although the retro-reflective dots didn’t fit in. It wasn’t completely perfect — tracking was lost or bugged at 2 or 3 points in the 4-minute demo — but it held up for me almost the whole way through. They were using IK and an initial T-pose calibration to get your body inside. It wasn’t your own body, though; it was an avatar. The graphical realism of the scene wasn’t bad either, and was enough to convince my brain for a few moments that the virtual representation of my partner was the real person, at least when I didn’t concentrate hard on the models themselves. The movements, and the way you interacted with the other person at an accurate 1:1 scale, were incredibly effective at inducing that social presence. In addition, you could hold a torch, represented by a tracked pole you hold in real life. Seeing your body in VR touch a real-life object that’s also in VR at 1:1 gave me almost complete object presence and some body presence: I believed I was holding the actual virtual object in my hands, and that it wasn’t a pole but a real torch (the flame wasn’t that realistic, but I was able to ignore it). No other demo has given me this much complete presence. I felt like the virtual person was real and there interacting with me, not merely a stand-in for the real person.

The objects felt real too, like I was interacting with real ones (the box wasn’t quite as good at this, probably because its geometry differed slightly from its virtual counterpart). The one type of presence that was more lacking was the sense of place, which you get better in a Vive or CB/CV1. There were also negative things that broke presence: various unrealistic representations, like the IK not exactly matching your body, or parts of your body that simply weren’t tracked in the first place (your fingers, for example). Those, especially the untracked fingers, would break the two types of presence I mentioned, but the presence comes on so strongly that keeping those flaws out of sight, or just not concentrating on them, brings it back very quickly. I tried playing catch with the box and it worked relatively well, but the guide stopped us: if we dropped the box and broke its tracking, it would ruin everybody else’s day, and of course I didn’t want to do that, especially after finding out that Nvidia had a mind-blowing CB demo I couldn’t try because someone broke the HMD.

Before the last demo to report, one thing I have to mention and confirm is how effective the Kinect is for body or limb presence in VR. I tried a DK1 — yes, a DK1 — setup with the Kinect. It gave me the most body and limb presence I’ve ever felt in any demo: much more than in Real Virtuality, and more than with Leap Motion (though I’ve yet to try Image Hands). It had noticeable latency. It was extremely low resolution in both the headset and the point-cloud tracking. But it was my body, and it moved and occupied the same physical place as my real body. Then the demo put a virtual object — in this case a bicycle — into the scene, and I got incredibly high object presence. The bike just felt real because my body shared the same space with it, and I had insanely high body presence. I think the lesson here is that body presence influences how much object presence you get, at least for my brain and physiology. I wish there were an overlay feature built right into the hardware of the VR system that pulls the feed from a Kinect or equivalent, so you could see your body anytime you wished; for me at least, it would raise the overall presence I get in VR by default.

Finally, the last notable thing was the backdoor OTOY Vive demo. It didn’t impress me all that much, but it was still cool to see demonstrated, knowing what it was. In particular, the idea of having a window into another part of the world is pretty fun in itself. They showed some high-fidelity renders that looked nice and coherent across the complete cube map as well. I wouldn’t say it was perfect though, as I could still see things like compression artifacts.

From trying out all the VR things now, I’ve developed an idea of the two central themes I currently value most in a VR experience (this will probably evolve with time, perhaps even tomorrow). In no particular order: number one is freedom of interaction with accurate motion controllers — being able to do whatever you want with them, with a variety of things in the world to interact with. Messing around with things like you’re ten is too much fun. Number two is a wide range of accounted-for, realistic behavior in interactions, because not being able to do something in VR that intuition says should be possible can seriously detract from how much you believe in the world. Sony’s GDC talk gets into this nicely:

https://youtu.be/whH3eVDz95o?t=538

“Although a complex simulation can enhance immersion if everything behaves flawlessly, any element misbehaving can damage the immersion created by all the other elements, so every part has to justify its presence.”

Their definition of “immersion” is different from mine (I say presence), but the quote could also apply to the definition of immersion I follow.

And of course, before all of this, you need to get presence in the first place, but that’s incredibly hard: a lot of things have to fall into place across the content, the hardware, and how you go about tracking your body, tracking objects, and tracking other people, whether in the same space as you or somewhere else. At least, it’s incredibly hard for me. For others, and probably the majority of people, it’s much easier. I’m most likely an outlier in how hard it is for me to get presence.

DAY FIVE

A short, final day. There weren’t many new things to check out this time around. The first thing I tried was an exercise demo by VirZoom, where pedaling a stationary bike controls a virtual ride — which could be anything from a shark, to an ant, to a bike, to a pegasus, the one being demonstrated. I have to say I see the potential, even though I didn’t get presence or really great graphics (they were using a DK2). The control scheme: pedaling faster made the pegasus run faster or fly higher, and leaning left or right turned the creature. I had more fun on this than the Dreamworks thing, and got no hint of nausea. Afterwards, I suggested some improvements, or optional features, like modeling the VR ride to exactly match the bike’s shape and size (which necessarily limits your VR ride to being a bike, or perhaps a hoverbike), while also doing some form of body/limb tracking, like Leap Motion if they come out with a better tracker. That way you might get a lot more presence during your virtual exercise. The representative seemed to take my feedback seriously and said he’d want to implement support for things like Leap Motion in the future. Oh, and the bikes are custom, with their own tracking, so you would probably be buying their equipment. They also tracked the handlebars and their rotation, but found that leaning was a more natural and better-feeling mechanic for turning your ride in VR.

After that, I had an appointment with Nvidia, so I went there and finally got to try the Smaug demo some people at the show were hyping up — and it was indeed amazing. Before that, though, I found out that the broken CB had a torn ribbon cable inside the headset, so its tracking no longer worked. They replaced it with another CB, and now made sure the guide was strict about how people put on and handled the headset.

Actually, the audio cut out hard at one point in the Smaug demo, and a restart fixed it. It was a software bug, but until I knew that, I was worried I had broken their last CB.

So you’re in what looks like a 6x8-foot room, with maybe 6x6 feet of space to move around in (the PC and equipment took up the rest), and it seemed the whole space was tracked. The shimmering of the coins in the cave was pretty cool, with the specular changing as you changed your view and the coins flowing over each other when Smaug came. The most interesting thing about this demo, though, was the chest sitting next to you. It jutted out to maybe hip height, and it felt incredibly real. I got insane amounts of object, or environment, presence from walking around and looking at it, but especially when Smaug came looking for you: you could crouch down and hide behind the chest, out of his view, and peek over it to watch him. I actually tried lying down prone and that worked perfectly. That bit of interaction was the real moment the object’s presence kicked in. Smaug himself looked pretty cool, but ultimately my brain didn’t register his presence, or the presence of the rest of the environment, at all. That said, if I had to choose which demo I liked better, this one or Back to Dino Island 2, I couldn’t pick. BtDI2 had a better and more realistic-looking environment than the Smaug demo, while the Smaug demo had a really fun “gimmick” with the chest being there for you to walk around and hide behind. Despite getting more of one type of presence in the Smaug demo, it just wasn’t as impressive to me as Crytek’s. Then again, I only experienced getting burned alive and killed by Smaug once, versus the 5+ times I almost got mauled to death by pterodactyls.

Oh, and to clear up some suspicions in the reddit community: they said they were not running SLI, and only one of the M6000 cards in the machine was being used. They even showed me that the on/off light was lit on only one card.

After that I didn’t have anything else interesting to see, though I did check out Unigine’s DK2 demo of various space scenes, which amounted to 50 GB of textures. It looked pretty good and realistic, and I would love to see it on a better headset with better controls or some sort of narrative.

Then I went back to AMD’s booth and just hung around until the end of the show. I did find out a few more things while there, and I was also able to try Real Virtuality again, which was just as good as I remembered. Another thing: while some people were going through the Wright brothers experience and reached the part where the crowd clapped and applauded, I clapped along. Afterwards I asked them whether they could tell that someone had been clapping in the real world, not in VR. No one could, though I only tested 3–5 people.

So I may or may not have new details here. First, LiquidVR. They said it’s releasing at the end of the year and will have around 40+ features, compared to the 10–20 currently being implemented. You will be able to use an “infinite” number of cards with their multi-GPU feature, as long as the hardware (motherboard, power, etc.) supports it. They have already tested two together successfully on both Crytek’s and Zypre’s demos. Support for more is still being tested.

They’ve done testing on lower-end cards, and at least for Crytek’s demo, only the R9 Fury X can maintain 90 FPS, while the 390X or 290X might be doable with the DK2. They have an equivalent of what we know as Nvidia’s MRS. They really emphasized wanting the experience to be as plug and play as possible, even without having Oculus’ or any other headset’s SDK installed — they want to hide all of that away in their software (which implies a long, hard road of support, of course). One person in on this discussion suggested AMD make their own VR interface for launching directly into content — in other words, a VR OS — which is an interesting avenue of discussion. They’ve also received shipping confirmation on their Touch dev kits. I wonder if they have them now.

They said Crytek’s BtDI 3 demo might demonstrate their multi-GPU feature at the next CES, though I’m not sure whether he was joking in any capacity, as the context was someone asking whether they were demonstrating, or were going to demonstrate, that feature. CV1 was also mentioned when talking about CES in January. There was no implication that CV1 will be released then, nor that there’s even going to be a BtDI 3 demo, but who knows.

Still, we should note the implication that Crytek could be demonstrating a multi-GPU VR demo. After all, these are the guys who — as the AMD representative confirmed — run the BtDI2 demo at 100% GPU load the whole time. Multi-GPU would mean they could really push the graphics, perhaps to the level we’ve come to expect on regular monitors, but now in VR. Crytek’s VR demos so far have been truly beautiful, in both graphics and art style, so it will be exciting to see what sort of potential gets unlocked when they fully take advantage of multi-GPU and other LiquidVR features.

And that’s all. I had fun talking with a bunch of people for the rest of the day, and pulling in some people to try CB who didn’t even know it was there. Selfish reasons or not, I’m biased toward thinking AMD had the best demo booth. In unashamed honesty, word-of-mouth demos work best when you’re the one getting the word. I mean, I got to try CB around 10 times or more, and had fun each and every time. Thanks AMD.

So I made some conclusions earlier, about things like the importance of interaction, motion controllers, etc., and I’m sticking by them. A new conclusion I’m coming to, though, is that for me it is simply easier to achieve immersion — or better yet, presence — in CB than in the DK2, by default, without having to rely on other things like Kinect or tracked objects. I could swear that if I had played the Wright Brothers experience on the DK2, I wouldn’t have been as eager to duck the plane flying toward me, nor would I have felt a pang of fear of heights at the cliffside in BtDI2, nor the insane chest presence while hiding from Smaug. Something about the display, the optics, the tracking, the comfort, the audio, and everything else coming together really did it for me. I find that some of these things aren’t immediately noticeable when you first try a headset, and surface only as you get more experience with it. That’s yet another reason I’m keeping my judgement of the Vive and other VR hardware relatively reserved, and laying out my impressions very explicitly as subjective experiences. So of course, any preference I’ve voiced up to this point can still be overridden.

With that, I finally end the crazy adventure I went on for 5 days. Thank you to everyone who made this possible. And one last thing…

You can probably tell which piece of swag is my favorite. It’s a custom piece.

And some fondling.