[embedyt] http://www.youtube.com/watch?v=DblhZfc7GQw[/embedyt]

STORYTELLING IN VR:

The scene above is from the upcoming VR film “Dirrogate:DeepVR”. A pseudo-script excerpt might read somewhat like this:

Ext. Dan’s Apt, Terrace, Night.

(ambiance: city sounds, a police siren wails in the distance)

We look at Maya through Dan’s perspective, sitting on one of the sun loungers on the terrace.

Maya

(stifling a yawn, stands up)

“It’s getting late… Let’s go inside?”

She stands up, but we (Dan) do not… and there’s not much the Director can do about it, ruining the envisioned scene in this narrative VR film. The culprit: positional tracking.

Positional Tracking (or “PosiTracking”) and immersive VR filmmaking:

The sheer immersive nature of having an IMAX-like screen strapped to the audience’s faces has filmmakers salivating at the prospect of using such a vast canvas to ‘paint’ their story on. As a medium still in its infancy, there is a temptation to get out into the field with 4 or 6 cameras, capture the whole 360° ‘stage’ while framing the action dead center, and call it a VR film.

It’s only after one experiences *spatial depth* along with the ability to look around with 6 degrees of freedom (rotation and position) that one comes to appreciate the magnitude of what is possible in VR filmmaking. As an example, in the scene above, if the book were replaced with a glass of sparkling wine, giving the audience the choice of leaning in to watch bubbles pop at the surface of the liquid, or admiring the gloss on the cover of a paperback, it makes all the difference in lending those subtle cues we take for granted in real life. Those details are what contribute to the sense of “presence” that filmmaker and audience alike crave.

PosiTracking can arguably be called a cornerstone of interactive VR experiences.

But… we’re getting ahead of the plot. Let’s backtrack. Take a look at the image above and the yellow-circled part that reads: Tracking Origin: Eye Level.

Unity, which is turning out to be a popular platform for creating cinematic VR films (the other major one being Unreal Engine 4), offers two choices for basing positional tracking: Eye Level and Floor Level.
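For developers who prefer to set this from script rather than the inspector, the same choice is exposed on Oculus Utilities’ OVRManager component. A minimal sketch, assuming Oculus Utilities is imported and an OVRManager is in the scene:

```csharp
using UnityEngine;

// Sketch: selecting the tracking origin from script instead of the
// OVRManager inspector dropdown (assumes Oculus Utilities 1.3+).
public class TrackingOriginSetup : MonoBehaviour
{
    void Start()
    {
        // Eye Level: (0,0,0) is wherever the user's head was at calibration.
        // Floor Level: (0,0,0) sits on the physical floor, so standing height
        // comes from the headset's real position above the floor.
        OVRManager.instance.trackingOriginType = OVRManager.TrackingOrigin.EyeLevel;
    }
}
```

Swapping `EyeLevel` for `FloorLevel` gives the seated-on-the-lounger behavior discussed further below.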

If choosing Eye Level, everything looks perfect and as expected. Wearing the headset, and having head-hopped into Dan’s head, we the audience go from being a fly on the wall to first-person POV – something that feels surprisingly natural in a VR film but could scream “amateur” if done in traditional narrative cinema.

All will be fine until….

Maya stands up. Then we’ll find ourselves at the wrong height to look at her face as she speaks. The Director, of course, wishes that we stand up. But this is a film… a movie – and traditionally, movies were (and are) designed to be passive entertainment driven by a linear thread. At most, many people would welcome the ability to turn left and admire bubbles in a wine glass, but asking someone to stand up is a bit of a stretch. Compounding that, look at the yellow-circled area: Use Profile Data.

When people get their VR headsets (at this point usually an Oculus Rift or HTC Vive), a camera is placed at some distance in front of or around them in the room, and a one-time calibration ‘locates’ the user in 3D space. This lets the VR world know whether the user/player/audience is sitting or standing, and the position of their head in 3D space.

This “user profile data” is unique to each person using the headset and is stored in a file. The point being: a 6-foot person will see the VR world from a different perspective than someone of a different height. So if a 6-foot person were to stand, they would tower over Maya, and this angle is NOT what the Director might have in mind while ‘framing’ the scene (even though there are arguments that there’s no such thing as framing for VR).

So,

Scenario 1: Maya stands up, but the audience does not – thereby putting them at the wrong height for the remainder of the scene.

Scenario 2: Maya stands up, and so does the audience – but the user is 6 feet tall, thereby putting them at the wrong angle again.

Scenario 3: The Director disables Use Profile Data and instead elects to place the camera manually, at the intended level of the final part of the scene, i.e. a little above Maya’s eyeline when she stands. This way, if the movie begins with the audience sitting, we are a little above her, looking down at her, but when she stands we are at the correct, desired framing.

All is good in this case – unless the audience (we) decide to stand up. Then we really tower over her, thanks to positional tracking still kicking in, but now with the additional camera-height offset.

If we decide to use the Floor Level tracking origin, it is supposed to put us at nearly the correct height, as if we were sitting on the other sun lounger. I say ‘supposed’ because it never really has that pinpoint accuracy, thanks to other settings that come into play, such as the player collider’s skin width, slight discrepancies in model scale, etc. – all terminology deserving of its own separate article.

Again, even if we got all variables to coincide perfectly with our real-world position, there is no guarantee that the audience will stand when she does, thus breaking the scene again.

Say Ahhhhhh! – The bane of PosiTracking:

There is another highly undesirable side effect of using positional tracking – the Patrick Swayze, or Ghost, Effect. Oculus Studios has a different, but equally valid, definition for it.

Due to the less-than-optimal (in my opinion) way positional tracking is implemented with the Oculus Rift in the Unity game engine (UE4 suffers this too), an audience can literally lean into and see the insides of geometry. In Maya’s case, up close, the audience can lean in and see her tonsils. There is interior geometry for the mouth because she speaks, and even at a medium close-up it would look unnatural to remove mouth geometry. Another area where this undesirable effect crops up is being able to stick one’s head into walls and furniture.

There are crude workarounds suggested – such as fading the view to black when a collision is detected between the user (the camera) and geometry in the VR world – but I find it a less-than-optimal solution.
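For the curious, that workaround is usually a small trigger collider riding along with the head-tracked camera, darkening a full-screen black quad whenever the head enters scene geometry. A minimal sketch (the "Geometry" tag and the quad setup are illustrative assumptions, not a specific API):

```csharp
using UnityEngine;

// Sketch of the fade-to-black workaround: a trigger sphere follows the
// camera; entering tagged scene geometry fades in an unlit transparent
// black quad parented just in front of the lens.
[RequireComponent(typeof(SphereCollider))]
public class HeadCollisionFade : MonoBehaviour
{
    public Renderer fadeQuad;    // unlit transparent black quad before the lens
    public float fadeSpeed = 8f;
    float targetAlpha;           // 0 = clear view, 1 = fully black

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Geometry")) targetAlpha = 1f;  // hide clipped interiors
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Geometry")) targetAlpha = 0f;  // restore the view
    }

    void Update()
    {
        // Ease the overlay toward the target alpha each frame.
        Color c = fadeQuad.material.color;
        c.a = Mathf.MoveTowards(c.a, targetAlpha, fadeSpeed * Time.deltaTime);
        fadeQuad.material.color = c;
    }
}
```

The abruptness of this fade mid-scene is exactly why it feels less than optimal for narrative work.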

So…

What’s a VR filmmaker who wants to go beyond shooting flat 360 video and calling it VR – or even shooting a stereoscopic 360 movie – to do? Once audiences experience what’s possible with a true hybrid VR film, there is every reason to believe Directors will have a hard time convincing themselves to fall back on vanilla 360 video when creating narrative VR films.

[embedyt] http://www.youtube.com/watch?v=Usaq0ovH-hs[/embedyt]

Hybrid VR films and Positional Tracking:

There is no denying it: filmmaking is evolving at a rapid pace. The VR cinematographer, filmmaker, and Director will soon have to learn the tools and techniques of the trade to appeal to mass audiences, or manage their art much as celluloid film currently co-exists with DCPs.

Already, FILMENGINE is encouraging VR filmmakers to register interest. FilmEngine is certainly set to make in-roads in this space.

Positional tracking will be equally important when shooting with cameras such as the Lytro Cinema 755-megapixel camera above. It might be used to capture the subtle and fine nuances of human actors and green-screen them into a CG world, or a whole real-life scene could be captured and merged with photogrammetry- or LiDAR-produced digital film sets.

There is an option to disable positional tracking, currently with a caveat – Oculus hasn’t yet responded to my query as to why the scene goes monoscopic when the switch (which they provide in their Unity utilities) is disabled. For Dirrogate:DeepVR, I will probably end up disabling positional tracking while, of course, retaining rotational tracking. It seems a small price to pay for not letting people kneel down and admire the fine textures of Hazelwood Terrace, while still offering enough ‘deep VR’ at this stage in the evolution of cinematic VR.

Any VR filmmakers out there? What are your thoughts?

* During discussions with Oculus on their forums, I pointed out the less-than-optimal implementation of their “Use Positional Tracking” toggle, and while they initially said it was “intentional”… over the course of the discussion they agreed, and a new implementation will be out by mid-July 2016.

Unity developers are cautioned: simply unchecking the “Use Positional Tracking” toggle will result in games such as circle shooters having reversed (pseudo) stereo in the rear 180° on final builds.

Cinematic storytellers hoping to place the camera at an exact location by disabling positional tracking via the switch will have no success. Custom code will be needed to accomplish this, for now.
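One common form that custom code takes (a sketch, not the official Oculus fix) is to counter the head translation every frame: attach a script to a parent of the VR camera and shift the parent by the inverse of the tracked head position, so rotation survives but the eye point never moves from where the Director placed it.

```csharp
using UnityEngine;
using UnityEngine.VR;   // Unity 5.x namespace for VR input tracking

// Sketch: cancel the positional component of head tracking while keeping
// rotational tracking. Attach to a parent of the tracked VR camera.
public class LockCameraPosition : MonoBehaviour
{
    void LateUpdate()
    {
        // Read the head offset the runtime applied this frame...
        Vector3 headPos = InputTracking.GetLocalPosition(VRNode.CenterEye);
        // ...and subtract it in local space, so the net camera movement is zero.
        transform.localPosition = -headPos;
    }
}
```

Running this in `LateUpdate` matters: the counter-offset has to be applied after the VR runtime has written the tracked pose for the frame.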

**Update**

As of Aug 24th, 2016, this issue can be marked as solved, using Unity 5.3.6p1 and Oculus Utilities 1.7.0. I would like to thank the staff at Oculus and their dev forum (@vrdaveb for fixing this issue, and @cybereality). In upcoming “Think in 360” workshops and masterclasses, I will show cinematographers how positional tracking can be selectively disabled for CUs and re-enabled when the scene is over.

I believe the fix has not yet been rolled out to Unity 5.4.