Cinematic VR – Film-making:

As the pace picks up on production of “MAYA” – the stereoscopic 360 VR motion comic – insights learned along the way are being shared on the Real Vision Knowledgebase. VR made a splash at the Sundance Film Festival and, going by this site’s visitor logs, visits from Hollywood studios have increased as a result.

This article will focus on some aspects of post-production, though as we will see, production and post, now more than ever, are becoming intertwined in Cinematic VR film production. While “MAYA” is a motion comic, and not a full-fledged narrative VR film, many similarities exist. The motion comic will have a mix of:

Stereoscopic 360 VR scenes – (the difference is like night and day when a 360 scene is seen in Stereoscopic 3D after viewing it in plain 2D)

Compositing of conventional Stereoscopic Video footage within the S3D 360 sphere – (in select scenes)

Sky replacement for moving clouds / atmospheric effects.

Time-lapse S3D 360 shots.

Narrative, first-person driven – (Meathook Avatar)

Voice acting and Sound score.

The first few location shots have come in for evaluation and post-processing – and to put it simply – Cinematic VR 3D-360 production is equal parts art and science. There is and will be lots of hand-tweaking of footage, from corner pinning to mesh warping and more. If stereoscopic 3D production was no joy, this is harder still – most directors, cinematographers and post-production houses will shy away from the medium, at least for the first couple of years while hardware and software manufacturers catch up.

So what’s missing?

No NLE (video editing) software to date has tools and a workflow for handling 360 video in a straightforward manner – (there’s no real-time preview in 360, for instance)

There are no plugins for compositing in Stereo 3D in 360 – custom macros involving some degree of math are needed to write nodes and routines for Nuke and After Effects.

Cameras – the current state of the art is the GoPro in a 3D-printed “rig”. There are of course heavy-duty RED Epics in a radial configuration, but this quickly gets cumbersome when shooting a narrative VR film in S3D. Cinematographers are barely recovering from the backache of conventional beam-splitter rigs.

Forget “jello cam” (CMOS rolling-shutter artifacts) – stitching artifacts from multi-camera rigs are several degrees (no pun intended) more painful, leading even multi-million-dollar-funded Hollywood Cinematic VR companies to “ground” their rigs on follow-up productions after less-than-spectacular first attempts at Cinematic VR.
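Both the missing compositing plugins and the stitch repairs above come down to the same core math: converting between equirectangular pixels and 3D view directions. Here is a minimal NumPy sketch of that mapping (assuming the standard lat/long layout with +z as the forward axis – the function names are mine, not from any shipping plugin):

```python
import numpy as np

def equirect_to_direction(u, v, width, height):
    """Map an equirectangular pixel (u, v) to a unit 3D view direction.
    u in [0, width) spans longitude; v in [0, height) spans latitude."""
    lon = (u / width) * 2.0 * np.pi - np.pi      # -pi .. +pi
    lat = np.pi / 2.0 - (v / height) * np.pi     # +pi/2 (up) .. -pi/2 (down)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.array([x, y, z])

def direction_to_equirect(d, width, height):
    """Inverse mapping: unit direction vector back to pixel coordinates."""
    x, y, z = d
    lon = np.arctan2(x, z)
    lat = np.arcsin(np.clip(y, -1.0, 1.0))
    u = (lon + np.pi) / (2.0 * np.pi) * width
    v = (np.pi / 2.0 - lat) / np.pi * height
    return u, v
```

Positioning a flat stereo video plane inside the 360 sphere, or warping a patch to hide a stitch line, is essentially this round trip applied per pixel (once per eye).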

We’ll (virtually) fix it in Post!

Anyone who’s worked even a single day on location will have heard those infamous words uttered. In Stereoscopic 360 VR production, everything will *need* to be fixed in post. Currently there is no other way. The weeks after the first few shots came in from MAYA were spent figuring out a workflow to compensate for everything from parallax errors to sync issues and alignment. Though everything is still a work in progress, much headway has been made along the way.

Mesh warping and corner pinning will be the order of the day when it comes to stereoscopic 360 footage treatment. Studios are well advised to hire as many Nuke and After Effects artists as they can and lock them down with contracts before the floodgates of VR cinema open.

Designing the Perfect S3D 360 Camera:

While I’m not pleased with a couple of short-sighted employees I dealt with in the marketing dept. (who would much rather I bought two cameras than engage further with the free knowledge I shared with them), there is no denying that only the RICOH THETA has got the design right in creating an all-in-one 360 camera. The latest iteration does video too. These cameras have lenses on the front and back to cover the whole 360 x 180 sphere of view, but more importantly their profile is slim enough to stack side by side, yielding a less-than-human interocular distance – perfect for stereoscopic 360 capture in one shot. The most expensive part is the “trigger”: two Android phones or two iPhones.

Is the camera perfect? In very early discussions I initiated a dialog with employees in Japan and the US…

To tell them to include a way to trigger multiple Thetas via one wireless device – (their app runs on Android and iOS)

There is no way to truly genlock-sync the cameras, but if they connected to a wireless gateway instead of putting out their own SSID, the previous point would have been possible.

The camera’s resolution is not what one would use for “Cinematic VR” production – (MAYA is a motion comic and captured frames will be run through a comic book filter anyway)

The camera is perfect as a “meathook cam” rig, allowing for production of S3D 360 footage rated G, PG-13, R and higher.

So what should the ideal S3D 360 camera encompass?

1. A Ricoh Theta-like profile with a much higher-resolution CCD sensor (or a global-shutter CMOS sensor to avoid rolling-shutter problems).

2. Scan-line-level sync capability.

3. 180° fisheye lenses – yes, I’m a fan of capturing two spheres and inverting the back-facing one (as long as parallax isn’t large or can be dealt with).
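The two-sphere idea can be sketched in code: for each view direction, decide which of the two back-to-back 180° fisheyes covers it and where to sample, inverting (rotating 180° about the vertical axis into) the back-facing hemisphere. This sketch assumes an equidistant (r ∝ θ) fisheye projection and ignores the inter-lens parallax warned about above – an illustration, not a production stitcher:

```python
import numpy as np

def sample_dual_fisheye(lon, lat, image_radius, cx, cy):
    """Given a view direction (lon, lat in radians), pick which of two
    back-to-back 180-degree equidistant fisheyes covers it and return
    ('front' or 'back', px, py) - the pixel to sample in that image.
    (cx, cy) is the optical centre; image_radius is the pixel radius
    corresponding to 90 degrees off-axis (the edge of each hemisphere)."""
    # direction vector: +z is the front lens axis
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    if z >= 0:
        face, ax = "front", np.array([x, y, z])
    else:
        # rotate 180 degrees about the vertical axis into the back lens frame
        face, ax = "back", np.array([-x, y, -z])
    theta = np.arccos(np.clip(ax[2], -1.0, 1.0))   # off-axis angle, 0..pi/2
    r = (theta / (np.pi / 2.0)) * image_radius     # equidistant: r ~ theta
    phi = np.arctan2(ax[1], ax[0])
    return face, cx + r * np.cos(phi), cy + r * np.sin(phi)
```

The appeal is clear even from the sketch: with only two source images, the seam between them is a single ring at 90° off-axis, rather than the web of seams a six-camera rig produces.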

Point 3 above needs expanding on – a lot of people think that, because of the nature of extreme fisheye lenses, there will be loss of stereo in the periphery. They are right of course, but as we’re discussing “narrative” Cinematic VR film-making, it will be very rare to have subjects of interest walk right off the field of view to the extremities while the camera is locked down on all axes. Such a “graduated stereo fall-off” could even be a narrative storytelling device, used by the director.

More likely in narrative Cinematic VR filmmaking, the director and cinematographer want control of the “frame”, so the stereo sweet spot (well, at 120 degrees stereo is hardly a “spot” – it’s a pool) will keep moving as the camera follows the action or subject/object of interest.
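That “pool” has soft edges: for a fixed parallel pair, the baseline a subject effectively sees shrinks with its angle off the forward axis – which is exactly the graduated fall-off described above. A first-order back-of-envelope sketch (the 65 mm human-like interocular is an assumption for illustration, not MAYA’s rig spec):

```python
import math

def effective_baseline(baseline_mm, off_axis_deg):
    """First-order approximation: the stereo baseline projected
    perpendicular to the view direction shrinks as cos(theta)."""
    return baseline_mm * math.cos(math.radians(off_axis_deg))

# with a 65 mm baseline:
#   0 deg off-axis -> 65 mm (full stereo)
#  60 deg off-axis -> 32.5 mm (half strength)
#  90 deg off-axis -> ~0 mm (the "no stereo at the extremities" case)
```

The fall-off is gradual rather than a hard cut, which is why it can read as a deliberate depth choice rather than a defect.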

Unless you’re making 12 Angry Men in VR, it makes sense to ditch the whole multicam-rig approach with its stitch artifacts and explore what a wide/ultra-wide parallel stereo rig can do. Having to stitch only two views has a bonus: no ugly stitching artifacts in the main field of interest while the camera moves. That alone is worth the “no stereo at the extremities” trade-off.

A bigger concern will be parallax compensation in such a stereoscopic rig, and that of erasing the “bulbous fisheye lens” of one camera when seen by the other. This is an issue that needs to be addressed in designing the perfect S3D 360 camera rig.

In MAYA, with a little work, the other Theta’s lens is cleaned up in Photoshop. Using GIMP’s “Resynthesize” it was relatively easy to reconstruct the scene, thanks to the stereo images being loaded as layers.

(Click the image for the full-sized version – suitable for Oculus DK1, DK2 and Gear VR. Gear VR needs it encoded as a movie, as there is no support for stereoscopic 3D still images – though Carmack’s working on it!)

Observations on the production WIP still image:

The output image above is the result of running a wavelet de-noise filter on the Ricoh Theta image. Red Giant’s “Looks” filter for After Effects was also used as part of the scene’s mood.

The girl is a stock render in stereo, from Poser Pro (I am no Poser expert yet, hence the rough garment fit and style.)

Retinal rivalry (there are several types of retinal rivalry in stereoscopic 3D productions) has been reduced by selective desaturation of areas with severe highlight mismatch: the row of lights on the building windows closest to the rail of the balcony, the patch of green lights between the building and the girl’s hair… and more.

This is still a work in progress, so stereo mismatch has not been fully addressed – it will be tweaked further for comfort.

The girl conveniently covers the parallax error between the fisheye lenses.

The nadir has been covered with a black mask to hide the monopod – if big-name Hollywood VR film studios can get away with severely cropping their nadir, we are allowed a small black disk.

Both a corner pin and a mesh warp are used to correct stereo alignment – this still image hasn’t gone through a second quality-check pass yet.

There will be retinal rivalry when trying to “fuse” areas of extreme parallax – for instance, looking at the girl and then trying to resolve the “greenish” area behind her head, or looking at something in the far distance and then trying to resolve her hand/elbow near the door.

Scale – of the girl. Getting proper scale in a 360 VR movie is not trivial. A recommendation is to have some real-world reference to fall back on when designing content; in this case the door handle and the girl’s hand were used as an estimate for scale.

Depth budget is important – very important when planning for post.

There can be no H.I.T. (horizontal image translation) to move the depth volume, as is done in conventional stereoscopic 3D movies. Any translation on the image’s X axis will lead to a visible seam (though I have not yet experimented with any kind of “offset-wraparound”, which may or may not make it possible to apply HIT).
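The “offset-wraparound” idea can at least be prototyped cheaply: rolling the columns of the equirectangular frame wraps pixels around instead of exposing a seam. Note what this really is, though – a yaw rotation of that eye’s view, not a uniform screen-space parallax shift like conventional HIT – so the caution above still stands. A sketch with NumPy (the function name is mine):

```python
import numpy as np

def wraparound_hit(equirect_eye, shift_px):
    """Speculative 'offset-wraparound' shift for one eye of an
    equirectangular stereo pair: roll columns so pixels leaving one
    edge re-enter at the other, avoiding a visible seam. Geometrically
    this is a rotation about the vertical axis, not a true HIT."""
    return np.roll(equirect_eye, shift_px, axis=1)
```

Whether a per-eye yaw offset produces a comfortable depth shift in the headset, rather than just a vergence mismatch, is exactly the experiment still to be done.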

There is a translucent “plane” visible near one corner of the balcony – this is an artifact of using the vignette filter in Magic Bullet Looks (it darkens the image border).

For comparison, here is the non-denoised version of the scene, without the retinal-rivalry fixes. There is one area I can’t discuss due to an NDA – that of compositing in S3D in 360.

If stereoscopic 360 production for cinema is the motivation, “Think in 3D” provides an introduction to the art of stereoscopic 3D filmmaking and is relevant when evolving into this new medium of visual storytelling.