By now, you’ve probably heard that 2016 has been dubbed “The Year of VR.”

So when Google News Lab and YouTube offered our video team here at The Atlantic the chance to experiment with GoPro’s new Odyssey 360 camera and Google’s Jump Assembler, we eagerly accepted.

The special occasion? The 2016 Republican and Democratic National Conventions, a once-in-four-years opportunity to capture two significant moments in the U.S. presidential election in 360 VR.

As it turns out, having a stationary, all-seeing camera is a surprisingly liberating experience in the field and opens up possibilities with immersive journalism that we’ll continue to explore as the technology evolves.

Here’s the result … and how we did it:

THE SETUP

The GoPro Odyssey, originally announced late last year, is basically a supercharged version of the custom-built 360 rigs that have popped up over the years. It houses 16 off-the-shelf GoPro Hero4 Black cameras arranged in a flat, circular array with a solid metal base that doubles as a heatsink. A removable plastic cover helps protect the guts of the rig, but as the Google engineers warned us — it isn’t waterproof.

“Tell me, Muse, of that camera of many angles, who wandered far and wide, after capturing the holy citadels of Cleveland and Philadelphia.” — Homer

Each camera is connected to its neighbor by a rear orange housing with two ethernet terminals, one for input and one for output. The first camera in the array (marked by an arrow indicator) has a special terminator and controls power, recording, and card formatting for the entire array.

Each GoPro is positioned sideways and set to record in 2.7K, 4:3 wide mode. This allows the rig to capture as wide a vertical field of view as possible (around 120 degrees), leaving only two small blind spots directly above and below the camera. The color profile on all cameras is set to flat in order to preserve dynamic range for flexibility in post-production.

Who ya gonna call?

The individual GoPros do not have batteries installed due to limited capacity and heat dispersion issues. Power is instead supplied externally through a 4-pin XLR connector on the bottom of the Odyssey. Google supplied us with a meaty multi-battery pack made by Switronix that resembled a Ghostbusters Ghost Trap, but any source that provides sufficient power will work. Although the rig was technically mobile, I’d use that term very loosely considering the whole ensemble weighed at least 35 pounds.

The Odyssey has a standard tripod mount hole on its base, so it can be mounted on pretty much anything that can support its weight (about 15 pounds). Google provided a lightweight MeFoto tripod that, when fully extended, reached the ideal height for a convincing “like you’re actually there” effect in the final video.

Since there isn’t much to adjust, operating the camera is simply a matter of connecting the power cable, turning everything on, and pressing record.

Sixteen blinking, red LEDs let you, passersby, and the rest of the Cleveland P.D. know that you’re recording. Nothing to see here, folks.

Nope, not conspicuous at all.

Sixteen cameras means 16 microSD cards, each capturing a necessary slice of the 360-degree image. Due to the relatively slow USB 2.0 controller on the GoPro, Google provided a 12-slot, Thunderbolt-equipped microSD card reader to externally copy the files for uploading. Dealing with the pile of cards is cumbersome, but I’m pretty sure that future iterations of the design will simplify this process considerably.

16 angles, 16 cards, 16 cases. Gotta catch ’em all!

One upside to having 16 identical, off-the-shelf cameras is that if one breaks, a $400 trip to Best Buy will hopefully make things right.

The downside, of course, is that there are 16 cameras that can stop working at any time. In fact, I wish we’d had a spare on hand when a problem toward the end of our two-week experiment rendered the rig inoperable. When you’re experimenting with new technology, you have to expect a few snags to arise.

IN THE FIELD

Coming from a traditional video production background, I discovered that shooting in 360 requires a bit of a mental paradigm shift.

First and foremost, there is no “off camera.” While shooting, there are two options: Hide carefully, or appear in the video. Reluctant to stray from $20,000 worth of experimental gear, I usually tucked myself underneath the tripod in the camera’s blind spot.

You can’t trump this vantage point.

Secondly, I sought out situations in which the camera could be placed in the center of action: in between protesters and police, in the middle of a crowded park, in the passenger seat of a pedicab. Needing to be in the center of everything, however, often makes the camera itself the center of attention. One major recurring challenge was finding a moment to actually film in between being asked, “just what is that thing?”

Without a way to preview the final image, shooting with the Odyssey can be a gamble. Through trial and error, I arrived at a few rough guidelines:

- Place the camera so that the primary action occurs between roughly 3 and 15 feet away. Any farther, and the 360 experience feels distant and underwhelming. Any closer, and the distortion and stitching artifacts become increasingly noticeable.
- Consider how your shot will look from all angles, since you won’t have control over where your audience is looking (except at the beginning).
- Keep the rig level and stable. Unsteady motion can be at best disorienting and, at worst, nauseating.
- When possible, shoot conservatively in shorter bursts (under 2 minutes). This saves precious time in uploading, stitching, and editing.
- Plan ahead, since moving the rig is not particularly easy.

Limitations aside, there is a certain magic — increasingly absent in the age of Insta-everything — in having to just trust your instinct and hope for the best.

Ready for your close-up, Senator Warren.

I found myself thinking a lot more about how to use perspective and action to create a scene rather than pulling focus, adjusting ISO, worrying about lighting, or framing my shot through a viewfinder. With my eyes and hands freed up, I struck up random conversations with bystanders — some of whom became characters in the scene. That would have been impossible while holding a camera and focusing on the perfect shot.

POST-PRODUCTION PART ONE: STITCHING

Each of the 16 cameras in the Odyssey array captures only a small slice of the 360 panorama. Stitching is the process of blending the individual slices into a seamless visual field that can be viewed as if it were captured in one piece. Typically performed manually, stitching is usually the most time-consuming and frustrating part of the workflow.
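Conceptually, the simplest form of blending two overlapping slices is a feathered crossfade across the overlap region. Here is a toy sketch in Python (hypothetical helper name and grayscale strips for simplicity; Jump’s actual algorithm is far more sophisticated):

```python
import numpy as np

def feather_blend(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Blend two horizontally overlapping image strips with a linear ramp.

    `left` and `right` are H x W grayscale arrays; the last `overlap`
    columns of `left` cover the same scene as the first `overlap`
    columns of `right`.
    """
    ramp = np.linspace(1.0, 0.0, overlap)  # weight given to the left strip
    blended = left[:, -overlap:] * ramp + right[:, :overlap] * (1.0 - ramp)
    return np.hstack([left[:, :-overlap], blended, right[:, overlap:]])

# Two 4x6 strips sharing a 2-column overlap stitch into a 4x10 panorama.
a = np.full((4, 6), 100.0)
b = np.full((4, 6), 200.0)
pano = feather_blend(a, b, overlap=2)
```

A real stitcher also has to warp each slice to correct for lens distortion and parallax before any blending happens, which is where most of the manual labor (or Jump’s computation) goes.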

Google’s attempt to smooth that out is called Jump Assembler, an automated, algorithm-based cloud service that converts the files from the Odyssey into stitched, ready-to-edit footage. We were initially advised that the process might take up to a few days, but in most cases our footage was returned to us within about 6 hours. Done manually, this would be dozens, if not hundreds, of hours of eye-searing labor behind a keyboard and mouse.

Using 32GB microSD cards, I was able to record roughly an hour of footage at a time. The resulting files — all 16 angles — are tagged with metadata to identify to Jump which angle they represent in the array. Following a shoot, I removed the cards carefully, copied them to my laptop’s hard drive using the external card reader and then began the uploading process through a desktop app called Jump Manager.

The hypnotic dance of 12 memory cards transferring simultaneously

A minute-long package of files runs a hefty 6GB (or roughly 360GB/hour). At the conventions, Google made available a gigabit ethernet connection that drastically reduced upload time. With a slower connection, it could have taken days.
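To put those numbers in perspective, here is a quick back-of-the-envelope calculation (the 70% sustained-throughput figure and the 20 Mbps comparison uplink are illustrative assumptions, not measurements):

```python
# Back-of-the-envelope upload math for Odyssey footage.
GB_PER_MINUTE = 6                      # ~6 GB of files per minute of footage
minutes_shot = 60                      # one set of 32 GB cards ≈ an hour
total_gb = GB_PER_MINUTE * minutes_shot  # 360 GB for an hour of footage

# Assume the gigabit link sustains ~70% of its nominal 1000 Mbps.
effective_mbps = 1000 * 0.70
hours_gigabit = total_gb * 8000 / effective_mbps / 3600  # GB -> megabits -> hours

# Compare with a typical 20 Mbps office uplink.
hours_slow = total_gb * 8000 / 20 / 3600
```

Under these assumptions, an hour of footage uploads in a bit over an hour on the gigabit link, versus roughly 40 hours (nearly two days) on the slower connection.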

The stitched h264 .mp4 file created by Jump is more manageable in size and is ready to import into an editing suite such as Adobe Premiere. Normally, Jump delivers two versions: a full-res 8K 30fps file and a 2.7K file, the latter meant to be used as an editing proxy. For this special project, we only received 4K/2160p versions.

Less sophisticated setups typically introduce noticeable artifacts such as distortion and ghosting when a person or object crosses over the visual “seams” of a stitched frame. The stitching results from Jump, however, were excellent. The seams between the angles are virtually undetectable, and the image quality throughout the visual field is clear and consistent, even with notoriously complex surfaces like clouds or water:

I challenge you to find the seams.

The process isn’t perfect. Some details and patterns, such as rows or columns of straight lines, occasionally generated glitches in the final image:

Notice the curtain above her head as she moves.

As expected, it also struggled with objects that came too close (within about 2 feet), creating a disorienting funhouse-mirror effect:

Ghosting/distortion from an object (my head) being too close

Unfortunately, because Jump is automated, there is no way to manually adjust the actual stitching. That being said, the algorithm is constantly being improved, and I’ll gladly trade the occasional wrinkle for the unmatched convenience and speed.

POST-PRODUCTION PART TWO: EDITING

The latest release of Adobe Premiere CC includes a set of features that simplifies editing 360 footage. After importing a stitched .mp4 file into a sequence, Premiere’s default view is a flat layout reminiscent of Earth in the Mercator projection. While distortion is evident toward the edges of the frame, you get a helpful glimpse of everything happening in your 360 frame at once. Because the footage generated by the GoPro Odyssey is stereoscopic (i.e. 3-D), two slightly offset frames are stacked vertically — one for each eye. With the proper headset (Google Cardboard, Oculus Rift, etc.), this creates a sense of depth that adds demonstrably to the immersive effect.
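That over/under layout means each stitched frame is simply two equirectangular images stacked vertically. A minimal sketch of separating the two eyes in Python (hypothetical helper name; by common convention the top half is the left eye):

```python
import numpy as np

def split_over_under(frame: np.ndarray):
    """Split a stacked (top-bottom) stereoscopic frame into per-eye images.

    `frame` is an H x W x 3 array with an even height; the top half is
    conventionally the left eye, the bottom half the right eye.
    """
    h = frame.shape[0]
    assert h % 2 == 0, "stacked frame height must be even"
    left_eye = frame[: h // 2]
    right_eye = frame[h // 2 :]
    return left_eye, right_eye

# A stand-in 8x16 RGB "frame": each eye's image is 4x16.
frame = np.zeros((8, 16, 3), dtype=np.uint8)
left, right = split_over_under(frame)
```

A headset player does this split for you and renders each half to the matching eye, which is what produces the depth effect.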

The other viewing mode — appropriately called “VR mode” — renders an interactive preview of the final 360 experience, allowing you to pan in real-time with the mouse. I switched between the two view modes frequently, creating edits in the “flat” mode and previewing the results in VR mode.

Left: Standard (flat) editing mode; right: VR Mode — Adobe Premiere CC

Although the GoPros capture at 2.7K using the flat color profile, they are small-sensor action cameras and quickly reveal their limitations in pretty much anything but daylight. Nighttime shots in Cleveland, for instance, were noticeably grainy and made even worse by applying basic color correction.

Transitions should be used sparingly, since the brain takes a bit longer to “adjust” to a new shot than with conventional video. Cross-dissolves are the safest, most commonly used option for signaling a shift in time or place.
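A cross-dissolve is just a per-pixel linear interpolation between the outgoing and incoming frames. A minimal sketch (hypothetical helper name; Premiere handles this for you):

```python
import numpy as np

def cross_dissolve(a: np.ndarray, b: np.ndarray, t: float) -> np.ndarray:
    """Blend outgoing frame `a` into incoming frame `b` at progress t in [0, 1].

    t = 0 returns `a` unchanged; t = 1 returns `b`.
    """
    return (1.0 - t) * a + t * b

# Halfway through the dissolve, a black frame and a bright frame
# average to a mid-gray frame.
outgoing = np.zeros((2, 2), dtype=np.float64)
incoming = np.full((2, 2), 200.0)
midpoint = cross_dissolve(outgoing, incoming, 0.5)
```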

As for audio, there have been some impressive advances in spatial audio technology to create a 360-degree audio “image.” Zoom’s popular H2n recorder (also included in Google’s kit) has a setting for capturing spatial audio, but since the Zoom doesn’t integrate directly into the Odyssey, I didn’t have time to experiment with it. I relied mostly on the surprisingly decent built-in audio picked up by the GoPros. For a backstage press conference with Senator Elizabeth Warren, I used my iPhone to get a higher-quality recording and synced it with the GoPro audio track.

Exporting in Premiere is straightforward: The newest release even has an option to insert the required metadata so that YouTube and Facebook can identify your exported video as 360 and display it correctly.
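For tools that don’t write that metadata themselves, Google publishes an open-source Spatial Media Metadata Injector that can add it after export. A sketch of its command-line use (flag names per the project’s README; verify against the current version of the tool):

```shell
# Inject spherical-video metadata into an exported file so YouTube and
# Facebook recognize it as 360; --stereo=top-bottom marks the over/under
# stereoscopic layout produced by Jump.
python spatialmedia -i --stereo=top-bottom exported.mp4 exported_injected.mp4
```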

THE FUTURE OF VR

Many major publishers are experimenting with this nascent medium, producing immersive video experiences that are reshaping digital storytelling. The New York Times — whose documentary film The Displaced was one of last year’s most talked-about 360 films — has mailed out over a million Google Cardboard headsets to its print subscribers in a big bet on the promise of journalism in 360 degrees.

There is undeniably a wow factor when exploring something in 360 for the first time, especially on a mobile device and even more so on a headset. Whether transporting you to a political protest in Cleveland, a war zone in the Sudan, or the International Space Station, no other format available today comes close to being as mentally engrossing.

VR as a medium for news and documentary is still in its infancy. A few extraordinary examples have come along, such as The Verge’s Michelle Obama 360 or The New York Times’ “Seeking Pluto’s Frigid Heart.” In ceding some of the common visual storytelling devices in film — focus, framing, etc. — creators must creatively embrace elements like narration/voiceover, personality, pacing, and graphics to help guide the viewer through an experience. Jessica Brillhart, Google’s official VR filmmaker-in-residence, has shared some insights on the use of attention and storytelling in VR:

The viewer is ultimately the storyteller in this medium, and all a creator can hope to achieve is constructing the best kind of experiential world for that person.

On a side note: Some diehard VR advocates will be quick to point out that 360 video is not technically virtual reality since it isn’t a “virtual” or interactive world. As the field of VR content experiences rapidly expands, often mixing “real” and virtual elements, I would argue the distinction has become moot. For many consumers, 360 videos in a Facebook News Feed or a playlist on YouTube are as valid an entry point into this new frontier as owning an Oculus Rift or HTC Vive. As both hardware and software in the VR space become more sophisticated, we’ll only continue to see more crossover between virtual and real worlds.