About a year and a half ago, I hurriedly wrote this note to a few members of the Looking Glass team: “I’d like to make that volumetric or light field recording rig we’ve been talking about for a while. I want this for a personal reason and it is time sensitive. My brother Ryan is dying of pancreatic cancer and I want him to be able to leave a message to his newborn daughter that feels more real than anything anyone has ever recorded before.”

Unfortunately, it turns out pancreatic cancer is one of the only forces faster than technological progress, and we didn't manage to pull this off before my brother passed away.

But a lot can change in a year and a half. Due to the breakneck pace of development of both capture and playback technologies in that time, 2020 will be the first year that volumetric and light field recording technologies deeply integrate with holographic displays like the Looking Glass.

This combination of capture and display will lay the foundation for the next century of how we relive our most cherished memories and, soon, how we most deeply communicate with one another around the world.

This is a brief guide to the capabilities and limitations of each of these real-world capture technologies, how they pair with holographic displays like the Looking Glass, and how both businesses and individuals can apply these technologies today.

Approaches

There are four main approaches to high-fidelity, three-dimensional capture of the real world suitable for playback on light field (a.k.a. holographic) displays, each with its own strengths and weaknesses. The four approaches I'll cover here are large-scale volumetric video capture studios, portable/modular depth camera capture, light field capture, and depth capture from phones.

1. Large-scale Volumetric Capture Studios

There are a few dozen volumetric video capture studios operating around the world that work by pointing dozens to hundreds of cameras at a group of people on vast stages. The video feeds from those cameras are then sent to server banks where the imagery is stitched and transformed into three-dimensional meshes (i.e., 3D models) of the captured scenes.
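To make that pipeline a bit more concrete, here is a minimal sketch, in Python with NumPy, of the first geometric step: back-projecting each camera's depth map into 3D points and merging the per-camera clouds into one world-space cloud that can then be stitched into a mesh. The pinhole model, function names, and fusion step are illustrative assumptions, not any studio's actual software.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (in meters) into camera-space 3D points
    using a simple pinhole model. (fx, fy, cx, cy) are per-camera
    intrinsics recovered during calibration."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

def fuse_cameras(views):
    """Merge per-camera point clouds into one world-space cloud.
    Each view supplies its points plus a 4x4 camera-to-world extrinsic,
    also recovered during calibration of the stage."""
    clouds = []
    for points, extrinsic in views:
        homog = np.hstack([points, np.ones((len(points), 1))])
        clouds.append((homog @ extrinsic.T)[:, :3])
    return np.vstack(clouds)
```

In a real studio, the fused cloud then goes through surface reconstruction and texturing to become the playable mesh; that heavy lifting is exactly what the server banks are for.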

A significant benefit of this type of capture is that the resulting volumetric video is cross-platform, playable in AR or VR headsets as well as holographic displays via Unity or Unreal plugins. This also means the content can be modified with visual effects and added to synthetic environments with almost the same ease that a CG artist would find working with virtual characters.

There are—surprise—downsides to this approach, however. Capturing and processing the data at these large studios is costly in both time and money, running from thousands to tens of thousands of dollars per minute of processed volumetric video depending on the quality and scale. Volumetric video capture as a class also handles fine details like hair and optical effects like specularity and refraction poorly, if at all. This tends to make people captured this way feel a bit more like video game characters than like real people.

That being said, volumetric video from these studios is one of the most prevalent types of “holographic” content in the world right now and, due to its ability to run cross platform, continues to expand its reach.

2. Portable/Modular Depth Camera Capture

Just as 2D film first started in large studios and then moved into the hands of vastly more artists as video cameras became portable and more affordable, such is the path being cut with volumetric video capture.

Relatively low-cost and capable depth cameras like the new Azure Kinect are being combined with powerful software toolsets like Depthkit to open up volumetric filmmaking to a wide range of developers and artists, with a resulting quality that approaches and sometimes exceeds that of the large volumetric video studios.

This nimble and modular approach to volumetric video capture can be scaled up or down for productions of different size or duration and is significantly more flexible than booking a large-scale studio. Because of these advantages, we use this approach often in our own Looking Glass labs.

That being said, this more nimble volumetric video capture approach is still subject to the same fundamental fidelity limitations discussed above for the large volumetric video capture studios: volumetric video as a class can't capture details like hair and misses out on key optical effects like specularity. This limitation is balanced by the relative affordability of the cameras and software toolsets, their portability, and the flexibility of the resulting captures from a visual effects standpoint.

3. Light Field Capture

The highest fidelity way to capture the real world in practice today is light field capture, full stop. It’s also the least well known and most difficult to find off-the-shelf tools for.

That’s because light field displays, more colloquially referred to as holographic displays, just hit the market at scale about a year ago. Only with their arrival did this new workflow of light field capture direct to light field display emerge. I believe this is a critical step to achieving the perfect reproduction of life in holographic form.

That’s because the light-field-in-light-field-out approach is fundamentally different from volumetric video capture. In the case of light field capture, the intensity, color, and directionality of rays of light from a real-world scene are captured and then replayed, completely bypassing the middle step of conversion to a 3D mesh required for volumetric video. This results in a fidelity level that simply isn’t achievable with volumetric video capture.
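The ray-based idea can be sketched in a few lines. Under the common two-plane parameterization, a light field is essentially a big table of rays: each ray is indexed by where it crosses a viewpoint plane (s, t) and an image plane (u, v), and playback is a table lookup rather than mesh rendering. The array sizes and function names below are illustrative assumptions, not drawn from any real capture rig or from the Looking Glass SDK.

```python
import numpy as np

# Two-plane light field: an 8x8 grid of viewpoints, each a 64x64 RGB
# image. Sizes here are tiny and purely illustrative.
S, T, U, V = 8, 8, 64, 64
lightfield = np.zeros((S, T, U, V, 3), dtype=np.float32)

def sample_ray(lf, s, t, u, v):
    """Nearest-neighbor lookup of one ray's color. Playback is just
    this lookup, repeated for every ray the display emits -- no mesh,
    so view-dependent effects like specularity come along for free."""
    return lf[int(round(s)), int(round(t)), int(round(u)), int(round(v))]

def horizontal_views(lf, t, u_row):
    """Slice out one horizontal sweep of viewpoints -- roughly the set
    of views a horizontal-parallax holographic display replays."""
    return lf[:, t, u_row, :, :]  # shape: (S, V, 3)
```

The catch, and the reason light field data is harder to edit than volumetric video, is that there is no underlying 3D model to relight or composite: the scene exists only as this dense bundle of rays.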

Light field captures can be static or recorded and played back at 30fps+ as light field video. Looking Glass cofounder and CTO Alex Hornstein has been experimenting extensively in this area over the past couple of years, most recently on expeditions with National Geographic and other explorers looking to capture nearly inaccessible places and endangered species around the globe, with the goal of bringing these places and species to a wider audience with unparalleled realism.

As mentioned above, light field capture isn’t nearly as widespread as volumetric video capture yet—but that’s rapidly changing. Light field capture rigs have started to spring up at conferences around the world, startups are building new systems every day, and many of the big tech companies are getting into the game as well: Google put together a major initiative exploring light field capture tech last year.

4. Depth from Phones

If you’ve read the above and are feeling left out of the paradigm shift from flat media to a holographic future, this last section is intended to deal you back in. Because believe it or not, you probably have a depth camera with you right now. Millions of people do.

Many modern phones, like the iPhone 7 Plus, 8 Plus, X, and 11, along with a number of Android phones like the Pixel 4, have depth cameras built in. These cameras are marketed primarily for Portrait Mode photos with depth-of-field effects, but they give the phones a latent ability to capture real-world depth suitable for holographic displays.

Depth capture with phones is instantaneous, convenient, and pretty damn fun. It won’t get you the same level of quality as the other three approaches described above, and because it is fundamentally a single-camera approach it will leave gaps in the holographic reproduction — but it’s surprisingly good, and it points to a future not far away in which all of our memories are captured and played back with real depth.
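Here is a rough sketch of what a phone's depth capture gives you, and where those gaps come from: one color image plus one depth map, back-projected through a pinhole camera model into a colored point cloud. Pixels with no depth estimate simply drop out, which is the single-viewpoint gap described above. The intrinsics and function name are illustrative assumptions, not any phone vendor's API.

```python
import numpy as np

def rgbd_to_cloud(rgb, depth, fx, fy, cx, cy):
    """Turn one photo plus its depth map into a colored point cloud.
    Pixels with no depth estimate (encoded here as zero) are dropped --
    these, plus everything behind the subject, are the gaps a
    single-camera capture can't fill in."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    points = np.stack([x, y, z], axis=-1)
    colors = rgb[valid]
    return points, colors
```

The resulting cloud looks convincing from near the original viewpoint and hollow from behind, which is exactly the trade-off that makes phone depth capture a complement to, rather than a replacement for, the multi-camera approaches above.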

So there you have it. The real world is coming to a holographic display near you. Not 100 years from now, not 10 years from now. Next year — 2020 — is the year the long-awaited sci-fi dream of recording and playing back holographic memories becomes real.

Shawn Frayne is co-founder and CEO of Looking Glass Factory. A longer version of this article, dedicated to the author’s brother Ryan, with download links for toolsets and details of how to apply the capture methods described to Looking Glass display technology, can be found here: The Memory Machines