Here is an update on my quest to stay on top of all things “holo”: HoloLamp and RealView “Live Holography.” While the two have really nothing to do with each other, both claim the “holo” label with varying degrees of legitimacy, and happened to pop up recently.

HoloLamp

At its core, HoloLamp is a projection mapping system somewhat similar to the AR Sandbox, i.e., a combination of a set of cameras scanning a projection surface and a viewer’s face, and a projector drawing a perspective-correct image, from the viewer’s point of view, onto said projection surface. The point of HoloLamp is to project images of virtual 3D objects onto arbitrary surfaces, to achieve effects like the Millennium Falcon’s holographic chess board in Star Wars: A New Hope. Let’s see how it works, and how it falls short of this goal.

Creating convincing virtual three-dimensional objects via projection is a core technology of virtual reality, specifically the technology that is driving CAVEs and other screen-based VR displays. To create this illusion, a display system needs to know two things: the exact position of the projection surface in 3D space, and the position of the viewer’s eyes in the same 3D space. Together, these two provide just the information needed to set up the correct perspective projection. In CAVEs et al., the position of the screen(s) is fixed and precisely measured during installation, and the viewer’s eye positions are provided via real-time head tracking.
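In code, this boils down to building an off-axis frustum from the screen’s corner positions and the tracked eye position. Here is a minimal sketch of that construction, following Kooima’s well-known “generalized perspective projection” recipe (the function names and the near/far values are my own, not from any particular CAVE software):

```python
import numpy as np

def frustum(l, r, b, t, n, f):
    """Standard off-axis perspective matrix (same layout as glFrustum)."""
    return np.array([
        [2*n/(r-l), 0.0,        (r+l)/(r-l),  0.0],
        [0.0,       2*n/(t-b),  (t+b)/(t-b),  0.0],
        [0.0,       0.0,       -(f+n)/(f-n), -2*f*n/(f-n)],
        [0.0,       0.0,       -1.0,          0.0]])

def screen_projection(pa, pb, pc, pe, near=0.1, far=100.0):
    """Projection for a screen with lower-left corner pa, lower-right pb,
    upper-left pc, seen from eye position pe (all in the same 3D space)."""
    pa, pb, pc, pe = (np.asarray(p, dtype=float) for p in (pa, pb, pc, pe))
    vr = pb - pa; vr /= np.linalg.norm(vr)   # screen right axis
    vu = pc - pa; vu /= np.linalg.norm(vu)   # screen up axis
    vn = np.cross(vr, vu)                    # screen normal, toward viewer
    va, vb, vc = pa - pe, pb - pe, pc - pe   # eye -> corner vectors
    d = -np.dot(va, vn)                      # eye-to-screen distance
    l = np.dot(vr, va) * near / d            # frustum extents at the
    r = np.dot(vr, vb) * near / d            # near plane
    b = np.dot(vu, va) * near / d
    t = np.dot(vu, vc) * near / d
    R = np.eye(4); R[:3, :3] = np.vstack((vr, vu, vn))  # rotate screen into xy plane
    T = np.eye(4); T[:3, 3] = -pe                       # move eye to origin
    return frustum(l, r, b, t, near, far) @ R @ T
```

With a fixed, measured screen, the only per-frame input is `pe` from the head tracker; recomputing this matrix every frame is what keeps the perspective correct as the viewer moves.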

As one goal of HoloLamp is portability, it cannot rely on pre-installation and manual calibration. Instead, HoloLamp scans and creates a 3D model of the projection surface when turned on (or asked to do so, I guess). It does this by projecting a sequence of patterns, and observing the perspective distortion of those patterns with a camera looking in the projection direction. This is a solid and well-known technology called structured-light 3D scanning, and can be seen in action at the beginning of this HoloLamp video clip. To extract eye positions, HoloLamp uses an additional set of cameras looking upwards to identify and track the viewer’s face, probably using off-the-shelf face tracking algorithms such as the Viola-Jones filter. Based on that, the software can project 3D objects using one or more projection matrices, depending on whether the projection surface is planar or not. The result looks very convincing when shot through a regular video camera:
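As an aside on the scanning step: HoloLamp’s exact pattern sequence is not public, but the textbook approach to structured-light scanning encodes each projector column as a sequence of binary Gray-code stripe patterns; decoding the stripe values a camera pixel observes recovers which projector column illuminates it, which in turn allows triangulating depth. A self-contained sketch of that encode/decode round trip (ignoring real-world thresholding and noise):

```python
import numpy as np

def gray_code_patterns(width, n_bits):
    """One stripe pattern per bit: pattern b holds bit b (MSB first)
    of the Gray code of each projector column index."""
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                        # binary -> Gray code
    return [(gray >> b) & 1 for b in reversed(range(n_bits))]

def decode_columns(captured, n_bits):
    """Stack the observed bits back into Gray codes, then invert the
    Gray code to recover plain projector column indices."""
    code = np.zeros(captured[0].shape, dtype=int)
    for bits in captured:                            # MSB first
        code = (code << 1) | bits
    shift = 1
    while shift < n_bits:                            # Gray -> binary
        code ^= code >> shift
        shift <<= 1
    return code
```

Gray codes are preferred over plain binary because adjacent columns differ in only one bit, so a decoding error at a stripe boundary is off by at most one column.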

HoloLamp’s primary problem is that the result only looks convincing when shot through a regular video camera. The root cause is that its projection is monoscopic, i.e., rendered for a single viewpoint. Unlike video cameras, (most) humans have two eyes and a keen sense of stereopsis. When viewed with two naked eyes, HoloLamp’s illusion falls apart. All monocular depth cues (primarily perspective and motion parallax) will tell the viewer that there are 3D objects floating above the projection surface, but the stereoscopic depth cues (vergence and binocular parallax) will give away that the viewer is looking at a flat image. This is not just theory: I have done this experiment in the CAVE many times, usually to win arguments about the importance of stereoscopy. With stereo enabled, there are highly convincing 3D objects; a moment after turning stereo off, there is nothing.

It’s somewhat worse than looking at a standard non-head-tracked 2D projection, because the monoscopic and stereoscopic depth cues are actively fighting each other, confusing the viewer in the process. The developers address this issue in a funny way in a YouTube comment reply: “This is NOT a stereo effect its one that works even when you have one eye shut so it records on camera perfectly.” They’re missing the flip side, namely that it only works when you have one eye shut or are recording on camera.

An extension of this issue is that HoloLamp also does not work for multiple users, as it can only create one viewpoint shared by all of them. Meaning, do not expect to be able to put a HoloLamp on a table between you and a friend and play holographic chess. I’d wager you’d get a better effect from using a tablet computer running a regular 2D or 3D chess game.

I am not trying to dismiss HoloLamp or the impressive technology behind it — after all, the AR Sandbox is based on that same technology — but trying to rein in unrealistic expectations. While it has many potential applications, HoloLamp is not a holographic projector. Or, in other words, try before you buy.

RealView Holographic Augmented Reality

The other new holo-thing is an announcement by RealView about turning their previous desktop holographic display into an augmented reality headset, pointed out to me by Road To VR‘s Ben Lang. This new and apparently yet-unnamed thing is similar to Magic Leap‘s AR efforts in two big ways: one, it aims to address the issue of vergence-accommodation conflict inherent in current VR headsets such as Oculus Rift or Vive, or AR headsets such as Microsoft’s HoloLens; and two, we know almost no details about it. Here they explain vergence-accommodation conflict:

Note that there is a mistake around the 1:00 minute mark: while it is true that the image will be blurry, it will only split if the headset is not configured correctly. Specifically, that will not happen with HoloLens when the viewer’s inter-pupillary distance is dialed in correctly.

Unlike HoloLamp and pretty much everybody else using the holo- prefix or throwing the term “hologram” around, RealView vehemently claims their display is based on honest-to-goodness real interference-pattern based holograms, of the computer-generated variety. To get this out of the way: yes, that stuff actually exists. Here is a Nature article about the HoloVideo system created at MIT Media Lab.

The remaining questions are how exactly RealView creates these holograms, and how well a display based on holograms will work in practice. Unfortunately, due to the lack of known details, we can only speculate. And speculate I will. As a starting point, here is a demo video, allegedly shot through the display and not post-processed:

I say allegedly, but I do believe this to be true. The resolution is surprisingly high and the quality surprisingly good, but the degree of transparency in the virtual object (note the fingers shining through) is consistent with real holograms (which can only add to the light from the real environment shining through the display’s visor).

There is one peculiar thing I noticed on RealView’s web site and videos: the phrase “multiple or dynamic focal planes.” This seems odd in the context of real holograms, which, being real three-dimensional images, don’t really have focal planes. Digging a little deeper, there is a possible explanation. According to the Wikipedia entry for computer-generated holography, one of the simpler algorithms to generate the required interference patterns, the Fourier transform method, can only create holograms of 2D images. Another method, point-source holograms, can create holograms of arbitrary 3D objects, but has much higher computational complexity. Maybe RealView does not directly create 3D holograms, but instead projects slices of virtual 3D objects onto a set of image planes at different depths, creates interference patterns for the resulting 2D images using the Fourier transform method, and then composes the partial holograms into a multi-plane hologram. I want to reiterate that this is mere speculation.
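To make the Fourier-transform method concrete: for a single 2D image plane, the hologram is (up to physical scaling) just the 2D Fourier transform of the image multiplied by a random diffuser phase, and optical playback performs the inverse transform. A toy numerical round trip, purely my own illustration and certainly not RealView’s algorithm:

```python
import numpy as np

def fourier_hologram(image, rng=None):
    """Attach a random diffuser phase to the target image, then take its
    2D Fourier transform; the far field of this pattern reproduces the image."""
    rng = np.random.default_rng(0) if rng is None else rng
    phase = np.exp(2j * np.pi * rng.random(image.shape))
    return np.fft.fft2(image * phase)

def reconstruct(hologram):
    """Optical playback is modeled as the inverse Fourier transform;
    the eye sees the intensity |field|^2."""
    field = np.fft.ifft2(hologram)
    return np.abs(field) ** 2
```

A real phase-only modulator would quantize this pattern, costing image quality; and in the speculated multi-plane variant, each depth slice would additionally get a quadratic Fresnel phase factor before the per-slice patterns are summed into one hologram.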

This would literally create multiple focal planes, and allow the creation of dynamic focal planes depending on application or interaction needs, and could potentially explain both the odd language and the high quality of the holograms in the above video. The primary downside of slice-based holograms would be motion parallax: in a desktop system, the illusion of a solid object would break down as the viewer moves laterally relative to the holographic screen. Fortunately, in head-mounted displays that screen is bolted to the viewer’s head, solving the problem.

So while RealView’s underlying technology appears legit, it is unknown how close they are to a real product. The device used to shoot the above video is never shown or seen, and a picture from the web site’s medical section shows a display that is decidedly not head-mounted. I believe all other product pictures on the web site to be concept renders, some of them appearing to be (poorly) ‘shopped stock photos. There are no details on resolution, frame rate, brightness or other image specs, and any mention of head tracking is suspiciously absent. Even real holograms need head tracking to work if the holographic screen is moving in space by virtue of being attached to a person’s head. Also, the web site provides no details on the special scanners that are required for real-time direct in-your-hand interaction.

Finally, there is no mention of field of view. As HoloLens demonstrates, field of view is important for AR, and difficult to achieve. Maybe this photo from RealView’s web site is a veiled indication of FoV:

I’m just kidding, don’t be mad.

In conclusion, while we know next to nothing definitive about this potential product, computer-generated holography is a thing that really exists, and AR displays based on it could be contenders. Details remain to be seen, but any advancements to computer-generated holography would be highly welcome.