A Queen’s University researcher is hoping to revolutionize video-conferencing with a human-scale 3D pod that allows people in different locations to communicate as if they are standing in front of each other.

Described as Star Trek-like, the TeleHuman, a decade in the making by Roel Vertegaal, a professor in Human-Computer Interaction, and colleagues at Queen’s Human Media Lab, allows two people to stand in front of life-size cylindrical pods and talk to 3D hologram-like images of each other. Cameras capture and track 3D video of each person, which is then converted into the life-size surround image.

Since the 3D video image is captured in 360 degrees, a person can walk around the pod to see the back or sides of the person they are talking to — unlike the flat, two-dimensional screens used in Skype chats.

“We had a yoga instructor come out and present yoga poses on the display and had other people judge and copy them; with a display like this you can actually do that, whereas with traditional Skype conference you cannot,” said Vertegaal, director of the Human Media Lab.

“In terms of medical imaging, doctors that want to examine patients, musical or dance instruction, there are lots of applications why you’d want this from a serious point of view. But what’s really cool is you have this pod where you can just walk up to someone and boom! There they are — ‘Beam me up Scotty.’

“What you see in the movies is not 3D. You need to be able to walk around objects as you shift your perspective. But this problem you don’t have in a movie theatre because everybody’s sitting still; but if you shift your perspective you’d see something different. And that’s what we’re (overcoming) with this display.”

The team used mostly existing hardware — including an array of Microsoft Kinect sensors, a 3D projector, a 1.8-metre-tall translucent acrylic cylinder and a convex mirror — to create the device, which projects hologram-like images. The images are described as hologram-like rather than holographic because, strictly speaking, the term holographic refers to a specific imaging technique.

They used the same pod to create another application, called BodiPod, which presents an interactive 3D anatomical model of the human body that can be explored in 360 degrees through gesture and speech interaction.

When people approach the pod, they can wave their arm or hand to peel off layers of tissue. In X-ray mode, as users get closer to the pod, they can see deeper into the anatomy, revealing the model’s muscles, organs and bone structure. Voice commands such as “show brain” or “show heart” will automatically zoom into a 3D model of a brain or heart.

Vertegaal said he is in discussions with a company to bring the pod to the commercial market.