
HypeVR may be poised to alter the 360 video landscape with their depth-mapped, volumetric video system that lets VR users move in, out, and around the captured scene. Watch Fox futurist Ted Schilowitz give one of the first real-time demonstrations of the technology.

We’ve followed HypeVR for some time now, first reporting on their incredible-looking, LiDAR-powered depth-mapping camera rig back in early 2015, and again just recently, after the company released the first-ever look at footage captured with its technology.



HypeVR’s proprietary system uses 14 RED Dragon 6K video cameras mounted on a rig to capture a 360-degree field of view. Recording currently at 60Hz (with 90Hz planned), the definition of the resulting footage, once stitched, would probably be impressive enough in and of itself, but there’s more. HypeVR’s rig also carries a Velodyne LiDAR scanner, capable of capturing up to 700,000 points of 3D depth information every second at a range of up to 100 m.
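To get a feel for why the resulting data set is so large, here is a back-of-envelope estimate of the rig's raw capture rate. The 14-camera count, 60Hz frame rate, and 700,000 LiDAR points per second come from the figures above; the 6144×3160 sensor resolution, 12-bit raw pixel depth, and 16 bytes per LiDAR point are assumptions for illustration, not published specs.

```python
# Rough raw-capture data rate for a HypeVR-style rig.
# Figures from the article: 14 cameras, 60 fps, 700,000 LiDAR points/sec.
# Assumed for illustration: 6144x3160 "6K" frames, 12-bit raw pixels,
# 16 bytes per LiDAR point (xyz floats + intensity).

CAMERAS = 14
WIDTH, HEIGHT = 6144, 3160       # assumed 6K sensor resolution
FPS = 60
BYTES_PER_PIXEL = 1.5            # assumed 12-bit raw

video_rate = CAMERAS * WIDTH * HEIGHT * BYTES_PER_PIXEL * FPS  # bytes/sec

LIDAR_POINTS_PER_SEC = 700_000
BYTES_PER_POINT = 16             # assumed point record size
lidar_rate = LIDAR_POINTS_PER_SEC * BYTES_PER_POINT            # bytes/sec

print(f"video: ~{video_rate / 1e9:.1f} GB/s")   # tens of GB/s uncompressed
print(f"lidar: ~{lidar_rate / 1e6:.1f} MB/s")   # tiny by comparison
```

Under these assumptions the cameras alone produce on the order of 24 GB of raw data per second, while the LiDAR stream is comparatively negligible, which is exactly why the delivery-format questions raised later in this piece matter.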

The practical upshot of all this is that the captured data allows any recorded scene to be reassembled and ‘played’, with the scene responding in real time to a viewer’s movements – this means parallax within a video and even the ability to move in and out of the scene.
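The parallax effect described above follows directly from having per-point depth: when the viewer moves, nearby points shift across the image more than distant ones. The snippet below is a minimal illustrative sketch of that idea using a simple pinhole projection; it is not HypeVR's actual playback pipeline, and the focal length and point positions are arbitrary.

```python
import numpy as np

def project(points, cam_pos, f=1000.0):
    """Pinhole projection of world-space points seen from a camera at
    cam_pos looking down +Z (illustrative; ignores camera rotation)."""
    rel = points - cam_pos                 # points in the camera frame
    return f * rel[:, :2] / rel[:, 2:3]    # (u, v) image coordinates

# Two points straight ahead of the viewer: one near (2 m), one far (50 m).
pts = np.array([[0.0, 0.0, 2.0],
                [0.0, 0.0, 50.0]])

before = project(pts, np.array([0.0, 0.0, 0.0]))
after = project(pts, np.array([0.1, 0.0, 0.0]))   # viewer steps 10 cm right

shift = np.abs(after - before)[:, 0]
# shift[0] (near point) is much larger than shift[1] (far point):
# that depth-dependent difference in image motion is parallax.
```

A flat 360 video has no depth per pixel, so every pixel would shift identically and the scene would feel painted onto a sphere; the depth information from the LiDAR scan is what makes the differential motion possible.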

The HypeVR team have just released a video featuring 20th Century Fox futurist Ted Schilowitz, who, as it happens, co-founded RED, the company that builds the cameras on HypeVR’s rig. Schilowitz holds a small tablet playing a scene apparently captured with HypeVR’s technology. As he moves around, the video (a looping coastal scene) can be seen to respond to his shifts in position, displaying both parallax and movement in and out of the scene.

It’s impressive stuff, and the applications for virtual reality video are blindingly obvious. However, as with every apparent breakthrough, especially one still largely unseen by the media or public, questions remain. How are HypeVR’s likely vast quantities of data reassembled in such a way as to be transferable to and rendered on consumer devices? Is the scene ultimately distilled into a series of simplified geometric surfaces extrapolated from the LiDAR depth information, and will it therefore look poor under close inspection?

We’ll have to wait to find out, but it does seem as if HypeVR – up until now perhaps a victim of their own choice of company name – is nearly ready to show the world what they can really do.