First responders could send a pair of camera-equipped drones or robots into a burning or unstable building, place them in separate locations, and let the software take over. Running on a laptop with dual NVIDIA K20 GPUs, it fuses the images into a live virtual scene, using extrapolation to fill in the missing pixels. While the images aren't as pretty as Intel's FreeD replays, users get a continuous video feed they can rotate around in real time, rather than the still images that replay tech produces.
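To give a rough sense of what that fusion step involves, here is a minimal Python sketch of depth-based view synthesis: pixels from each camera are back-projected into 3D, reprojected into a virtual viewpoint, merged, and the leftover holes are filled with inpainting as a stand-in for the extrapolation described above. The intrinsics matrix `K`, the `cam_to_virtual` pose, and the OpenCV inpainting call are illustrative assumptions, not the researchers' actual pipeline.

```python
# Hypothetical sketch of fusing two camera views into a virtual viewpoint.
# Assumes calibrated cameras and per-pixel depth; not the Virtual Eye code.
import numpy as np
import cv2


def warp_to_virtual_view(image, depth, K, cam_to_virtual):
    """Reproject an RGB image (with per-pixel depth) into a virtual camera."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3xN

    # Back-project pixels to 3D points in the source camera frame.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])

    # Move the points into the virtual camera frame and project them.
    pts_v = (cam_to_virtual @ pts_h)[:3]
    proj = K @ pts_v
    u = (proj[0] / proj[2]).round().astype(int)
    v = (proj[1] / proj[2]).round().astype(int)

    canvas = np.zeros_like(image)
    holes = np.ones((h, w), np.uint8)  # 1 = missing pixel, 0 = filled
    valid = (proj[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    src_colors = image.reshape(-1, 3)
    canvas[v[valid], u[valid]] = src_colors[valid]  # no z-buffering in this toy
    holes[v[valid], u[valid]] = 0
    return canvas, holes


def fuse_views(views):
    """Merge warped views, then inpaint whatever is still missing."""
    fused = np.zeros_like(views[0][0])
    holes = np.ones(views[0][1].shape, np.uint8)
    for canvas, mask in views:
        take = (holes == 1) & (mask == 0)
        fused[take] = canvas[take]
        holes[take] = 0
    # "Extrapolate" the remaining gaps with simple diffusion-based inpainting.
    return cv2.inpaint(fused, holes, 3, cv2.INPAINT_TELEA)
```

In practice the real system would run this kind of warping and hole-filling on the GPU for every frame, which is why the researchers lean on a pair of K20s rather than the CPU.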

The resulting synthetic view would help personnel find someone trapped in a fire by looking around objects, or even through them, as shown above. Soldiers could likewise peer over and around obstacles to spot enemies or booby traps, then plan a rescue or incursion with better information than a single camera could provide.

The researchers also think the Virtual Eye tech could be used for sports, and not just for replays. By adding support for more cameras, networks could broadcast in 3D in real time, letting you control exactly what you're watching. That would give you something else to do with that pricey VR headset.