If you ever wanted to see what your brain sees when you’re watching Nyan Cat or a YouTube video, this is it. University of California, Berkeley researchers have recreated videos by scanning a person’s brain.

The researchers used a combination of functional Magnetic Resonance Imaging (fMRI) and computational models to decode and reconstruct a person's dynamic visual experience. In the experiment, three test subjects were placed in an fMRI machine to measure the blood flow to their visual cortex as they watched videos. To monitor how each subject was being visually stimulated, the scientists divided the scanned brain volume into small, three-dimensional cubes known as volumetric pixels, or "voxels."
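To make the voxel idea concrete, here is a minimal sketch of how a continuous 3D scan volume can be partitioned into fixed-size cubes. The 2 mm voxel size and the function name are assumptions for illustration only, not details from the study:

```python
# Hypothetical illustration of "voxels": partition a 3D scan volume into
# fixed-size cubes and map a coordinate (in mm) to its voxel index.
VOXEL_MM = 2.0  # assumed voxel edge length; real values vary by scanner

def voxel_index(x, y, z, size=VOXEL_MM):
    """Return the (i, j, k) index of the voxel containing point (x, y, z)."""
    return (int(x // size), int(y // size), int(z // size))

print(voxel_index(5.0, 3.9, 0.0))  # -> (2, 1, 0)
```

Each voxel then gets its own blood-flow reading over time, which is what the computational models work with.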

Each subject would watch one trailer while a computer program recorded their brain activity and built an algorithm by checking it against the video's visual patterns. Researchers would then show a second clip to test the computer's movie-reconstruction algorithm. The end result of the research was a blurry-but-continuous 100-clip reconstruction of the original movie.
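The reconstruction step can be sketched roughly as a matching-and-averaging problem: rank a library of candidate clips by how well each clip's predicted voxel response correlates with the measured one, then blend the best matches. Everything below (the data, voxel counts, and function names) is invented for illustration, not taken from the researchers' code:

```python
# Hypothetical sketch of a decoding step: rank candidate clips by how well
# each clip's *predicted* voxel response matches the measured fMRI response,
# then average the frames of the top matches. Toy data; real voxel
# vectors have thousands of entries.
import math

def correlation(a, b):
    """Pearson correlation between two equal-length voxel vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def reconstruct(measured, library, top_k=2):
    """Average the frames of the top_k clips whose predicted voxel
    activity best correlates with the measured activity."""
    ranked = sorted(library,
                    key=lambda c: correlation(measured, c["voxels"]),
                    reverse=True)
    best = ranked[:top_k]
    # Blend pixel values frame-by-frame across the chosen clips.
    return [sum(c["frame"][i] for c in best) / len(best)
            for i in range(len(best[0]["frame"]))]

# Toy example: 4-voxel responses, 3-pixel "frames".
library = [
    {"voxels": [1.0, 0.2, 0.1, 0.9], "frame": [10, 20, 30]},
    {"voxels": [0.9, 0.3, 0.0, 1.0], "frame": [12, 18, 28]},
    {"voxels": [0.0, 1.0, 0.9, 0.1], "frame": [90, 80, 70]},
]
measured = [1.0, 0.25, 0.05, 0.95]
print(reconstruct(measured, library))  # -> [11.0, 19.0, 29.0]
```

Averaging many partial matches is also why the output looks blurry: it is a blend of similar-but-not-identical clips rather than a pixel-perfect recording.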


This research is a big step towards recreating moving pictures from internal imagery. One day, the technology might be used to reproduce experiences that exist only in our minds, such as dreams and memories. It could also help those who cannot communicate verbally, such as stroke victims or coma patients.

But this sort of technology is still decades away. The computational models are still in development, and to reconstruct images, the subject needs to spend hours in an MRI machine. And as you probably noticed, some recreated images are wildly different than their original counterparts; for example, Steve Martin in a bobby outfit is recreated as a man in a black t-shirt.

[UC Berkeley and The Gallant Lab via Popular Science]
