
Neuroscientists at the University of California, Berkeley, have developed a way to decode visual activity taking place in the brain and reconstruct it using YouTube clips.

The team used functional Magnetic Resonance Imaging (fMRI) and computational models to decode and reconstruct visual experiences in the minds of test subjects. So far, it's only been used to reconstruct movie trailers, but the researchers hope it could eventually lead to technology capable of reconstructing dreams on a computer screen.


The participants, who were members of the research team (as they had to stay still inside the scanner for hours at a time), watched two sets of movie trailers while the fMRI machine measured blood flow in their visual cortex.

Those measurements were used to come up with a computer model of how the visual cortex in each subject reacted to different types of image. "We built a model...that describes how shape and motion information in the movie is mapped into brain activity," said Shinji Nishimoto, lead author of the study.

After associating the brain activity with what was happening on-screen in the first set of trailers, the model was then tested against the second set of clips: it had to predict the brain activity that the visual patterns on-screen would generate. To give it some ammunition for that task, it was fed 18 million seconds of random YouTube videos.

Then, the 100 YouTube clips whose predicted brain activity most closely matched the recorded activity were merged together, forming a blurry but reasonably accurate representation of what was going on on-screen. You can see that process in action in the video embedded below. "We need to know how the brain works in naturalistic conditions," said Nishimoto. "For that, we need to first understand how the brain works while we are watching movies."
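The matching-and-averaging step above can be sketched as follows. This is a toy version with tiny random stand-in data: it assumes the encoding model has already produced a predicted activity pattern for every clip in the library (in the study, the 18 million seconds of YouTube footage), ranks clips by how well those predictions correlate with the observed activity, and averages the frames of the top 100.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "library" of candidate clips: each has a predicted brain-activity
# pattern (from the encoding model) and an 8x8 stand-in frame.
n_clips, n_voxels = 2000, 100
library_activity = rng.normal(size=(n_clips, n_voxels))
library_frames = rng.uniform(size=(n_clips, 8, 8))

# Observed activity while the subject watches the target clip
# (here simulated as clip 42 plus a little measurement noise).
observed = library_activity[42] + 0.05 * rng.normal(size=n_voxels)

# Rank library clips by correlation between their predicted activity
# and the observed activity, then average the top 100 frames into a
# single blurry reconstruction.
corr = np.array([np.corrcoef(observed, a)[0, 1] for a in library_activity])
top = np.argsort(corr)[::-1][:100]
reconstruction = library_frames[top].mean(axis=0)
```

Averaging many roughly-matching clips is what gives the published reconstructions their characteristic blur: each contributes a little of the right shape and motion, and the noise washes out.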


The technology could be used to try to find out what's going on in the minds of people who can't (or, more sinisterly, won't) communicate verbally. However, Nishimoto admits that we're still "decades" from scanning other people's thoughts and intentions. Oh, and Inception fans will be disappointed too -- the authors say: "There is no known technology that could remotely send signals to the brain in a way that would be organized enough to elicit a meaningful visual image or thought."

Video: http://www.youtube.com/v/KMA23JJ1M1o?version=3&hl=en_US