Computer scientists are now one step closer to reading our minds.

Researchers from UC Berkeley pieced together the brain activity of their subjects as they watched video clips, and then produced a YouTube video of their own with the results.

What emerged was like a Surrealist painting: a blurry, dream-like interpretation of reality (see video below).

For the experiment, subjectsthe researchers themselvesspent hours lying still inside a magnetic resonance imaging (MRI) machine, watching two sets of movie trailers. The MRI machine recorded the amount of blood flowing through the visual cortex, the part of the brain that processes visual information.

During the first set, the recorded brain activity was fed into a computer to train a "movie reconstruction algorithm," which matched neural activity to what was taking place in the video. The algorithm thus learned to associate certain neural patterns with dynamic information (shapes, images, sounds) against 18 million seconds of random YouTube videos.

Subjects then watched the second set of trailers while the algorithm pieced together a video based on the brain activity recorded by the MRI. The result is a continuous, if abstract, reconstruction of the actual videos.
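The matching step described above can be sketched in miniature. This is not the researchers' code; it is a toy illustration, assuming a hypothetical library of clips, each with a predicted brain-response vector, where "reconstruction" simply means ranking clips by how well their predicted responses correlate with the response actually recorded in the scanner:

```python
# Toy sketch (not the study's actual method): rank candidate clips by how
# closely their predicted brain responses match an observed response.
# Clip names and response vectors below are entirely made up.

def correlation(a, b):
    """Pearson correlation between two equal-length response vectors."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    sd_a = sum((x - mean_a) ** 2 for x in a) ** 0.5
    sd_b = sum((y - mean_b) ** 2 for y in b) ** 0.5
    return cov / (sd_a * sd_b)

def reconstruct(observed, predicted_by_clip, k=3):
    """Return the k clip ids whose predicted responses best match the
    observed response; blending such top matches is what produces the
    blurry, dream-like composite the article describes."""
    ranked = sorted(
        predicted_by_clip,
        key=lambda clip: correlation(observed, predicted_by_clip[clip]),
        reverse=True,
    )
    return ranked[:k]

# Hypothetical 4-voxel responses predicted for three library clips.
library = {
    "clip_a": [1.0, 0.2, 0.1, 0.9],
    "clip_b": [0.1, 0.8, 0.9, 0.2],
    "clip_c": [0.5, 0.5, 0.5, 0.6],
}
print(reconstruct([0.9, 0.1, 0.2, 1.0], library, k=2))  # ['clip_a', 'clip_c']
```

In the actual study the library was those 18 million seconds of YouTube footage and the responses were fMRI measurements, but the principle is the same: the video never comes out of the brain directly; it is assembled from whichever known clips best explain the recorded activity.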

The key word there is "continuous." Scientists have long been able to reconstruct static photos and images from reading brain patterns, but this is believed to be a first for reading a dynamic visual experience.

It's only a small step towards Jedi mind reading, but the scientists say it paves the way for eventually being able to see what's going on in the mind without visual stimulation: dreams and thoughts.

"Our natural visual experience is like watching a movie," Shinji Nishimoto, lead author of the study, said in a statement. "In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences."