Yale University researchers have reconstructed faces from human brain activity with shocking accuracy, an incredible feat that brings us one step closer to recording brain videos, after a UC Berkeley team turned subjects' brain activity into digital video back in 2011.


According to one of the paper's authors, Marvin Chun, professor of psychology, cognitive science and neurobiology, "it is a form of mind reading." He says that other brain-scan methods "can only tell you they are viewing an animal or a building, not what animal or building. This is a different level of sophistication."

The researchers say this new method can "yield strikingly accurate neural reconstructions of faces even when excluding occipital cortex. This methodology not only represents a novel and promising approach for investigating face perception, but also suggests avenues for reconstructing 'offline' visual experiences—including dreams, memories, and imagination—which are chiefly represented in higher-level cortical areas."


Striking precision

This experiment follows the same approach as the UC Berkeley experiment, which captured the brain activity of subjects watching videos using an fMRI (functional magnetic resonance imaging) scanner. Those readings were stored in a database as voxels, three-dimensional pixel units like those used to build worlds in Minecraft. A computer then combined that database with a palette of 18 million YouTube videos to create a composite video from the real-time fMRI scans of other subjects. The method worked, effectively "extracting" a crude version of what each person was seeing in real time.
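The gist of that matching step can be sketched in a toy example. Everything below is illustrative and greatly simplified: the library sizes, the correlation-based matching, and the averaging into a composite are assumptions standing in for the Berkeley team's actual encoding model.

```python
import numpy as np

# Toy sketch of the Berkeley-style idea (all numbers and data are made up):
# match a measured voxel pattern against a library of clips with known voxel
# signatures, then average the best-matching frames into a composite.

rng = np.random.default_rng(1)

n_clips, n_voxels = 500, 200   # the real library drew on ~18 million YouTube clips

library_voxels = rng.normal(size=(n_clips, n_voxels))   # signature per clip
library_frames = rng.normal(size=(n_clips, 32, 32))     # stand-in "video frames"

# A subject's measurement: a noisy version of clip 42's voxel signature
measured = library_voxels[42] + 0.3 * rng.normal(size=n_voxels)

# Rank clips by correlation with the measurement and take the top 10
corr = [np.corrcoef(measured, v)[0, 1] for v in library_voxels]
top = np.argsort(corr)[-10:]

# The composite "reconstruction" is the average of the best-matching frames
composite = library_frames[top].mean(axis=0)

print(42 in top)  # the true clip should rank among the best matches
```

Averaging many imperfect matches is exactly why the resulting videos look crude and blurry: the composite preserves coarse shape and motion while washing out fine detail.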

But the UC Berkeley videos, while amazing, weren't very precise. Faces, for example, were just a blur. That's because, according to the new research at Yale, "subjective perceptual information is tied more closely to higher-level cortical regions that have not yet been used as the primary basis for neural reconstructions. Furthermore, no reconstruction studies to date have reported reconstructions of face images, which activate a highly distributed cortical network."

So they focused on that, mapping those parts, and creating a method that effectively produces a recognizable face:

Working with funding from the Yale Provost's office, Cowen and postdoctoral researcher Brice Kuhl, now an assistant professor at New York University, showed six subjects 300 different "training" faces while they underwent fMRI scans. They used the data to create a sort of statistical library of how those brains responded to individual faces. They then showed the six subjects new sets of faces while they were undergoing scans. Taking that fMRI data alone, the researchers used their statistical library to reconstruct the faces their subjects were viewing.
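The train-then-invert pipeline described above can be sketched in miniature. Everything here is an assumption for illustration: the article doesn't specify the paper's actual model, so the eigenface-style face components, the ridge regression, and all the dimensions below are stand-ins, not the authors' method.

```python
import numpy as np

# Hypothetical sketch (not the authors' actual model):
# 1) on "training" faces, learn how voxel responses map back to face components,
# 2) apply that mapping to fMRI data alone to reconstruct a new, unseen face.

rng = np.random.default_rng(0)

n_train, n_voxels, n_components = 300, 1000, 50

# Each training face described by component scores (e.g., eigenface coefficients)
train_faces = rng.normal(size=(n_train, n_components))

# Simulated voxel responses: a linear function of face components plus noise
true_weights = rng.normal(size=(n_components, n_voxels))
train_voxels = train_faces @ true_weights + 0.1 * rng.normal(size=(n_train, n_voxels))

# The "statistical library": ridge regression from voxel patterns to face components
lam = 1.0
gram = train_voxels.T @ train_voxels + lam * np.eye(n_voxels)
decoder = np.linalg.solve(gram, train_voxels.T @ train_faces)  # (n_voxels, n_components)

# Reconstruct a brand-new face from its simulated fMRI response alone
new_face = rng.normal(size=(1, n_components))
new_voxels = new_face @ true_weights
reconstruction = new_voxels @ decoder

# Compare true vs. reconstructed component scores
print(np.corrcoef(new_face.ravel(), reconstruction.ravel())[0, 1])
```

Reconstructing component scores rather than raw pixels is what makes the problem tractable: a face can be summarized by a few dozen numbers, so the regression needs to recover far fewer values than there are pixels in the image.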


I really can't wait for Project Brainstorm to become a reality (not to mention all the weird porn).