Reconstructed faces using data from memory-centered area of the brain

Reading minds is an ability only found in comic book heroes.

But new research has revealed that computers can now analyse brain scans and work out who a person is thinking about.

The AI system can even create a digital portrait of the face in question.


Researchers have reconstructed a face by peering into the mind of another person, extracting latent face components from neural activity and using machine learning to create digital portraits. The team worked with more than 1,000 coloured images of different human faces

HOW THE SYSTEM WORKS Researchers used an innovative form of fMRI pattern analysis to test whether the lateral parietal cortex actively represents the contents of memory. Using a large set of human face images, they first extracted latent face components, known as eigenfaces. Machine learning algorithms were then used to predict face components from fMRI activity patterns and reconstruct images of individual faces as digital portraits.

The researchers first pinpointed the area of the brain responsible for processing faces.

‘Recent findings suggest that the contents of memory encoding and retrieval can be decoded from the angular gyrus (ANG), a subregion of posterior lateral parietal cortex,’ reads the study published in The Journal of Neuroscience.

'Visually perceived faces were reliably reconstructed from activity patterns,' wrote Hongmi Lee and Brice A. Kuhl from the Kuhl Lab at the University of Oregon.

Researchers began their work with more than 1,000 coloured images of different human faces.

During the first part of the study, they showed participants one face after another while performing fMRI scans, recording neural responses throughout.

'Subjective assessment of reconstructed faces revealed specific sources of information (e.g., affect and skin color) that were successfully reconstructed in ANG,' they wrote.

'Strikingly, we also found that a model trained on ANG activity patterns during face perception was able to successfully reconstruct an independent set of face images that were held in memory.'

The set of faces was decomposed into 300 face components, or eigenfaces.

‘Using an approach inspired by computer vision methods for face recognition, we applied principal component analysis to a large set of face images to generate eigenfaces,’ explain the researchers.

They then modeled relationships between eigenface values and patterns of fMRI activity.

'Activity patterns evoked by individual faces were then used to generate predicted eigenface values, which could be transformed into reconstructions of individual faces.’

Each eigenface is linked to a statistical aspect of the data, and the neural activity associated with each eigenface was determined through machine learning.
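The pipeline described above — principal component analysis to extract eigenfaces, a learned mapping from brain activity to eigenface values, then a projection back into pixel space — can be sketched with synthetic stand-in data. The array sizes and the simple least-squares mapping below are illustrative assumptions, not the researchers' exact method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 1,000 "face images" flattened to pixel vectors,
# paired with "fMRI activity patterns" (real data would come from scans).
n_faces, n_pixels, n_voxels, n_components = 1000, 64 * 64, 500, 300
faces = rng.normal(size=(n_faces, n_pixels))
activity = rng.normal(size=(n_faces, n_voxels))

# 1. Principal component analysis: the top components of the face set
#    are the "eigenfaces".
mean_face = faces.mean(axis=0)
centered = faces - mean_face
# SVD yields the principal axes without forming the full covariance matrix
_, _, vt = np.linalg.svd(centered, full_matrices=False)
eigenfaces = vt[:n_components]            # (300, n_pixels)
scores = centered @ eigenfaces.T          # eigenface values for each face

# 2. Learn a linear mapping from activity patterns to eigenface values
#    (plain least squares stands in for the paper's machine-learning step).
weights, *_ = np.linalg.lstsq(activity, scores, rcond=None)

# 3. Reconstruct a face from a new activity pattern: predict its
#    eigenface values, then project back into pixel space.
new_activity = activity[0]
predicted_scores = new_activity @ weights            # (300,)
reconstruction = mean_face + predicted_scores @ eigenfaces

print(reconstruction.shape)  # one flattened face image
```

With real scans, step 2 is trained on faces the participant saw during scanning, so that step 3 can be applied to activity evoked by new or remembered faces.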


For the 'mind reading' portion of the study, the team gave subjects a whole new group of faces that they had never seen before.

The neural responses were analysed to predict eigenface values, which were then combined to build a final digital portrait.

This process is similar to how the mind's eye sees a person: object recognition passes through several stages, from the moment we lay eyes on a face to the point we know exactly who it is.


For the second part, participants were asked to think of any person's face, which researchers found could also be reconstructed using information from a memory-centered area of the brain – the angular gyrus.

Although these reconstructions weren’t as detailed as the first, the team said this approach is still powerful.

They were also able to create scatter-plot charts that reflected the properties and features of the original face and reconstructed faces.

