Published online 5 March 2008 | Nature | doi:10.1038/news.2008.650

News

Brain activity can be decoded using magnetic resonance imaging.

What are you looking at? Your brain activity should reveal whether you're looking at the cat or the bowl of fruit.

Scientists have developed a way of ‘decoding’ someone’s brain activity to determine what they are looking at.

“The problem is analogous to the classic ‘pick a card, any card’ magic trick,” says Jack Gallant, a neuroscientist at the University of California, Berkeley, who led the study. But while a magician uses a ploy to pretend to ‘read the mind’ of the subject staring at a card, researchers can now do it for real using brain-scanning instruments. “When the deck of cards, or photographs, has about 120 images, we can do better than 90% correct,” says Gallant.

The technique is a step towards being able to see the contents of someone’s visual experiences. “You can imagine using this for dream analysis, or psychotherapy,” says Gallant. Already the results are helping to provide neuroscientists with a more accurate model of how the human visual system works.

If the work can be broadened to develop more general models of how the brain responds to stimuli beyond vision, such brain scans could help to diagnose disease or monitor the effects of therapy.

Predicting responses

There have been previous efforts at brain-reading using functional magnetic resonance imaging (fMRI), but these have been quite limited. In most such attempts, volunteers’ brain responses were first monitored while they looked at a discrete selection of pictures; those scans could then be used to determine which picture from the set a person was viewing. This works only when there is a limited number of simple pictures, and when a subject’s response to each of those pictures is already known.

In the new report, Gallant and his team instead used fMRI to model a subject’s brain responses to various types of pictures, and then used this model to predict responses to novel images1.

“It’s definitely a leap forward,” says John-Dylan Haynes of the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany, who also works on decoding the brain’s activity. “Now you can use a more abstract way of decoding the images that people are seeing.”

In the experiment, the brain activity of two subjects (two of Gallant’s team members, Kendrick Kay and Thomas Naselaris) was monitored while they were shown 1,750 different pictures. The team then selected 120 novel images that the subjects hadn’t seen before, and used the previous results to predict their brain responses. When the test subjects were shown one of the images, the team could match the actual brain response to their predictions to accurately pick out which of the pictures they had been shown. With one of the participants they were correct 72% of the time, and with the other 92% of the time; on chance alone they would have been right only 0.8% of the time.
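The identification step described above can be sketched in a few lines. This is a toy illustration, not the study’s actual analysis: the array of predicted voxel responses here is random stand-in data, and the matching rule (pick the candidate whose predicted response correlates best with the measured one) is a simplified version of the approach. The 0.8% chance level is simply one in 120.

```python
import numpy as np

rng = np.random.default_rng(0)
n_images, n_voxels = 120, 500

# Hypothetical predicted voxel responses for each of 120 novel images,
# as a fitted model might produce (stand-in: random values).
predicted = rng.standard_normal((n_images, n_voxels))

# Simulate a measured response: the true image's prediction plus noise.
true_index = 42
measured = predicted[true_index] + 0.5 * rng.standard_normal(n_voxels)

# Identification: choose the image whose predicted response best
# matches the measured response (here, by correlation).
corrs = [np.corrcoef(measured, p)[0, 1] for p in predicted]
guess = int(np.argmax(corrs))

chance = 1 / n_images  # guessing blindly among 120 images: ~0.8%
print(guess, round(chance * 100, 1))
```

With modest noise the correct image is recovered far more often than the 0.8% chance rate, which is why the reported 72% and 92% accuracies are striking.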

Complex response

The next step is to interpret what a person is seeing without having to select from a set of known images. “That is in principle a much harder problem,” says Gallant. You’d need a very good model of the brain, a better measure of brain activity than fMRI, and a better understanding of how the brain processes things like shapes and colours seen in complex everyday images, he says. “And we don’t really have any of those three things at this time.”

Previous attempts have simply modelled the brain’s response to simple geometric shapes, says Gallant. It’s much harder to understand the brain’s response to more complex, realistic images.


A decoding device that can read out the brain’s activity could be used in medicine to assess the results of a stroke or the effects of a particular drug treatment, or to help diagnose conditions such as dementia, by revealing how brain function changes with illness or intervention.

Creating a model of how the brain responds to various stimuli might also be useful in other types of neural processing. “It’s interesting to see how this could extend,” says Haynes, who showed last year that it is possible to predict which of two sums a person was computing in their head2. But it will be a long time yet before it applies to his own work, he says, because “we don’t have a good enough model for intentions”.