Natural rhythm (Image: Ciril Cincet/Picturetank)

IT’S musical mind-reading. Your patterns of brain activity can show what song you are listening to.

In the area of the brain that processes sound – the auditory cortex – different neurons become active in response to different sound frequencies. So it should be possible to work out which musical note someone is listening to just by looking at this activity, says Geoff Boynton at the University of Washington in Seattle.

To find out, Boynton and his colleague Jessica Thomas used fMRI to record neural activity while four volunteers listened to various notes. “Then the game is to play a song and use the neural activity to guess what was played,” he says.
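The article doesn’t detail the team’s analysis, but the general “guess the note from the activity” idea can be sketched as template matching: store an average activity pattern for each note, then decode a new recording by finding the closest stored pattern. Everything below is illustrative — the voxel data is made up and the notes are arbitrary.

```python
# Illustrative sketch only, not the study's actual method: decode a note by
# comparing observed activity to stored per-note "template" patterns.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical average fMRI response pattern (5 voxels) for each note.
templates = {
    "C": rng.normal(size=5),
    "E": rng.normal(size=5),
    "G": rng.normal(size=5),
}

def decode_note(activity):
    """Return the note whose stored template is closest to the observed pattern."""
    return min(templates, key=lambda n: np.linalg.norm(activity - templates[n]))

# A noisy version of the "E" pattern still decodes as "E".
noisy = templates["E"] + rng.normal(scale=0.1, size=5)
print(decode_note(noisy))  # E
```

Played note by note, the same matching step would reconstruct a whole melody from the sequence of activity patterns.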


They were able to identify melodies such as “Twinkle, Twinkle, Little Star” from neural activity alone, Boynton told the Society for Neuroscience annual meeting in San Diego, California, this week.

The results could help probe the neural roots of tone deafness, which can be a problem for people with cochlear implants, says Rebecca Schaefer, who researches neuroscience and music at the University of California, Santa Barbara.

Another study into the music of the mind, also presented this week in San Diego, suggests that the brain is highly attuned to rhythm and this might explain why we talk at certain speeds.

David Poeppel at New York University and his colleagues monitored brain activity in 12 volunteers while they listened to three piano sonatas. One sonata had a quick tempo, with around eight notes per second; another had five notes per second; and the slowest had one note every two seconds.

The volunteers’ brainwaves – rhythmic oscillations in the activity of neurons – tuned in to the frequency of the notes in the quick and medium-tempo pieces. In other words, if the melody contained eight notes per second, neural activity oscillated eight times per second. But with the slowest piece, neural activity reached two oscillations per second and went no lower.
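This kind of entrainment shows up as a spectral peak at the note rate. A toy illustration with synthetic data (not the study’s analysis): a signal that rises and falls eight times per second has its dominant frequency at 8 Hz.

```python
# Toy illustration (synthetic data): a signal oscillating 8 times per second
# shows a spectral peak at 8 Hz -- how brainwaves "tuned" to eight notes per
# second would appear in a recording.
import numpy as np

fs = 256                          # sampling rate, Hz
t = np.arange(0, 4, 1 / fs)       # 4 seconds of data
signal = (np.sin(2 * np.pi * 8 * t)
          + 0.3 * np.random.default_rng(1).normal(size=t.size))  # 8 Hz + noise

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(t.size, d=1 / fs)
peak = freqs[np.argmax(spectrum[1:]) + 1]  # skip the 0 Hz (DC) bin
print(f"dominant frequency: {peak:.1f} Hz")  # dominant frequency: 8.0 Hz
```

The same measurement on the slow sonata would show the peak refusing to drop below the brain’s apparent floor, rather than following the note rate down.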


Poeppel has previously shown that this tuning effect happens when we listen to a conversation: our neural oscillations correspond to the tempo of some signals in speech, such as the number of syllables per second.

The fact that the oscillations did not fall to match the tempo of the slow music suggests there is a minimum pace that the brain can process effectively.

“Here’s an observation: the natural syllabic rate across all languages is about five syllables per second,” says Poeppel. Either the brain has become attuned to the frequencies of speech, or speech has evolved to tune in to the natural frequency of brainwaves. Poeppel backs the second theory: “These neural oscillations are generic properties of the brain’s operating system,” he says. “It seems like speech has taken advantage of that pre-existing architecture.”