NEW YORK — Given the antiquity, universality, and deep popularity of music, many researchers had long assumed the human brain must be equipped with some sort of music room, cortical architecture dedicated to detecting and interpreting the dulcet signals of song.

Yet for years scientists failed to find any clear evidence of a music-specific domain.

Now, Massachusetts Institute of Technology researchers have devised a radical new approach to brain imaging that reveals what past studies had missed. By mathematically analyzing scans of the auditory cortex and grouping clusters of brain cells with similar activation patterns, scientists have identified neural pathways that react almost exclusively to the sound of music — any music.

The method could be used to dissect any scans from a functional magnetic resonance imaging device, or fMRI — the workhorse of contemporary neuroscience — and so may end up divulging other forms of cortical specialization.
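The approach described above — grouping units of brain tissue by the similarity of their response profiles across many sounds — can be illustrated with a toy simulation. The sketch below is not the authors' actual decomposition method; it uses plain k-means clustering on synthetic "voxel" responses, with hypothetical category names and made-up response values, purely to show the idea that a music-selective population can be recovered from mixed measurements.

```python
import random

random.seed(0)

# Hypothetical sound categories and idealized response profiles for two
# latent neural populations: one music-selective, one speech-selective.
# All names and numbers here are illustrative assumptions.
CATEGORIES = ["music", "speech", "other"]
MUSIC_PROFILE = {"music": 1.0, "speech": 0.1, "other": 0.1}
SPEECH_PROFILE = {"music": 0.1, "speech": 1.0, "other": 0.1}

def simulate_voxel(profile, noise=0.05):
    """One simulated voxel's noisy response to each sound category."""
    return [profile[c] + random.gauss(0, noise) for c in CATEGORIES]

# First 50 voxels dominated by the music population, last 50 by speech.
voxels = ([simulate_voxel(MUSIC_PROFILE) for _ in range(50)] +
          [simulate_voxel(SPEECH_PROFILE) for _ in range(50)])

def kmeans(points, k=2, iters=20):
    """Plain k-means on response profiles.

    Deterministic initialization (one seed point from each end of the
    list) keeps this toy example stable; real pipelines would use
    multiple random restarts.
    """
    centers = [points[0], points[-1]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[dists.index(min(dists))].append(p)
        centers = [
            [sum(dim) / len(cl) for dim in zip(*cl)] if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers

centers = kmeans(voxels)
for c in centers:
    top = CATEGORIES[c.index(max(c))]
    print(f"cluster selective for: {top}  "
          f"profile: {[round(x, 2) for x in c]}")
```

Each recovered cluster center is a response profile; reading off which category it responds to most strongly identifies one cluster as "music-selective" and the other as "speech-selective," a cartoon version of the selectivity the MIT team reports.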

When a musical passage is played, a distinct set of neurons inside a furrow of a listener’s auditory cortex will fire in response, the scientists found. Other sounds, by contrast — a dog barking, a car skidding, a toilet flushing — leave the musical circuits unmoved.

Nancy Kanwisher and Josh H. McDermott, professors of neuroscience at MIT, and their postdoctoral colleague Sam Norman-Haignere reported the results in the journal Neuron.

“Why do we have music?” Kanwisher said in an interview. “Why do we enjoy it so much and want to dance when we hear it? How early in development can we see this sensitivity to music, and is it tunable with experience? These are the really cool first-order questions we can begin to address.”

The researchers also showed that their method detected a second neural pathway, one for which scientists already had evidence — this one tuned to the sounds of human speech.
They demonstrated that the speech and music circuits are in different parts of the brain’s auditory cortex, where all sound signals are interpreted, and that each is largely deaf to the other’s sonic cues, although there is some overlap when it comes to responding to songs with lyrics.

The paper “takes a very innovative approach and is of great importance,” said Josef Rauschecker, head of the Laboratory of Integrative Neuroscience and Cognition at Georgetown University.

“The idea that the brain gives specialized treatment to music recognition, that it regards music as fundamental a category as speech, is very exciting to me.”

Rauschecker said music sensitivity may be more fundamental than speech perception. “There are theories that music is older than speech or language,” he said. “Some even argue that speech evolved from music.”

“Music-making with other people in your tribe is a very ancient, human thing to do,” he added.

Kanwisher’s lab is widely recognized for its pioneering work on human vision, and for its discovery that key portions of the visual cortex are primed to instantly recognize a few highly meaningful objects in the environment, like faces and human body parts.

The researchers wondered if the auditory system might be similarly organized to make sense of the soundscape.

If so, what would the salient categories be? What are the aural equivalents of a human face or a human leg — sounds or sound elements so essential the brain assigns a bit of gray matter to the task of detecting them?