The mystery of how our brains perceive sound has deepened, now that musicians have smashed a limit on sound perception imposed by a famous algorithm. On the upside, this means it should be possible to improve upon today’s gold-standard methods for audio processing.

Devised over 200 years ago, the Fourier transform is a mathematical process that splits a sound wave into its individual frequencies. It is the most common method for digitising analogue signals, and some researchers had thought that our brains use the same algorithm when turning the cacophony of noise around us into individual sounds and voices.
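As a rough illustration (not taken from the study), here is how the Fourier transform pulls individual frequencies out of a mixed signal, sketched with NumPy's FFT. The two tones, 440 Hz and 880 Hz, are arbitrary choices for the example:

```python
# Minimal sketch: decompose a signal into its component frequencies
# with the Fourier transform. NumPy only; the tones are illustrative.
import numpy as np

rate = 8000                          # samples per second
t = np.arange(rate) / rate           # one second of time stamps

# A "sound" made of two pure tones mixed together
signal = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / rate)

# The two strongest peaks sit exactly at the tones we mixed in
peaks = freqs[np.argsort(spectrum)[-2:]]
print(sorted(peaks.tolist()))        # -> [440.0, 880.0]
```

Because the signal lasts exactly one second, each frequency bin is 1 Hz wide and the two tones fall cleanly into single bins.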

To investigate, Jacob Oppenheim and Marcelo Magnasco of Rockefeller University in New York turned to the Gabor limit, a consequence of the Fourier transform’s mathematics that makes determining pitch and timing a trade-off. Rather like the uncertainty principle of quantum mechanics, the Gabor limit states that you cannot accurately determine both a sound’s frequency and its duration at the same time.
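The trade-off can be made concrete numerically. In this sketch (an illustration, not the paper's analysis), a Gaussian pulse's spreads in time and in frequency are measured as RMS widths; their product lands on the Gabor bound of 1/(4π) ≈ 0.0796, which no signal can beat:

```python
# Numerical sketch of the Gabor limit: the product of a signal's RMS
# spread in time and in frequency can never drop below 1/(4*pi).
# A Gaussian pulse sits exactly on that bound.
import numpy as np

def rms_width(x, weight):
    """RMS spread of x under the (unnormalised) density `weight`."""
    w = weight / weight.sum()
    mean = (x * w).sum()
    return np.sqrt(((x - mean) ** 2 * w).sum())

rate = 10_000                            # samples per second (arbitrary)
t = np.arange(-5, 5, 1 / rate)           # 10 s of time stamps
pulse = np.exp(-t**2 / (2 * 0.01**2))    # Gaussian pulse, sigma = 10 ms

freqs = np.fft.fftfreq(len(t), d=1 / rate)
spectrum = np.abs(np.fft.fft(pulse)) ** 2

dt = rms_width(t, pulse ** 2)            # spread in time
df = rms_width(freqs, spectrum)          # spread in frequency
print(dt * df, 1 / (4 * np.pi))          # both ~0.0796: the pulse meets the bound
```

Narrowing the pulse (a shorter sigma) shrinks `dt` but widens `df` in exact proportion, which is the pitch-versus-timing trade-off the researchers tested listeners against.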

13 times better

The pair reasoned that if people’s hearing obeyed the Gabor limit, this would be a sign that it relies on the Fourier transform. But when 12 musicians, some instrumentalists and some conductors, took a series of tests, such as judging slight changes in the pitch and duration of sounds simultaneously, they beat the limit by up to a factor of 13.

This shows that the Fourier transform is not the whole story, says Magnasco. “The actual algorithm employed by our brains is still shrouded in mystery.”

Brian Moore of the University of Cambridge says he is not surprised that the musicians beat the limit: he already assumed that other mechanisms were at work.

Understanding human sound perception could inspire better systems for sound recordings, speech recognition and sonar.

Journal reference: Physical Review Letters, doi.org/kdw