Tone-deaf people often fail to hear emotional messages such as sadness or annoyance in speech, relying instead on facial cues or body language, a new study has found.

The findings suggest music and language, usually thought to be controlled by two different parts of the brain, may in fact be more closely related.

“Music and language were thought to be completely separate but it turns out that they may share neural resources related to emotional communication, possibly because they have a shared evolutionary history,” said lead author Bill Thompson, a music and brain expert from the ARC Centre of Excellence in Cognition and its Disorders at Macquarie University.

The study, published today in the Proceedings of the National Academy of Sciences, involved playing recorded phrases to two groups of 12 people: a control group and a group with congenital amusia, or tone deafness.

The recorded phrases were neutral in content – such as “the boy and the girl went to the store to fetch the milk for lunch” – but read in a variety of different vocal tones to hint at annoyance, sadness, tenderness or other emotional states.

The tone-deaf participants were significantly worse at detecting the emotional subtext in the spoken phrases.

In a separate questionnaire, the tone-deaf subjects said they struggled with this problem in their daily lives, reporting that facial cues, body language and the pace of speech were more useful for detecting hidden meaning in speech.

“We all know people who don’t get our jokes because they don’t pick up on irony or sarcasm,” said Dr Thompson.

The study’s findings may go against mainstream beliefs that music and language are separate, but they hark back to a theory proposed by Charles Darwin that “language and music evolved from an earlier precursor or ‘musical protolanguage’ that was used in courtship and territoriality and in the expression of emotion”, the study said.

Nicholas Bannan, Professor in Music Education at the University of Western Australia, said the findings were exciting.

“This is entirely compatible with what Darwin predicted,” he said.

“The authors are moving toward a view that in human evolution, song must have existed before language. There’s a lot of contention about this.”

Alan Harvey, Winthrop Professor of Anatomy, Physiology and Human Biology at the University of Western Australia, said the study suggested tone deafness may be especially problematic for speakers of tonal languages such as Mandarin or some African languages.

“If this is an issue for people with tone deafness, think how poor our communication can be when we send text messages to each other,” he said.

“Email and text is eliminating 90% of the methods of communication we use, so imagine how bad it is then.”