Human fetuses begin to hear sounds that reach them from outside the womb at about 27 weeks. But it wasn’t clear whether fetuses can learn from these sounds in ways that shape speech perception and development during infancy.

Now it appears they can. New research from the University of Helsinki suggests that humans begin to distinguish between sounds before they are even born. Eino Partanen and colleagues explored how prenatal experiences influence learning. “We wanted to find out what kind of material fetuses can learn in the womb, what kind of neural representations they form,” he said.

Interest in fetal learning took off in the 1980s, when researchers began to follow up on widespread anecdotal evidence: there are frequent stories of infants apparently recognizing and responding to music that was played to them before birth.

Alexandra Lamont, who specializes in developmental psychology, specifically that of music, explained this. “Sound can be quite clearly heard in the womb,” she said. “Once the necessary brain development has taken place to enable learning, the fetus certainly can learn music or other sounds before birth.”

“This is one reason why newborn infants have a preference for their own mother’s voice, as they have had extended experience with hearing that voice before birth.”

Partanen wanted to explore this on a more detailed level. “Can babies learn from the music, speech, stories that they hear from the womb?” he said. “We were interested in looking at this from a neurophysiological angle.”

To find out, Partanen and his colleagues used basic sounds. The fetuses in their studies were not played opera or told fairy tales: instead, participating families played recordings of a sound several times a week during pregnancy. This sound was the pseudoword “tatata.” Occasionally the sound was varied, with the pitch of the middle syllable subtly raised.

Very soon after birth, the researchers played these sounds both to infants who had been exposed to them in the womb and to infants who had not. Recording the infants’ brain activity with EEG, they found that those who had heard the sounds before birth reacted much more strongly to them. Furthermore, these infants were capable of discriminating the small pitch difference between the two versions.

This demonstrates, Partanen said, that “infants are capable of learning the small building blocks of language in the fetal stage. We know that this is much more specific than we thought previously—they respond to subtle changes in verbalization.”

Can this be developed into any kind of intervention for infants? Partanen thinks so. Problems such as dyslexia, he suggests, could be tackled at a much earlier stage. Although there is no way of knowing whether a baby will develop dyslexia, there are factors that indicate whether they are at risk. Dyslexia is partly genetic, for example, and it might be possible to develop a prenatal therapy for at-risk babies, facilitating learning after they’re born.

Dr. Lamont cautioned that this, if possible, would be a long way off. “While phonological skills do seem to be a useful treatment for dyslexia in young children,” she explained, “we would be a long way from finding out if prenatal exposure to systematic sounds would be of any benefit.”

Partanen is hopeful, though. “Our findings do not mean we can make superbabies!” he said. “But [we] hope to find out if we could help infants that might have specific deficits from a very early stage.”

PNAS, 2013. DOI: 10.1073/pnas.1302159110

This story originally appeared at The Conversation.