A paralysed man has “spoken” three different vowel sounds using a voice synthesiser controlled by an implant deep in his brain.

If more sounds can be added to the repertoire of brain signals the implant can translate, such systems could revolutionise communication for people who are completely paralysed.

“We’re very optimistic that the next patient will be able to say words,” says Frank Guenther, a neuroscientist at Boston University who led the study along with Philip Kennedy at Neural Signals, a firm based in Duluth, Georgia, that produces neural implants.

Conventional speech

Eric Ramsey is 26 and has locked-in syndrome, in which people are unable to move a muscle but are fully conscious.

A brain implant, which requires invasive surgery, may sound drastic. But lifting signals directly from neurons may be the only way that locked-in people like Ramsey, or those with advanced forms of ALS, a neurodegenerative disease, will ever be able to communicate quickly and naturally, says Guenther.

Devices that rely on interpreting residual muscle activity, such as eye blinks, are no good for people who are completely paralysed, while those that use brain signals captured by scalp electrodes are slow, allowing typing on a keyboard at a rate of one to two words per minute.

“Our approach has the potential for providing something along the lines of conventional speech as opposed to very slow typing,” he says.

Messy signals

His team’s breakthrough was to translate seemingly chaotic firing patterns of neurons into the acoustic “building blocks” that distinguish different vowel sounds. Ramsey, who suffered a brain-stem stroke at the age of 16, has an electrode implanted into a brain area that plans the movements of the vocal cords and tongue that underlie speech.

Over the past two decades, the team has developed models that predict how neurons in this region fire during speech. Using these predictions, they were able to translate the firing patterns of several dozen brain cells in Ramsey’s brain into the acoustical building blocks of speech.

“It’s a very subtle code; you’re looking over many neurons. You don’t have one neuron that represents ‘aaa’ and another that represents ‘eee’. It’s way messier than that,” Guenther says.
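The distributed code Guenther describes can be illustrated with a toy simulation. In the sketch below (all numbers and the generative model are illustrative assumptions, not taken from the study, which used a more sophisticated decoder), each simulated neuron's firing rate mixes contributions from both of a vowel's first two formant frequencies, so no single cell stands for any one vowel; a linear decoder fitted across the whole population can nonetheless recover the target formants:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 40 recorded neurons; each vowel is defined by its first
# two formant frequencies (F1, F2, in Hz) -- the acoustic building blocks.
n_neurons = 40
vowels = {"aa": (730.0, 1090.0), "iy": (270.0, 2290.0), "uw": (300.0, 870.0)}

# Generative assumption: every neuron's rate is a noisy mixture of BOTH
# formants, so the vowel code is spread across the population.
mixing = rng.normal(size=(n_neurons, 2))

def firing_rates(f1, f2):
    formants = np.array([f1, f2]) / 1000.0  # scale to keep rates modest
    return mixing @ formants + rng.normal(scale=0.1, size=n_neurons)

# Training data: many noisy trials per vowel.
X = np.array([firing_rates(*f) for f in vowels.values() for _ in range(50)])
Y = np.array([f for f in vowels.values() for _ in range(50)])

# Fit a linear decoder (least squares) from population rates to formants.
decoder, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Decode a fresh trial of "iy" and classify it by the nearest formant pair.
decoded = firing_rates(*vowels["iy"]) @ decoder
guess = min(vowels, key=lambda v: np.linalg.norm(decoded - np.array(vowels[v])))
print(guess)
```

No individual row of `mixing` corresponds to a vowel; only the fitted population-level decoder does, which is the sense in which the real code is "way messier" than one-neuron-per-sound.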

Next, Guenther’s team provided Ramsey with audio feedback of the computer’s interpretation of his neurons, allowing him to tune his thoughts to hit a specific vowel. Across 25 trials spread over many months, Ramsey’s success rate at hitting the target vowel rose from 45 per cent to 70 per cent.
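That feedback loop can be caricatured as error-driven adaptation: on each trial the speaker hears the decoded output, registers how far it is from the target, and nudges the neural "command" accordingly. A minimal sketch, with every number illustrative rather than drawn from the study:

```python
# Illustrative closed-loop adaptation: the "speaker" nudges an internal command
# toward a target each time audio feedback reveals the decoding error.
target = 1.0            # stand-in for the acoustic target (e.g. a formant value)
command = 0.2           # initial, poorly tuned neural command
learning_rate = 0.3
tolerance = 0.15        # count a trial as a "hit" if within this of the target

hits = []
for trial in range(25):
    produced = command                 # decoder output, played back as audio
    error = target - produced          # what the feedback reveals
    hits.append(abs(error) < tolerance)
    command += learning_rate * error   # adjust toward the target

print(f"hits in first 5 trials: {sum(hits[:5])}, in last 5: {sum(hits[-5:])}")
```

The hit rate climbs over trials purely because the feedback exposes the error, mirroring (in cartoon form) Ramsey's improvement with practice.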

Listen: Eric Ramsey repeating target vowel sounds

Laptop control

Of course, the ability to produce three distinct vowels from brain signals won’t allow for much communication, let alone real-time natural conversation. But Guenther says technological improvements should have a next-generation decoder producing whole words in three to five years.

This next device will read from far more neurons and so should be able to extract the brain signals underlying consonants, says Guenther. The team plan to have it controlled by a laptop, so people can practise speaking at home as much as they like. Two people interested in having this device implanted have already contacted Guenther’s team, he says.

Niels Birbaumer, a neuroscientist at the University of Tübingen, Germany, who has developed a prosthetic that records brain activity beneath the skin to type out words, is sceptical that the new approach will yield fluent speech.

He also worries about its reliance on an invasive brain surgery. “In most cases an invasive procedure like this where you hurt the brain is not necessary,” he says.

Guenther agrees that “if patients have enough residual movement they can control some sort of device”. But he says his implant is intended principally for people whose paralysis or vocal tract damage is so severe that even these interventions won’t work.

Journal reference: PLoS One, DOI: 10.1371/journal.pone.0008218