Technology that harnesses brain activity to produce synthesised speech may benefit individuals who have been robbed of the ability to talk by a stroke or other medical conditions, researchers claim.

Known as a ‘brain decoder’, the technology is said to read people’s minds and turn thoughts into speech – a tool that could one day help doctors communicate with patients who cannot talk.

Scientists at the University of California, San Francisco (UCSF) implanted electrodes into the brains of volunteers and then decoded signals in cerebral speech centres to guide a computer-simulated version of their vocal tract – lips, jaw, tongue and larynx – to generate speech through a synthesiser.

The results from the volunteers were mostly intelligible, although the researchers noted that the speech was somewhat slurred in parts.

“We were shocked when we first heard the results – we couldn’t believe our ears,” said UCSF doctoral student Josh Chartier. “It was incredibly exciting that a lot of aspects of real speech were present in the output from the synthesiser.”

Results from the study have raised hope among the researchers that, with improvements, a clinically viable device could be developed for patients with speech loss in the years to come.

“Clearly, there is more work to get this to be more natural and intelligible,” Chartier added, “but we were very impressed by how much can be decoded from brain activity.”

Strokes, brain injuries, cancer and ailments such as cerebral palsy, amyotrophic lateral sclerosis (ALS), Parkinson’s disease and multiple sclerosis can all take away a person’s ability to speak.

Such conditions leave some people relying on devices that track eye or residual facial muscle movements to spell out words letter by letter. These methods, however, are slow, typically delivering no more than 10 words per minute, compared with 100-150 words per minute in natural speech.

The five volunteers who took part in the study were all epilepsy patients. Although they were all capable of speaking, they were given the opportunity to participate as they were already scheduled to have electrodes temporarily implanted in their brains to map the source of their seizures before neurosurgery. Future studies will test the technology on people who are unable to speak.

The volunteers read aloud while activity in brain regions involved in language production was tracked. The researchers discerned the vocal tract movements needed to produce the speech and created a “virtual vocal tract” for each participant that could be controlled by their brain activity and produce synthesised speech.
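The article does not spell out how the decoding was implemented, but the two-stage idea it describes – brain activity is first mapped to vocal-tract movements, which are then mapped to sound – can be illustrated with a deliberately simple sketch. Everything below is a toy assumption: the 1-D signals, the `fit_slope` helper and the linear fits stand in for the high-dimensional neural recordings and the far more sophisticated models the UCSF team would actually have used.

```python
# Toy sketch of a two-stage decoder, assuming (purely for illustration)
# that each stage can be approximated by a single least-squares slope.

def fit_slope(xs, ys):
    """Least-squares slope through the origin: y ~ w * x."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Made-up training data: neural activity drives a jaw position,
# and the jaw position drives an acoustic feature (loudness).
neural = [0.5, 1.0, 1.5, 2.0, 2.5]
jaw = [2.0 * x for x in neural]        # pretend articulator trace
loudness = [3.0 * j for j in jaw]      # pretend acoustic feature

w1 = fit_slope(neural, jaw)            # stage 1: brain -> movement
w2 = fit_slope(jaw, loudness)          # stage 2: movement -> sound

# Decoding chains the two stages: brain -> movement -> sound.
decoded = [w2 * (w1 * x) for x in neural]
print(decoded)  # closely tracks the original loudness values
```

The point of the intermediate stage is that the decoder never jumps straight from brain signals to audio; it passes through the “virtual vocal tract”, mirroring how the brain itself plans movements rather than sounds.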

“Very few of us have any real idea, actually, of what’s going on in our mouth when we speak,” said neurosurgeon Edward Chang. “The brain translates those thoughts of what you want to say into movements of the vocal tract, and that’s what we’re trying to decode.”

The researchers found they were more successful at synthesising slower speech sounds such as “sh” than abrupt sounds such as “b” and “p”.

Furthermore, the technology did not work as well when the researchers tried to decode the brain activity directly into speech, without using a virtual vocal tract.

“We are still working on making the synthesised speech crisper and less slurred,” Chartier said. “This is in part a consequence of the algorithms we are using, and we think we should be able to get better results as we improve the technology.”

“We hope that these findings give hope to people with conditions that prevent them from expressing themselves that one day we will be able to restore the ability to communicate, which is such a fundamental part of who we are as humans,” he added.

The study has been published in the journal Nature.

In November 2018, a neurotechnology platform that uses artificial intelligence to translate brainwaves into control signals was named the first winner of a new E&T-backed Innovation of the Year prize at the IET’s 2018 Innovation Awards, held in central London.