Berkeley scientists reveal promising speech gains

In experiments whose results may one day provide synthetic speech to people who have lost the ability to speak, UC Berkeley scientists have taught computers to read the electrical signals that the sound of the human voice produces in the brain and to reproduce the spoken words from those signals alone.

The achievement offers hope that people who have lost the ability to speak because of stroke or paralysis might one day converse again, the researchers said.

"But this is only the first step toward a goal like that, and it's a long and complicated road we're traveling," said Brian Pasley, a UC neuroscience researcher who is leading the effort.

More immediately, he said, the research is "helping us understand how the normal brain processes the sounds of speech."

Severely wounded soldiers in today's wars have already challenged scientists and engineers to create extraordinary prosthetic arms and legs that move solely by the wearer's directed thoughts. But fashioning artificial voices that respond to unspoken thoughts will be an even more complex challenge, Pasley and his colleagues said in a report published today in the open-access journal PLoS Biology.

From decades of research into how animal brains process sounds, Pasley said, scientists have concluded that the nerve cells active in processing speech are located in specialized centers of the brain's temporal lobe.

So Pasley and his colleagues sought the help of patients who had suffered from epileptic seizures or brain tumors and who were undergoing brain surgery at UCSF. Their neurosurgeons needed to pinpoint the precise areas of the brain where the seizures were triggered so they could remove the tiny damaged regions.

To do that, Dr. Edward Chang, a neurosurgeon at UCSF, and his colleagues implanted hundreds of tiny electrodes into the temporal lobes of their patients - a normal procedure for this type of surgery.

For his part, Pasley found 15 patients who volunteered to let him record the brain wave signals that their implanted electrodes picked up during conversations.

The signals, first collected by the hospital's computers, were later transmitted to Pasley's computer in his Berkeley lab for analysis.

Pasley tested two highly complex computer models that matched the recorded brain waves to the sounds of the conversations. The volunteer patients then tested the models by speaking single words while the computers reconstructed the sounds from the brain signals.

The sounds produced by the better computer model were accurate enough for Pasley and his colleagues to guess the actual words that were spoken in many cases. That was the result of their first attempts, and the researchers expect greater accuracy with more trials over a longer period, Pasley said.
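
In broad outline, decoding models of this kind learn a statistical mapping from the recorded electrode signals to a spectrogram, a picture of the sound's energy across frequencies over time, and then use that mapping to rebuild the sound from brain activity alone. The short sketch below illustrates only that general idea with made-up numbers; it is not the researchers' method or code, and the data, electrode counts, time lags and regression settings are all hypothetical.

```python
# A minimal sketch of the general decoding idea described in the article:
# learn a linear mapping from multi-electrode brain recordings to the
# spectrogram of the accompanying speech. This is NOT the researchers'
# code; all data here are random numbers, and every shape and parameter
# is an illustrative assumption.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical recordings: 5,000 time samples, 100 electrodes, 32 sound bands.
n_samples, n_electrodes, n_bands = 5000, 100, 32
neural = rng.standard_normal((n_samples, n_electrodes))   # brain signals
spectrogram = rng.standard_normal((n_samples, n_bands))   # target sound

# Stack a short window of time lags so each prediction can draw on the
# brain activity just before the current moment, not only the moment itself.
n_lags = 10
lagged = np.hstack([np.roll(neural, lag, axis=0) for lag in range(n_lags)])
lagged, spectrogram = lagged[n_lags:], spectrogram[n_lags:]

# Fit the decoding model on the first part of the data, test on the rest.
split = int(0.8 * len(lagged))
model = Ridge(alpha=1.0).fit(lagged[:split], spectrogram[:split])
reconstructed = model.predict(lagged[split:])

# Correlation between the reconstructed and actual sound gauges accuracy.
# With random data it will hover near zero; real recordings carry the
# structure that makes reconstruction possible.
for band in range(3):
    r = np.corrcoef(reconstructed[:, band], spectrogram[split:, band])[0, 1]
    print(f"band {band}: r = {r:.2f}")
```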

"This research is a major step toward understanding what features are represented in the human brain," said Robert T. Knight, a professor of psychology and neuroscience at UC Berkeley and a co-author of Pasley's report.

The thought-controlled artificial limbs developed for war veterans came only after years of highly complex research, Pasley and his colleagues pointed out.

"But that work, while not easy, is relatively simple compared to reconstructing language," Knight said. "This experiment takes that earlier work to a whole new level."

Colleagues at the University of Maryland and Johns Hopkins University also contributed to the research.