A system that turns brain waves into FM radio signals and decodes them as sound is the first totally wireless brain-computer interface.

For now, 26-year-old Erik Ramsey, left almost entirely paralyzed by a horrific car accident 10 years ago, can only express vowel sounds with the system. That’s less than can be accomplished with wired brain-computer interfaces. But it’s still a promising step.

“All the groups working on BCIs are working toward wireless solutions. They are very superior,” said Frank Guenther, a Boston University cognitive scientist who helped develop Ramsey’s system.

In the last decade, brain-computer interfaces, or BCIs, have made the jump from speculation to preliminary medical reality. Since Wired reported on quadriplegic BCI pioneer Matthew Nagle four years ago (“He’s playing Pong with his thoughts alone“), the interfaces have been used to steer wheelchairs, send text messages and even tweet. They’re so advanced that some researchers now worry about BCI ethics — what happens when healthy people get them? And they’re concerned about the threat posed by hackers.

But as amazing as these early BCIs are, they’re far from street-ready. Systems based on translating electrical signals captured by electrodes on patients’ scalps are notoriously slow, capable of producing about one word a minute. If researchers put electrodes directly into patients’ brains, the results are better — but that raises the possibility of dangerous infection. And from a purely practical point of view, wires just get in the way.

The implant system tested by Ramsey, as described in a paper published Wednesday in Public Library of Science ONE, was originally developed by Philip Kennedy, founder of Neural Signals, a company that specializes in BCIs. Several electrodes are implanted in Ramsey’s cerebral cortex. Beneath the skin of his skull is an amplifier that gathers the electrodes’ signals, and an FM transmitter that sends them to a nearby computer.

A neural model constructed by Guenther maps Ramsey’s brain activity to corresponding mouth and jaw movements. Another program decodes the signals and synthesizes them as a tinny but human-like voice.
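The paper’s actual decoder is far more sophisticated, but the basic idea — turning a neural feature vector into formant frequencies (the resonances that distinguish one vowel from another) and then into audio — can be sketched as follows. Every variable name, the linear mapping, and the weights here are illustrative assumptions, not the published model:

```python
import numpy as np

def decode_to_formants(neural_features, weights, bias):
    """Linear map from electrode firing-rate features to two formant
    frequencies (F1, F2) in Hz -- a toy stand-in for the real decoder."""
    return weights @ neural_features + bias

def synthesize_vowel(f1, f2, duration=0.1, sample_rate=8000):
    """Crude vowel synthesis: sum of sinusoids at the formant frequencies.
    (A real synthesizer filters a glottal source instead.)"""
    t = np.arange(int(duration * sample_rate)) / sample_rate
    return np.sin(2 * np.pi * f1 * t) + 0.5 * np.sin(2 * np.pi * f2 * t)

# Hypothetical firing rates from three electrodes, plus made-up weights.
rates = np.array([42.0, 17.5, 30.2])
W = np.array([[5.0, 2.0, 1.0],
              [10.0, -3.0, 4.0]])
b = np.array([300.0, 900.0])

f1, f2 = decode_to_formants(rates, W, b)
audio = synthesize_vowel(f1, f2)  # 100 ms of audio at 8 kHz
```

In the real system this loop runs continuously, so Ramsey hears the synthesized vowel almost immediately and can adjust his “speech” on the fly, much as speakers adjust to hearing their own voice.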

“The system produces the sound output in about 50 milliseconds. That’s the time it takes for sound output to come from a motor cortex command in a normal individual,” said Guenther.

The three wires in Ramsey’s brain are only sufficient for making vowel sounds, said Guenther. But the researchers plan to add more electrodes, perhaps as many as 32. That would be more difficult to control, but would also allow Ramsey’s thoughts to better mimic natural tongue and jaw movements, ultimately letting him form consonants as well.

For now, the computer that translates Ramsey’s mental broadcasts is still in a laboratory. “But our goal is to have him transmit directly to a laptop,” said Guenther.

Image: A schematic at left and CT scans at right of the wireless brain-computer interface. PLoS ONE.

Video: Visual and audio feedback is presented to Erik Ramsey. PLoS ONE.

Citation: “A Wireless Brain-Machine Interface for Real-Time Speech Synthesis.” By Frank H. Guenther, Jonathan S. Brumberg, E. Joseph Wright, Alfonso Nieto-Castanon, Jason A. Tourville, Mikhail Panko, Robert Law, Steven A. Siebert, Jess L. Bartels, Dinal S. Andreasen, Princewill Ehirim, Hui Mao, Philip R. Kennedy. Public Library of Science ONE, December 9, 2009.

Brandon Keim’s Twitter stream and reportorial outtakes; Wired Science on Twitter. Brandon is currently working on a book about ecosystem and planetary tipping points.