You don’t have to think about it: when you speak, your brain sends signals to your lips, tongue, jaw, and larynx, which work together to produce the intended sounds.

Now scientists in San Francisco say they’ve tapped these brain signals to create a device capable of spitting out complete phrases, like “Don’t do Charlie’s dirty dishes” and “Critical equipment needs proper maintenance.”

The research is a step toward a system that would be able to help severely paralyzed people speak—and, maybe one day, consumer gadgets that let anyone send a text straight from the brain.

A team led by neurosurgeon Edward Chang at the University of California, San Francisco, recorded from the brains of five people with epilepsy, who were already undergoing brain surgery, as they spoke from a list of 100 phrases.

When Chang’s team subsequently fed the signals to a computer model of the human vocal system, it generated synthesized speech that was about half intelligible.
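The pipeline described here has two stages: decode the neural signals into intended vocal-tract movements, then turn those movements into sound. The toy sketch below illustrates that structure only; the shapes, variable names, and linear mappings are all hypothetical stand-ins (the study used learned neural networks, not these random matrices).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, purely for illustration:
#   stage 1: neural activity -> articulator movements (lips, tongue, jaw, larynx)
#   stage 2: articulator movements -> acoustic features for a speech synthesizer
T, N_ELECTRODES, N_ARTICULATORS, N_ACOUSTIC = 200, 16, 6, 8

neural = rng.standard_normal((T, N_ELECTRODES))  # recorded brain signals over time

# In the real system these mappings are trained models; plain linear
# maps stand in for them here.
W_articulate = rng.standard_normal((N_ELECTRODES, N_ARTICULATORS))
W_acoustic = rng.standard_normal((N_ARTICULATORS, N_ACOUSTIC))

movements = neural @ W_articulate   # stage 1: decode movements, not words
acoustics = movements @ W_acoustic  # stage 2: synthesize sound features

print(movements.shape, acoustics.shape)
```

The key design point, echoed in Chang's quote below, is that the intermediate representation is movement, not language: the decoder never sees words, only the motor commands that would have produced them.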

A sample of speech generated by decoding a patient's brain signals.

The effort doesn’t pick up on abstract thought, but instead listens for nerves firing as they tell your vocal organs to move. Previously, researchers have used such motor signals from other parts of the brain to control robotic arms.

“We are tapping into the parts of the brain that control these movements—we are trying to decode movements, rather than speech directly,” says Chang.

In Chang’s experiment, the signals were recorded using a flexible pad of electrodes called an electrocorticography array, or ECoG, that rests on the brain’s surface.

To test how well the signals could be used to re-create what the patients had said, the researchers played the synthesized results to people hired on Mechanical Turk, a crowdsourcing site, who tried to transcribe them using a pool of possible words. Those listeners could understand about 50 to 70% of the words, on average.
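A word-level accuracy score of the kind reported above can be sketched as follows. This is a deliberately crude stand-in for the study's scoring, using a hypothetical helper and made-up transcripts, not the researchers' actual metric or data.

```python
def word_accuracy(reference: str, transcript: str) -> float:
    """Fraction of reference words the listener transcribed correctly,
    compared position by position (a simplification of real
    intelligibility scoring)."""
    ref = reference.lower().split()
    hyp = transcript.lower().split()
    correct = sum(r == h for r, h in zip(ref, hyp))
    return correct / len(ref)

# Hypothetical example phrase and mishearing, not real study transcripts:
acc = word_accuracy("critical equipment needs proper maintenance",
                    "critical equipment needs proper attendance")
print(round(acc, 2))  # 0.8
```

Giving listeners a fixed pool of candidate words, as the researchers did, constrains the guessing space and makes scores like these easier to achieve than open-ended transcription would be.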

“This is probably the best work being done in BCI [brain-computer interfaces] right now,” says Andrew Schwartz, a researcher on such technologies at the University of Pittsburgh. He says if researchers were to put probes within the brain tissue, not just overlying the brain, the accuracy could be far greater.

Previous efforts have sought to reconstruct words or word sounds from brain signals. In January of this year, for example, researchers at Columbia University measured signals in the auditory part of the brain as subjects heard someone else speak the numbers 0 to 9. They were then able to determine what number had been heard.

Brain-computer interfaces are not yet advanced enough, nor simple enough, to assist people who are paralyzed, although that is an objective of scientists.

Last year, another researcher at UCSF began recruiting people with ALS, or Lou Gehrig’s disease, to receive ECoG implants. According to a description of the trial, that study will attempt to synthesize speech and will also ask patients to control an exoskeleton supporting their arms.