Facebook has released an update on its ambitious plans for a brain-reading computer interface, thanks to a team of Facebook Reality Labs-backed scientists at the University of California, San Francisco. The UCSF researchers just published the results of an experiment in decoding people’s speech using implanted electrodes. Their work demonstrates a method of quickly “reading” whole words and phrases from the brain — getting Facebook slightly closer to its dream of a noninvasive thought-typing system.

People can already type with brain-computer interfaces, but those systems often ask them to spell out individual words with a virtual keyboard. In this experiment, which was published in Nature Communications today, subjects listened to multiple-choice questions and spoke the answers out loud. An electrode array recorded activity in parts of the brain associated with understanding and producing speech, looking for patterns that matched with specific words and phrases in real time.

You won’t be thought-typing Facebook status updates any time soon

If participants heard someone ask “Which musical instrument do you like listening to?” for example, they’d respond with one of several options like “violin” or “drums” while their brain activity was recorded. The system would guess when they were hearing a question and when they were answering it, then guess the content of both speech events. The predictions were shaped by prior context — so once the system determined which question subjects were hearing, it would narrow the set of likely answers. The system could produce results with 61 to 76 percent accuracy, compared with the 7 to 20 percent accuracy expected by chance.

“Here we show the value of decoding both sides of a conversation — both the questions someone hears and what they say in response,” said lead author and UCSF neurosurgery professor Edward Chang, in a statement. But Chang noted that this system only recognizes a very limited set of words so far; participants were only asked nine questions with 24 total answer options. The study’s subjects — who were being prepped for epilepsy surgery — used highly invasive implants. And they were speaking answers aloud, not simply thinking them.

That’s very different from the system Facebook described in 2017: a noninvasive, mass-market cap that lets people type more than 100 words per minute without manual text entry or speech-to-text transcription. Facebook also highlighted a Reality Labs-backed headset that reads brain activity with near-infrared light, which could make a noninvasive interface more feasible.

As Facebook says, virtual and augmented reality glasses could benefit from brain reading even in a very limited capacity. “Being able to decode even just a handful of imagined words — like ‘select’ or ‘delete’ — would provide entirely new ways of interacting with today’s VR systems and tomorrow’s AR glasses,” the Reality Labs post reads. Facebook isn’t the only big company working on brain-computer interfaces: Elon Musk’s Neuralink recently revealed new work on a threadlike brain-reading implant.

Even if we never see this brain-reading tech in Facebook products (something that would probably cause just a little concern), researchers could use it to improve the lives of people who can’t speak due to paralysis or other issues. “Currently, patients with speech loss due to paralysis are limited to spelling words out very slowly,” said Chang. “But in many cases, information needed to produce fluent speech is still there in their brains. We just need the technology to allow them to express it.”