Researchers at the University of California have developed a computer system with advanced artificial intelligence capabilities that records brain signals produced during speech and converts them into intelligible words. Electrodes placed on the cerebral cortex were used to translate brainwaves into words spoken aloud by the computer. This is a remarkable and distinctive development, and in the near future it may help people who have lost the ability to speak.

“The brain translates ideas and what you want to say into muscle movements carried out by the vocal tract, and this is what we are trying to decode,” says Edward Chang of the University of California, San Francisco.

Using electrodes to decode intentions

The researchers devised a two-step process to decode those intentions: a group of electrodes surgically placed on a part of the brain that controls movement, and a computer simulation of the vocal tract, used to reproduce audible speech.
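The two-step structure described above can be sketched as two learned mappings chained together: brain signals to articulator movements, then movements to acoustic features. The sketch below is a toy illustration on synthetic data using simple least-squares fits, not the authors' actual model; all array sizes and names are assumptions.

```python
import numpy as np

# Toy sketch of a two-stage decoding pipeline (illustrative, not the
# study's model). Stage 1 maps neural signals to articulator movements;
# stage 2 maps those movements to acoustic features. Both stages are
# linear maps fitted by least squares on synthetic data.

rng = np.random.default_rng(0)
n_samples, n_electrodes, n_articulators, n_acoustic = 200, 16, 6, 8

# Synthetic stand-ins for recorded training data.
neural = rng.normal(size=(n_samples, n_electrodes))
kinematics = neural @ rng.normal(size=(n_electrodes, n_articulators))
acoustics = kinematics @ rng.normal(size=(n_articulators, n_acoustic))

# Stage 1: neural signals -> articulator kinematics.
A_hat, *_ = np.linalg.lstsq(neural, kinematics, rcond=None)
# Stage 2: articulator kinematics -> acoustic features.
B_hat, *_ = np.linalg.lstsq(kinematics, acoustics, rcond=None)

def decode(neural_signals):
    """Run both stages: brain signals -> movements -> sound features."""
    return (neural_signals @ A_hat) @ B_hat

# On this noiseless synthetic data the chained fit reconstructs the
# acoustic features almost exactly.
error = np.max(np.abs(decode(neural) - acoustics))
```

The point of the two-stage design is that each stage learns a simpler, more constrained mapping than a direct brain-to-sound model would.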

In their study, they worked with five participants who had electrodes on the surface of the motor cortex as part of their epilepsy treatment. These participants were asked to read 101 sentences aloud, which included words and phrases covering all the sounds of English, while the team recorded the signals sent from the motor cortex as they spoke.

More than 100 muscles are involved in producing speech

More than 100 muscles are used to produce speech, controlled by several groups of neurons operating simultaneously in an extremely complex mechanism, so it is not simple to map signals from a single electrode to a single muscle in order to interpret the brain's commands to the mouth. The team therefore trained an algorithm to reproduce the sound of a spoken word from the set of signals sent to the lips, jaw and tongue.

Creating audio files from the signals

Once the audio files had been created from the signals, the team asked hundreds of English speakers to listen to the sentences produced by the computer system and identify the words they understood.

Listeners transcribed 43% of the trials perfectly when they had 25 words to choose from, and 21% perfectly when they had 50 options. These results gradually improved as the artificial neural network was given more training and more signals.
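The evaluation described above is a closed-vocabulary listening test: a trial counts as perfect only if the listener identifies every word in the sentence, and larger word pools make each choice harder. The sketch below simulates that scoring scheme; the per-word accuracy values (0.9 and 0.8) and trial counts are assumed for illustration, not taken from the study.

```python
import random

random.seed(1)

def perfect_trial_rate(per_word_accuracy, words_per_sentence=5, n_trials=2000):
    """Fraction of simulated trials where the listener gets every word right.

    Each word is identified correctly with probability per_word_accuracy;
    a trial is 'perfect' only if all words in the sentence are correct.
    """
    perfect = sum(
        all(random.random() < per_word_accuracy
            for _ in range(words_per_sentence))
        for _ in range(n_trials)
    )
    return perfect / n_trials

# A larger candidate pool makes each per-word choice harder, so the
# assumed per-word accuracy drops, and the share of perfectly
# transcribed sentences drops with it.
rate_small_pool = perfect_trial_rate(0.9)  # e.g. 25-word pool
rate_large_pool = perfect_trial_rate(0.8)  # e.g. 50-word pool
```

This also shows why the reported perfect-transcription rates are far below the per-word accuracy: errors compound across the words of a sentence.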

Relying on control signals alone

The main advantage of this system over previous ones is that it depends only on control signals from the motor areas of the brain, which continue to send signals even in a paralyzed person. The device could therefore help people who were previously able to speak but lost that ability through surgery or through movement disorders; in such cases, people typically lose control of the muscles involved in speech.