What's the science?

What happens in the brain when we speak? The ventral sensorimotor cortex (vSMC) encodes the movements of the muscles of our lips, jaw, tongue, and larynx (the ‘articulators’). vSMC activity during the production of isolated syllables has been well studied. However, how each articulator moves in concert with the others, in the complex patterns that make up natural speech, has not. This week in Neuron, Chartier, Anumanchipalli, and colleagues recorded brain activity while participants spoke full sentences, to decipher how articulator movements work together and to understand the corresponding patterns of vSMC activity.

How did they do it?

Five epilepsy patients who had electrodes placed on the surface of their brain (electrocorticography; ECoG) as part of their clinical treatment performed a task in which they spoke a wide variety of sentences aloud. To estimate how each participant’s vocal tract articulators were likely moving during the production of different sounds, the authors used a technique called acoustic-to-articulatory inversion (AAI), in which a statistical model infers the likely movements of the vocal tract from the sounds produced. The authors took care to improve on past AAI models to achieve high predictive accuracy: they trained a deep learning model on a publicly available dataset in which participants’ vocal tract movements were directly monitored, then applied the resulting AAI model to their own participants (whose vocal tract movements were not monitored, since such monitoring often interferes with simultaneous recording from electrodes on the brain). They then used the inferred articulator movements in their five participants to predict the activity of the electrodes on the surface of the brain.
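The final step above, relating inferred articulator movements to electrode activity, is an encoding-model analysis. As a minimal sketch of the idea (with synthetic stand-in data and a simple ridge regression, not the authors' actual pipeline or features):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-ins: AAI-inferred kinematic features for a handful of
# articulators (lips, jaw, tongue, larynx), sampled over time.
n_samples, n_articulators = 2000, 6
kinematics = rng.standard_normal((n_samples, n_articulators))

# Simulated activity of one vSMC electrode that depends on a weighted
# combination of articulator movements, plus noise.
true_weights = rng.standard_normal(n_articulators)
electrode = kinematics @ true_weights + 0.1 * rng.standard_normal(n_samples)

# Linear encoding model: predict electrode activity from the kinematics,
# then score it on held-out time points.
model = Ridge(alpha=1.0).fit(kinematics[:1500], electrode[:1500])
r = np.corrcoef(model.predict(kinematics[1500:]), electrode[1500:])[0, 1]
```

An electrode whose held-out correlation `r` is high is well described by articulator movement, which is the logic behind the finding that vSMC electrodes, but not electrodes elsewhere, were predicted by the movement-trajectory model.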

What did they find?

Using the AAI model to infer articulator movements in the five participants, the authors found that vSMC activity during speech was significantly predicted by a model of articulator movement trajectories, while activity in other cortical regions was not. Next, they looked at patterns of articulator co-activation. Patterns of multiple articulators working together described the activity of vSMC electrodes better than the movement of any single articulator, indicating that coordinated movements of multiple articulators are closely related to brain activity. It is commonly thought that one body part corresponds to one location in the sensorimotor cortex, but this finding suggests that a single location can represent multiple coordinated movements. The electrodes also fell into four groups according to the patterns of articulator movement they encoded; each cluster represented a different pattern of articulator muscle activation, and each coordinated pattern produces a different constriction of the vocal tract. These groups of electrodes were also spatially clustered over the vSMC, indicating that different parts of the vSMC are responsible for different vocal tract constrictions. Finally, they found that the modeled movement patterns corresponding to each cluster of electrodes appeared to represent ‘out and back’ motions: the articulators move to a particular position during speech to shape the vocal tract in a certain way, and then return directly to their starting position.
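The step of grouping electrodes by the articulator patterns they encode can be sketched as clustering the electrodes' fitted encoding weights. The sketch below uses k-means on synthetic weights as an illustrative stand-in; the number of clusters (four) matches the finding above, but the data, features, and clustering method are assumptions, not the authors' exact analysis:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical encoding weights: each row is one electrode's fitted
# weights over 6 articulator features (all values synthetic).
centers = rng.standard_normal((4, 6)) * 3        # four co-activation patterns
labels_true = np.repeat(np.arange(4), 10)        # 40 electrodes, 10 per group
weights = centers[labels_true] + 0.2 * rng.standard_normal((40, 6))

# Group electrodes by the articulator pattern they encode.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(weights)
```

Electrodes in the same cluster share a pattern of articulator co-activation; checking where each cluster's electrodes sit on the cortex is what revealed that the four groups also occupy distinct regions of the vSMC.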