MIT researchers have created a wearable device called AlterEgo that can recognize nonverbal prompts, essentially “reading your mind.” The system consists of a computer and a wearable that loops around the user’s ear, follows the jawline, and attaches underneath the mouth. Electrodes on the device pick up neuromuscular signals in the jaw and face that are triggered by internal verbalizations (saying words in your head) but are invisible to the human eye. These signals are then fed to a machine-learning system that analyzes the data and associates specific signals with words.
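The pipeline described above — multi-electrode signals in, predicted words out — can be sketched in miniature. Everything here is hypothetical: synthetic feature vectors stand in for the real electrode recordings, and a simple nearest-centroid classifier stands in for the trained model (the actual AlterEgo system uses a neural network).

```python
import numpy as np

rng = np.random.default_rng(0)
WORDS = ["up", "down", "left", "right"]  # a tiny vocabulary, like the 20-word demos
N_ELECTRODES = 7                         # sensor count on the final wearable

# Hypothetical: each word produces a characteristic multi-electrode pattern.
prototypes = {w: rng.normal(size=N_ELECTRODES) for w in WORDS}

def make_sample(word, noise=0.3):
    """Simulate one noisy multi-electrode reading for a subvocalized word."""
    return prototypes[word] + rng.normal(scale=noise, size=N_ELECTRODES)

# "Train": average several noisy readings per word into a centroid.
centroids = {w: np.mean([make_sample(w) for _ in range(20)], axis=0) for w in WORDS}

def classify(signal):
    """Map an electrode reading to the word whose centroid is nearest."""
    return min(centroids, key=lambda w: np.linalg.norm(signal - centroids[w]))

# Evaluate on fresh simulated readings.
correct = sum(classify(make_sample(w)) == w for w in WORDS for _ in range(25))
accuracy = correct / (len(WORDS) * 25)
```

The point of the sketch is the shape of the problem, not the method: with a small, fixed vocabulary, even a crude signal-matching scheme can map repeatable muscle-signal patterns to words.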

“Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?” says Arnav Kapur, a graduate student at the MIT Media Lab, in a statement.

Additionally, the system can talk back to the user through a pair of bone-conduction headphones, which transmit vibrations through the bones of the face to the inner ear. Because they bypass the ear canal, the headphones can convey information without interrupting the user’s conversation or blocking their hearing.

The researchers tested the device on different tasks, including games of chess and basic multiplication and addition problems, each using a limited vocabulary of about 20 words. While the device is quite clever, it’s still limited: the researchers report an average transcription accuracy of about 92 percent on those 20-word vocabularies. They’re hopeful it will scale up with time. “We’re in the middle of collecting data, and the results look nice,” Kapur says. “I think we’ll achieve full conversation some day.” In another demonstration, shown in the video, a user silently selects a movie to watch by controlling a TV’s on-screen menu.

To create the device, the researchers first had to find the locations on the face with the most reliable neuromuscular signals. They asked subjects “to subvocalize the same series of words four times” while 16 electrodes at different facial locations recorded the signals. Code written to analyze the data showed that signals from seven particular locations could consistently distinguish the subvocalized words. The resulting wearable uses sensors at those locations, though the researchers are working on a version that gets by with only four sensors along the jaw.
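The selection step above — record each word several times on 16 candidate electrodes, then keep the seven most informative locations — can be sketched as follows. The data is synthetic and the scoring rule (a simple Fisher-style ratio of between-word to within-word variance) is an assumption for illustration, not the paper’s actual method.

```python
import numpy as np

rng = np.random.default_rng(1)
N_ELECTRODES, N_WORDS, N_REPEATS = 16, 10, 4  # subjects repeated each word four times

# Simulate recordings where only some electrode sites carry word-dependent signal.
informative = rng.choice(N_ELECTRODES, size=7, replace=False)
word_effect = np.zeros((N_WORDS, N_ELECTRODES))
word_effect[:, informative] = rng.normal(scale=2.0, size=(N_WORDS, len(informative)))
# recordings[word, repeat, electrode]: word-specific pattern plus per-trial noise
recordings = word_effect[:, None, :] + rng.normal(
    scale=0.5, size=(N_WORDS, N_REPEATS, N_ELECTRODES))

def fisher_scores(data):
    """Score each electrode: variance across words vs. variance across repeats."""
    word_means = data.mean(axis=1)                 # (words, electrodes)
    between = word_means.var(axis=0)               # how much the word changes the signal
    within = data.var(axis=1).mean(axis=0) + 1e-9  # trial-to-trial noise
    return between / within

scores = fisher_scores(recordings)
top7 = set(np.argsort(scores)[-7:])  # keep the seven most discriminative locations
```

An electrode whose reading changes a lot between words but stays stable across repeats of the same word scores high; a site that only picks up noise scores low, which is one plausible way to narrow 16 candidate locations down to seven.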

The researchers hope future applications of the device could range from helping people with speech disabilities to enabling communication in high-noise environments, such as the flight deck of an aircraft carrier.