As a researcher who “dabbles across disciplines” (he’s tried and failed to write a short website bio because he doesn’t want to be “put in a box”), Kapur began to think of the human body not as a limitation but as a conduit. He saw the brain as the power source driving a complex electrical neural network that controls our thoughts and movements. When the brain wants to, say, move a finger, it sends an electrical impulse down the arm to the correct digit and the muscle responds accordingly. Sensors can pick up those electrical signals. One just needs to know where and how to tap in.

Kapur knew that when we read to ourselves, our inner articulatory muscles move, subconsciously forming the words we’re seeing. “When one speaks aloud, the brain sends electrical instructions to more than 100 muscles in your speech system,” he explains. Internal vocalization — what we do when we read silently to ourselves — is a highly attenuated version of this process, wherein only the inner speech muscles are neurologically triggered. We developed this habit when we were taught to read — sounding out letters, then speaking each word aloud. It’s a habit that’s also a liability — speed-reading courses often focus on eliminating word formation as we scan a page of text.

First observed in the mid-19th century, this neurological signalling is the only known physical expression of a mental activity.

Kapur wondered whether sensors could detect the physical manifestations of this internal conversation — tiny electrical charges firing from the brain — on the skin of the face, even if the muscles involved were located deep in the mouth and throat. Even if they weren’t exactly moving.

The original design of AlterEgo’s armature pinned a grid of 30 sensors to a subject’s face and jaw so that the array could pick up neuromuscular activity when the wearer used his or her inner voice to communicate. Proprietary software was calibrated to analyze the signals and turn them into distinct words.
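The details of AlterEgo’s proprietary software aren’t public, but the basic idea — turning windowed electrical signals into word labels — can be sketched with a toy template matcher. Everything below (the RMS windowing, the nearest-template rule, the sample numbers) is an illustrative assumption, not Kapur’s actual pipeline:

```python
# Hypothetical sketch: label a short myoelectric-style signal as "yes" or "no"
# by comparing per-window RMS energy against stored per-word templates.
# Window size, templates, and the distance rule are all illustrative choices.
import math

def rms_features(signal, window=4):
    """Split a 1-D signal into fixed windows; return each window's RMS energy."""
    feats = []
    for i in range(0, len(signal) - window + 1, window):
        chunk = signal[i:i + window]
        feats.append(math.sqrt(sum(x * x for x in chunk) / window))
    return feats

def classify(signal, templates, window=4):
    """Return the label of the template nearest (Euclidean) to the signal's features."""
    feats = rms_features(signal, window)
    best_label, best_dist = None, float("inf")
    for label, tmpl in templates.items():
        dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(feats, tmpl)))
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Toy templates: "yes" shows an energy burst early in the signal, "no" late.
templates = {
    "yes": rms_features([0.9, -0.8, 0.7, -0.9, 0.1, -0.1, 0.1, -0.1]),
    "no":  rms_features([0.1, -0.1, 0.1, -0.1, 0.9, -0.8, 0.7, -0.9]),
}
print(classify([0.8, -0.9, 0.8, -0.8, 0.0, -0.2, 0.1, -0.1], templates))  # prints "yes"
```

A real silent-speech system would use far richer features and a trained model, but the shape of the problem — weak electrical traces in, discrete words out — is the same.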

There was just one problem: In the beginning, AlterEgo’s sensors detected nothing.

Kapur had built the hardware and the software and hoped for the best. But the myoelectric signals from this silent speech were very weak. It would have been easy to rethink the whole thing at that point. “But,” he says, “we wanted to capture interactions as close to thinking in your head as possible.”

Kapur moved the sensors to different regions of the face, increased their sensitivity, and reworked the software. Still nothing.

One night, Kapur and his brother were testing the device in their Cambridge apartment. Kapur was wearing the device and Shreyas was monitoring the computer screen. They’d rigged the device to track signals in real time so that Shreyas could note the exact moment it picked up something, if anything.

It was getting late. Kapur had been speaking silently into the device for a couple of hours — having programmed it to understand just two words: yes and no — without any meaningful results.

Then Shreyas thought he saw something. A blip on the screen.

“We didn’t believe it,” Kapur says. He turned his back on his brother and repeated the action. “We kept seeing one bump in the signal and thought it was some artifact in the wires. We were really sure this was some sort of noise in the system.”

Were they actually seeing something?

After testing and retesting for the next hour, Kapur was convinced that they’d made contact.

“That was a crazy moment,” he says. They celebrated with a pizza the next day.