But Eagleman is quick to point out that the vest isn't just translating sounds into a code—the patterns the wearer feels aren't a "language" to be interpreted like braille. In fact, the device doesn't use a specific language at all; it responds to all ambient noises and sounds.

"The pattern of vibrations that you're feeling [while wearing the VEST] represent the frequencies that are present in the sound," he said. "What that means is what you're feeling is not code for a letter or a word—it's not like morse code—but you're actually feeling a representation of the sound."

It's an approach that wearable technology hasn't taken yet. The Apple Watch may assign different patterns of vibrations to mean different things—one pattern for an incoming text, for example, and another for an incoming tweet—but that's still assigning meaning to a feeling. The VEST, Eagleman emphasized, doesn't work that way.

In order to create something that can transform all sounds into vibrations, Eagleman needed plenty of hardware. To help shrink the parts built into the vest, Eagleman enlisted six electrical and computer engineering students from Rice University to work with his lab. Eagleman, the team, and Scott Novich, a doctoral student working in Eagleman's lab, created prototypes, with the latest to be unveiled this week.

The task hasn't been easy. "The idea itself was very simple: It's taking sound with the phone, doing some calculations to it, and sending it over to the vest, which sends vibrations to the body," Eric Kang, one of the Rice student team members, told me. "But when we tried to build it, all we encountered were issues after issues."

Space issues, mostly. The VEST had to remain as compact and light as possible while comfortably carrying the 32 to 48 transducers and motors required to deliver the vibrations, along with the circuitry that receives the signals from the app.
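Kang's description amounts to a short signal chain: capture audio on the phone, analyze its frequency content, and drive each motor with the energy in one band of the spectrum. The team's actual code isn't published, so the following is only a minimal Python sketch of that idea; the FFT-based analysis, the 32-motor count, and the sample rate are all assumptions for illustration.

```python
import numpy as np

NUM_MOTORS = 32     # the article cites 32 to 48 transducers; 32 is assumed here
SAMPLE_RATE = 44100  # assumed microphone sample rate

def sound_to_vibrations(samples, num_motors=NUM_MOTORS):
    """Map one frame of audio to per-motor vibration intensities.

    Each motor is driven by the energy in one band of the sound's
    frequency spectrum, so the wearer feels a representation of the
    sound itself rather than a symbolic code.
    """
    spectrum = np.abs(np.fft.rfft(samples))       # magnitude spectrum
    bands = np.array_split(spectrum, num_motors)  # one frequency band per motor
    energy = np.array([band.mean() for band in bands])
    peak = energy.max()
    if peak == 0:
        return np.zeros(num_motors)               # silence: all motors off
    return energy / peak                          # normalize to 0..1 drive levels

# Example: a pure 440 Hz tone concentrates energy in a low-frequency band,
# so only the motors assigned to low frequencies would vibrate strongly.
t = np.arange(1024) / SAMPLE_RATE
frame = np.sin(2 * np.pi * 440 * t)
intensities = sound_to_vibrations(frame)
```

Because the mapping is a direct spectral representation rather than a lookup table of symbols, any sound produces some pattern, which matches Eagleman's point that the vest isn't encoding letters or words.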

So far, it all works, Eagleman said. The team tested a prototype on a 37-year-old deaf man who, after five days of wearing the VEST, understood words spoken aloud to him by feeling the vibrations alone because, as Eagleman put it, "his brain is starting to unlock what the data mean."

That "unlocking" phenomenon, like adding a new sense, is hard to explain. How do a series of vibrations that supposedly reflect sound eventually have meaning when there's no language assigned to them? How does the brain on the first day have no idea what a couple of vibrations on, say, the lower back means, but by the fifth day, know that they form a specific word?

"My view is that the brain is a general-purpose computational device," Eagleman told me. "You could take any kind of data stream and the brain will figure it out. I consider it the biggest miracle no one's heard of."

And that miracle could have more applications than simply allowing people to "feel" sound. John Yan, another Rice student team member, says gaming could be a lucrative field for haptic devices. "Controllers rumble," he points out. "Virtual reality could be the most immediately commercializable field." Eagleman, meanwhile, thinks the VEST can unlock robotics by helping humans feel what robots feel. Pilots controlling a quadcopter or drone could "sense" the robot's movements from the ground. Astronauts could "feel" the health of the International Space Station through a series of vibrations reporting on its status. People could "see" 360 degrees, not by using their eyes, but by using Bluetooth or Wi-Fi to pick up on forms of feedback that humans can't sense yet.

Sure, all of those potential applications have a tinge of science fiction to them, but Eagleman himself has donned the vest and "felt" what it can do. At the end of his TED talk, he heard the audience's applause. But with the VEST on, he felt the tingling of the vibrations moving across his back, too.
