This prototype device represents sounds as distinct vibrations that can be felt on the skin. (Image: Facebook)

Ever wish you could just “read” incoming smartphone messages by feeling them on your arm? Well, Facebook researchers are creating a device that could do just that, by translating words into something like real-life pokes.

The researchers built a prototype of a cast-like wearable device filled with actuators that, when triggered, cause vibrations on the arm in patterns that match up with certain sounds. In a study, researchers were able to teach people to recognize four different phonemes—the individual sounds that make up words in a language—by feel in three minutes. Over more than an hour and a half of training, study participants learned to recognize 100 words, according to Ali Israr, the technical lead for the project.

The work will be presented later this month at the annual CHI conference on human-computer interaction in Montreal.

The project takes cues from Braille and Tadoma (a communication method for people who are both deaf and blind, which involves feeling a speaker’s lips, face, and throat). The idea could eventually lead to, say, a smart watch that delivers specific messages via vibrations (rather than just the occasional buzzes we get today), letting you know what’s happening without interrupting conversations or other activities. It could also help people with hearing and vision impairments get information more easily.

Facebook gave a sneak peek at the project, which comes from its secretive skunkworks hardware division, Building 8, at its F8 developer conference last April. At that time, it had been in the works for six months, and Facebook said it hoped people would eventually be able to use it to distinguish about 100 words.

The technology appears to have improved since then. Israr, who also coauthored the study, said in an e-mail that with the latest research, people were able to learn 100 words with 90 percent accuracy after 100 minutes of training, and some learned 500 words after another 100 minutes.

A video gives an idea of how this works. The prototype is connected to a computer that lets the wearer select different phonemes and sample words, which can then be felt as vibrations on the arm.

Different sounds are represented by sensations from different actuators on the top and bottom of the arm. After study subjects learned to recognize several words, researchers tested them. A question like “What time is the meeting?” was posed on the computer screen, and users had to type in an answer that was provided in the form of vibrations.
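The general scheme described above—each phoneme mapped to its own pattern of actuator pulses, with words spelled out as sequences of those patterns—can be sketched in a few lines of Python. Everything specific here (the phoneme labels, actuator indices, and pulse durations) is invented for illustration; Facebook has not published its actual encoding.

```python
# Hypothetical phoneme-to-vibration encoding, for illustration only.
# Each phoneme maps to an ordered list of (actuator_index, duration_ms)
# pulses; actuators might sit on the top and bottom of the forearm.
PHONEME_PATTERNS = {
    "M":  [(0, 120), (1, 120)],  # two pulses on the top of the arm
    "EE": [(4, 200)],            # one longer pulse on the underside
    "T":  [(2, 80)],             # one short pulse
}

def word_to_vibrations(phonemes):
    """Flatten a phoneme sequence into the ordered actuator pulses
    a wearer would feel, one phoneme pattern after another."""
    pulses = []
    for p in phonemes:
        pulses.extend(PHONEME_PATTERNS[p])
    return pulses

# The word "meet" is roughly the phonemes M, EE, T:
print(word_to_vibrations(["M", "EE", "T"]))
# → [(0, 120), (1, 120), (4, 200), (2, 80)]
```

A real system would also need timing gaps between phonemes and words, and a much larger phoneme inventory, but the core idea is this kind of lookup-and-sequence mapping.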

The clumsiness of the prototype and the long training time required show how far we are from the smartwatch version of this gadget; it would need to get easier to learn and use, more accurate, and, of course, much smaller.

Lynette Jones, a senior research scientist at MIT and principal investigator for its Cutaneous Sensory Lab, thinks Facebook’s research looks promising, though she points out that skin doesn’t have the same kind of information-processing power as sense organs like the ears and eyes. Because of this, she thinks it’s going to be the kind of communication method you only want to use when you absolutely need it.

“People don’t want to be buzzed all the time,” she says.

And it would also have to get a lot faster to really be effective. Right now, Israr said, in addition to making the wearable more compact, the researchers are trying to speed up how quickly it can transmit words to the arm. Currently, it’s limited to just four to ten words per minute—fine for a short text but hardly fast enough for a more detailed message.