AI-enabled devices are now part of our everyday lives, and many researchers believe the attendant human-machine interfaces will evolve far beyond the keyboard or touch screen. Voice has emerged as the clear frontrunner in this race, but there are also studies focused on eye-tracking and even brain waves. Researcher Marc Teyssier meanwhile is thinking skin.

Why skin? Well, skin is a fundamental biological interface. We use our two square meters of skin to sense and interact with the world more than one might imagine. As we grow more intimate with our digital devices, maybe it’s only fair to share our skin with them?

A new paper from researchers at the University of Bristol, Télécom ParisTech and Sorbonne University introduces a multi-layer silicone “Skin-On” membrane that mimics the layers of human skin. The paper shows how a slab of the faux flesh can be applied to devices to enable a variety of novel user input gestures.

But why?

“Artificial skin has been widely studied in the field of Robotics but with a focus on safety, sensing or cosmetic aims,” explains lead author Teyssier in a University of Bristol press release. “This is the first research we are aware of that looks at exploiting realistic artificial skin as a new input method for augmenting devices.”

The artificial skin is made up of a textured surface layer, an electrode layer of conductive threads, and a hypodermis layer, allowing devices to ‘feel’ the pressure and location of the user’s grasp. It can also detect interactions such as tickling, scratching, even twisting and pinching, and provide tactile and kinesthetic feedback.
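The paper doesn’t publish its sensing code, but the basic idea of recovering a touch’s location and pressure from a grid of electrode readings can be sketched as a weighted centroid. Everything below (function name, grid shape, threshold) is an illustrative assumption, not from the paper:

```python
import numpy as np

def locate_touch(readings: np.ndarray, threshold: float = 0.1):
    """Estimate a touch's (row, col) centroid and total pressure from a
    2D grid of normalized electrode readings in the range 0..1.
    Illustrative sketch only -- not the authors' implementation."""
    # Ignore readings below the noise threshold.
    active = np.where(readings > threshold, readings, 0.0)
    total = active.sum()
    if total == 0:
        return None, 0.0  # no touch detected
    # Pressure-weighted centroid gives the touch location.
    rows, cols = np.indices(readings.shape)
    centroid = (float((rows * active).sum() / total),
                float((cols * active).sum() / total))
    return centroid, float(total)
```

A real system would also debounce readings over time and track multiple simultaneous contacts, but the centroid is the core of localizing a single grasp.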

Researchers created a Skin-On phone case, smart watch, and computer touch pad to demonstrate how touch gestures on the interface can enable computer mediated communication.

The researchers say the machine-learning-based hardware and software toolkit used for Skin-On can identify eight different touch gestures. It can also detect variations on a particular gesture, for example whether it’s a gentle pinch or a hard squeeze, and associate the gestures with emotions.

Researchers created a messaging app that allows users to send different emojis based on how they stroke their derma-glazed device. “The intensity of the touch controls the size of the emojis. A strong grip conveys anger while tickling the skin displays a laughing emoji and tapping creates a surprised emoji,” explains Teyssier.
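The gesture-to-emoji mapping Teyssier describes can be sketched as a lookup plus intensity-based scaling. The dictionary entries follow his quote; the function, sizes, and scaling are hypothetical:

```python
# Gesture-to-emoji pairs as described by Teyssier; the sizing
# logic below is an illustrative assumption.
GESTURE_EMOJI = {
    "strong grip": "😠",  # anger
    "tickle": "😂",       # laughter
    "tap": "😮",          # surprise
}

def emoji_for(gesture: str, intensity: float):
    """Return (emoji, display size in px), scaling size by touch
    intensity in 0..1, per 'the intensity of the touch controls
    the size of the emojis'."""
    base_size, max_extra = 24, 48  # illustrative pixel values
    emoji = GESTURE_EMOJI.get(gesture, "🙂")
    clamped = max(0.0, min(1.0, intensity))
    return emoji, base_size + round(max_extra * clamped)
```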

While further tests are needed to evaluate system robustness, the researchers are already exploring ways to make the skin even more realistic. This they hope to achieve by embedding hair and temperature features. Skin-On could be the first frontier tech hardware to produce goose-bumps both on users and itself.

This is not the first time Teyssier has explored human-machine communication through anthropomorphic means. Last year he showcased the robotic finger device MobiLimb, designed to overcome the “static, passive, motionless” limitations of mobile devices and add new capabilities to them. Again, the work looks something like a conceptual art project.

MobiLimb gives smartphones the finger

“In my research,” Teyssier explains, “I question the relationship between technology and humans through human-like devices.”

To artificially reproduce skin’s sensing capabilities is an ambitious goal, and fleshy smartphones seem as good a lab as any for the task — whether the tech’s ultimate applications are in communication, the uncanny valley, biotech or beyond.

The paper, “Skin-On Interfaces: A Bio-Driven Approach for Artificial Skin Design to Cover Interactive Devices,” is available here.