You are free to share this article under the Attribution 4.0 International license.

A new way to control text entry and other mobile apps uses acoustic chirps that travel from a thumb ring to a wristband, such as a smartwatch.

The system can recognize 22 different micro finger gestures that could be programmed to various commands—including a T9 keyboard interface, a set of numbers, or application commands like playing or stopping music.
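As a toy illustration of how recognized gestures might be routed to such commands, here is a minimal dispatch table in Python. All gesture labels and command names below are hypothetical stand-ins, not the system's actual vocabulary:

```python
# Hypothetical mapping from recognized micro gestures to commands.
# The real FingerPing command set is not published here; these names
# are invented for illustration only.
GESTURE_COMMANDS = {
    "tap_index_tip": "t9_key_2",          # T9 keyboard entry
    "pose_one": "digit_1",                # number input
    "pinch_thumb_pinky": "music_toggle",  # play/stop music
}

def dispatch(gesture: str) -> str:
    """Map a recognized micro gesture to its programmed command."""
    return GESTURE_COMMANDS.get(gesture, "unrecognized")

print(dispatch("pinch_thumb_pinky"))  # music_toggle
```

The point of such a table is that the recognizer only needs to output one of the 22 gesture labels; what each label does is freely reprogrammable per application.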

A video demonstration of the technology shows the system recognizing, with high accuracy, hand poses that use the 12 bones of the fingers and the digits 1 through 10 in American Sign Language (ASL).

“Some interaction is not socially appropriate,” says Cheng Zhang, the PhD student in the Georgia Tech School of Interactive Computing who led the effort. “A wearable is always on you, so you should have the ability to interact through that wearable at any time in an appropriate and discreet fashion. When we’re talking, I can still make some quick reply that doesn’t interrupt our interaction.”

The system is also a preliminary step toward recognizing ASL for translation in the future, Zhang says. Other techniques use cameras to recognize sign language, but cameras can be obtrusive and are unlikely to be carried everywhere.

“If my wearable can translate it for me, that’s the long-term goal,” Zhang says.

Unlike other technology that requires a glove or a more obtrusive wearable, this technique, called "FingerPing," needs only a thumb ring and a watch. The ring produces acoustic chirps that travel through the hand, and receivers on the watch pick them up. Sound waves travel through structures, including the hand, in specific patterns that hand poses can alter. By making those poses, the wearer can trigger up to 22 pre-programmed commands.

The gestures are small and non-invasive, as simple as tapping the tip of a finger or posing your hand in classic “1,” “2,” or “3” gestures.

“The receiver recognizes these tiny differences,” Zhang says. “The injected sound from the thumb will travel at different paths inside the body with different hand postures. For instance, when your hand is open there is only one direct path from the thumb to the wrist. Any time you do a gesture where you close a loop, the sound will take a different path and that will form a unique signature.”
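The path-dependent signatures Zhang describes can be caricatured as a lookup over stored acoustic fingerprints: each pose leaves a characteristic pattern at the receivers, and a new measurement is matched to the nearest known pattern. The sketch below is only a nearest-neighbor toy under that assumption; the feature vectors and pose names are invented, and the real system learns its signatures from received chirps:

```python
import math

# Toy pose-classification sketch. Each "template" stands in for the
# acoustic signature a hand pose leaves at the watch's receivers.
# All numbers and pose names are invented for illustration.
TEMPLATES = {
    "open_hand":   [0.9, 0.1, 0.2],
    "thumb_index": [0.3, 0.8, 0.4],
    "thumb_pinky": [0.2, 0.3, 0.9],
}

def classify(signature):
    """Return the pose whose stored template is nearest (Euclidean distance)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda pose: dist(TEMPLATES[pose], signature))

print(classify([0.25, 0.35, 0.85]))  # thumb_pinky
```

Closing a loop with the thumb, as Zhang describes, would shift the measured vector toward a different template, which is why each gesture can form a distinguishable signature.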

Zhang says that the research is a proof of concept for a technique that could expand and improve in the future.

A paper on the research was part of the 2018 ACM Conference on Human Factors in Computing Systems (CHI).

Source: Georgia Institute of Technology