Moving Beyond the Keyboard and Mouse?

Computers are being embedded into everyday objects, and since we cannot connect a keyboard and mouse to every object around us, we need other interfaces. The current way of interacting with smart objects (a.k.a. the Internet of Things) is voice recognition, which has obvious limitations, such as use in public spaces. Let’s take a look at the methods that researchers and companies are working on at the moment.

Touch

Advances in multi-touch technology and multi-touch gestures (like pinching) have made the touch screen the favorite interface. Researchers and startups are working on a richer touch experience: understanding how firmly you are touching, which part of your finger is touching, and whose finger is touching.

iPhone’s 3D Touch detects force. Source: Giphy.

Qeexo can detect which part of your finger is touching the screen.

One of my favorite methods is Swept Frequency Capacitive Sensing (SFCS) developed by Professor Chris Harrison at Carnegie Mellon University.

Voice

DARPA funded research in this area as far back as the 1970s, but until recently voice recognition was not very useful. Thanks to deep learning, we have now become pretty good at it. The biggest challenge with voice at this point is not transcription, but rather understanding meaning based on context.

Hound does a great job at contextual speech recognition

Eye

In eye tracking, we measure either the gaze (where one is looking) or the motion of the eye relative to the head. With the falling cost of cameras and sensors, as well as the increasing popularity of virtual reality eyewear, eye tracking is becoming useful as an interface.
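To give a feel for how gaze estimation works in practice, here is a toy sketch: a tracker is typically calibrated by asking the user to look at known on-screen targets, then fitting a mapping from pupil position (in the eye camera) to screen coordinates. The function below and its sample numbers are my own illustration, not any vendor’s actual algorithm, and real systems use richer (e.g. polynomial, head-pose-aware) models.

```python
def fit_axis(pupil_vals, screen_vals):
    """Least-squares fit of screen = a * pupil + b along one axis,
    from calibration samples (user looks at known on-screen targets)."""
    n = len(pupil_vals)
    mean_p = sum(pupil_vals) / n
    mean_s = sum(screen_vals) / n
    var = sum((p - mean_p) ** 2 for p in pupil_vals)
    cov = sum((p - mean_p) * (s - mean_s)
              for p, s in zip(pupil_vals, screen_vals))
    a = cov / var
    return a, mean_s - a * mean_p

# Calibration: pupil x-coordinates (pixels in the eye camera) vs. the
# known screen x-coordinates the user was asked to look at.
a, b = fit_axis([12.0, 30.0, 48.0], [0.0, 960.0, 1920.0])
gaze_x = a * 30.0 + b  # map a new pupil reading to a screen position
```

The same fit is done independently for the vertical axis, giving a full 2D gaze point.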

Eyefluence, which was acquired by Google, allowed you to navigate virtual reality by tracking your eyes.

Tobii, which had an IPO in 2015, works with consumer electronics manufacturers to embed their eye tracking technology. Image source: Flickr.

Gesture

Gesture control is the human-computer interface closest to my heart. I have personally done scientific research on various gesture control methods. Some of the technologies used for gesture detection are:

Inertial Measurement Unit (IMU)

Data from the accelerometer, gyroscope, and compass (all or some of them) are used to detect gestures. The need for recalibration and lower accuracy are among the drawbacks of this method.

New research by CMU’s Future Interfaces Group shows spectacular classification results using high-sample-rate accelerometer data.
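At its simplest, IMU-based gesture detection looks at how much the measured acceleration deviates from gravity over a window of samples. The sketch below is a deliberately minimal illustration of that idea (a threshold-based "shake" detector I made up for this post), not CMU’s method, which uses far more sophisticated classification.

```python
import math

GRAVITY = 9.81  # m/s^2; an accelerometer at rest reads this magnitude

def detect_shake(samples, threshold=3.0):
    """Flag a 'shake' gesture in a window of three-axis accelerometer
    samples, given as (ax, ay, az) tuples in m/s^2. A sample counts as
    energetic when its magnitude deviates from gravity by more than
    `threshold`; a shake is declared when most of the window is energetic."""
    energetic = 0
    for ax, ay, az in samples:
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if abs(magnitude - GRAVITY) > threshold:
            energetic += 1
    return energetic > len(samples) / 2
```

A real recognizer would also use the gyroscope and a trained classifier, but the thresholding above captures the core signal-processing step.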

Infrared+Camera (Depth Sensor)

Most of the impressive gesture detection systems we have seen combine a high-quality camera with an infrared illuminator and an infrared camera. The illuminator projects thousands of small dots onto the scene, and the pattern is distorted differently depending on how far away each object is (there are other approaches, such as time-of-flight (ToF), that I will not go into). Kinect, Intel’s RealSense, Leap Motion, and Google’s Tango all use some variation of this technology.

Leap Motion is a consumer device for gesture control.

Apple has taken this one step further by embedding all of this in the iPhone X’s front camera for Face ID.
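The depth math behind both families of sensors mentioned above is short enough to write down. For time-of-flight, depth is half the round-trip distance of light; for a dot projector, depth comes from triangulation of how far each dot shifts between projector and camera. The formulas are standard, but the function names and sample parameters below are my own illustration.

```python
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def tof_depth(round_trip_seconds):
    """Depth from a time-of-flight sensor: light travels to the object
    and back, so depth is half the round-trip distance."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

def structured_light_depth(focal_px, baseline_m, disparity_px):
    """Triangulated depth for a structured-light (dot projector) sensor:
    a projected dot shifts by `disparity_px` between projector and
    camera views; depth is inversely proportional to that shift."""
    return focal_px * baseline_m / disparity_px
```

Note the inverse relationship in the second formula: distant objects produce tiny dot shifts, which is why depth precision degrades with range.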

Electromagnetic Field

In this method, the user’s finger or body acts as a conductive object that distorts an electromagnetic field produced by transmitter and receiver antennas embedded in an object.

AuraSense uses one transmitter and four receiver antennas in a smartwatch for gesture control.
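To make the idea concrete, here is a toy illustration (not AuraSense’s actual algorithm): assume four receivers at known positions around a watch face, and that a nearby finger attenuates the signal most at the closest antenna. A weighted centroid of the antenna positions then gives a crude finger location.

```python
def estimate_finger_position(amplitudes, positions):
    """Toy finger-position estimate from four receiver amplitudes.
    A conductive finger distorts the field most near the closest
    antenna, so weight each antenna's (x, y) position by how much its
    signal drops below the strongest (least-disturbed) receiver."""
    baseline = max(amplitudes)  # crude stand-in for the undistorted level
    weights = [baseline - a for a in amplitudes]
    total = sum(weights)
    if total == 0:
        return None  # no distortion detected, no finger nearby
    x = sum(w * p[0] for w, p in zip(weights, positions)) / total
    y = sum(w * p[1] for w, p in zip(weights, positions)) / total
    return (x, y)
```

A real system would calibrate per-antenna baselines and feed the raw amplitudes to a trained classifier rather than a centroid.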

Radars

Radars have long been used to track objects, from airplanes and ships to cars. Google’s Advanced Technology and Projects (ATAP) group has done a remarkable job of shrinking radar into an 8 mm by 10 mm microchip. This general-purpose gesture control chipset can be embedded into smartwatches, TVs, and other objects for gesture tracking.
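Gesture radars sense motion largely through the Doppler shift of the reflected signal: a hand moving toward the sensor raises the echo frequency slightly, and moving away lowers it. The sketch below applies the standard two-way Doppler formula; the 60 GHz carrier is an assumption typical of millimeter-wave gesture chips, not a figure from this post.

```python
SPEED_OF_LIGHT = 3.0e8  # m/s

def doppler_shift_hz(radial_velocity_mps, carrier_hz):
    """Two-way Doppler shift of a radar echo: a target moving toward
    the sensor at velocity v shifts the reflected carrier by 2*v*f/c."""
    return 2.0 * radial_velocity_mps * carrier_hz / SPEED_OF_LIGHT

# A hand moving toward a 60 GHz radar at 1 m/s shifts the echo by 400 Hz.
shift = doppler_shift_hz(1.0, 60e9)
```

Because different fingers move at slightly different velocities, the radar sees a spread of Doppler shifts, and that spectral signature is what a classifier turns into distinct gestures.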