The great promise of the smartwatch and its intelligent brethren is that they will unite everyday objects in a seamless web of connectivity. But there is a glaring hole in this plan: Our lives remain filled with countless dumb objects—door handles, lamps, pots and pans—that have no way of communicating with our smart objects. One solution is to make everything intelligent. A more discerning approach is to design objects that are clever enough to glean information from the dumb objects around them.

This is the idea behind EM-Sense, a technology Carnegie Mellon's Future Interfaces Group developed with Disney Research. By outfitting a modified smartwatch with EM-Sense, the team created a device that can tell the difference between the objects its wearer touches.

It sounds like magic, but the technology is straightforward. Most electromechanical objects—electric toothbrushes, your laptop's trackpad, power tools—emit electromagnetic signals when used. Things like ladders and door handles, which aren't electric but are conductive, can also take on an electromagnetic signature by picking up the EM radiation present in the surrounding environment. The human body is similarly conductive; touch an object that emits electromagnetic noise and that object's signal courses through your body. Now here's the crucial bit: Different objects emit slightly different levels of electromagnetic noise. The team embedded its smartwatch prototype with a modified radio chip that can sense these low-band electromagnetic signals. When strapped to your wrist, the watch uses a conducting electrode to pick up the signal passing through your body, and the radio chip converts it into digital data. By matching incoming signals against a library of electromagnetic signatures, the watch can identify what you're touching in real time.
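The matching step can be pictured as a simple classification problem. Here's a minimal sketch, assuming each signature is reduced to a small feature vector (say, magnitudes of a few low-frequency spectral bins) and matched by similarity; the object names, feature values, and similarity measure are all illustrative, not details of the actual EM-Sense pipeline.

```python
import math

# Hypothetical library of labeled EM signatures (object -> feature vector).
# Real signatures would come from training data recorded on the watch.
SIGNATURE_LIBRARY = {
    "electric_toothbrush": [0.9, 0.1, 0.05, 0.02],
    "laptop_trackpad":     [0.2, 0.8, 0.30, 0.10],
    "door_handle":         [0.1, 0.1, 0.70, 0.60],
}

def cosine_similarity(a, b):
    """Similarity of two feature vectors, ignoring overall signal strength."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def identify(signal):
    """Return the library object whose signature best matches the reading."""
    return max(SIGNATURE_LIBRARY,
               key=lambda name: cosine_similarity(signal, SIGNATURE_LIBRARY[name]))

# A noisy reading close to the trackpad signature is classified as such.
print(identify([0.25, 0.75, 0.28, 0.12]))  # -> laptop_trackpad
```

A nearest-match scheme like this is the simplest possible stand-in; a production system would use a trained classifier over richer spectral features.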

Gierad Laput, a lead researcher on the project, says this could lead to a new wave of context-based applications that are more accurate and nuanced than those using RFID or GPS. “A lot of context based apps are estimating what the user is doing but not knowing exactly what the user is doing,” he says. Though our devices already know plenty about us—our location, the time of day, what's on our calendar—there's still an element of guesswork any time an app makes a decision on our behalf. Unlike RFID or GPS, which help our devices infer activity by approximating the user's location, EM-Sense can detect exactly what a person is interacting with at any given moment. An app that relies on RFID might know you're in your kitchen, but an app that relies on EM-Sense could determine what you're doing there and act on it. That could be something as simple as a meeting reminder popping up when you touch the door to your office. Or suppose you open your fridge at 7 am, then pick up a pan and turn on the stove: your smartwatch could reasonably infer that you're making breakfast and that perhaps you'd like to listen to Morning Edition while you cook your eggs.
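The breakfast scenario amounts to matching a sequence of identified touches against known routines. A minimal sketch of that kind of rule, with entirely hypothetical object names and routines (none of this comes from EM-Sense itself):

```python
# Hypothetical routines: a recent sequence of touched objects -> an activity.
ROUTINES = {
    ("fridge", "pan", "stove"): "making breakfast",
    ("office_door",):           "arriving at the office",
}

def infer_activity(touched_objects):
    """Match the most recent touches against known routines, or return None."""
    touches = tuple(touched_objects)
    for pattern, activity in ROUTINES.items():
        if touches[-len(pattern):] == pattern:
            return activity
    return None

# The watch has identified four touches this morning, in order.
print(infer_activity(["front_door", "fridge", "pan", "stove"]))
# -> making breakfast  (an app could now queue up Morning Edition)
```

Knowing the exact objects, rather than an approximate location, is what lets a rule this simple be useful at all.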

Laput cites other examples, like unlocking a laptop with a single touch or using the technology to differentiate between two people using the same touchscreen. Ultimately, the technology could be built cheaply into smartwatches; then it would just be a matter of building out the library of electromagnetic signatures, Laput says. That matters because right now, he says, there's simply not enough data available for these devices to be truly smart. Something like EM-Sense, then, could bridge the connected and unconnected worlds, adding a layer of information that developers can draw on someday. “It just makes the data even smarter,” says Laput.