Pain flickers across people’s faces in inconsistent, contradictory ways. Charles Darwin, ever the meticulous observer, noticed this problem early: “The mouth may be closely compressed, or more commonly the lips are retracted,” he wrote in The Expression of the Emotions in Man and Animals. “The eyes stare wildly as in horrified astonishment, or the brows are heavily contracted.” And the experience of pain differs just as widely as its expression—tolerance is a matter of genetics and life experience. What’s agony for you may be merely uncomfortable for someone else.

Ambiguity has always made pain assessment an inexact science for health care providers, which in turn frustrates the sufferers themselves. A doctor’s assessment may not line up with the patient’s own sense of the problem; in some cases, patients are told there’s no apparent explanation for their pain at all. Many of these patients, hoping for a second opinion, are turning not to other doctors for answers but to technology.


Pain diaries and tracking apps are all over Apple’s App Store and Google Play, advertised to chronic pain patients as ways to identify trends in their symptoms. Other apps render pain as animations that change in intensity and saturation in place of a 1-to-10 scale, in the hope that a more visual metaphor makes pain easier to describe.

A word you’ll encounter often in this area—not only in these apps and services but in research investigating ways to apply technology to pain assessment and in pain science in general—is “objectivity.” It’s an inherently Silicon Valley notion: Take the subjectivity out of something by applying ostensibly impartial, data-driven technology. Inevitably, the buzzwords have followed, everything from facial recognition and machine learning to the blockchain. This isn’t just classic disruption, though. The call to bring objectivity to the experience of pain comes from the National Institutes of Health, in part as an effort to curb the overprescription of opioids. Some combination of data and machinery may, so the pronouncements of the tech world go, do what humans have been unable to do for millennia: accurately feel someone else’s pain.

At the moment, the best way to precisely gauge someone’s pain is, quite simply, to ask them about it. But tech can provide some assistance there, too. Janet Van Cleave, whose research at NYU’s Rory Meyers College of Nursing centers on improving cancer patient care, has developed an Electronic Patient Visit Assessment for patients with head and neck cancers. Essentially, the ePVA is a survey on an iPad—tap where it hurts and answer yes-or-no questions about your pain and quality of life. Doesn’t sound that impressive, but the results are. “In patients who are highly symptomatic, web-based measures can help improve survival,” she says. “It’s a powerful tool.”

The reasons why have to do with the physical ways pain is reported. Head and neck cancer patients have difficulty speaking and are frequently tired from treatment. Their doctors get more and better-quality information from them because lifting a single finger to a touch screen is easier than verbally answering questions or writing things down. It’s still a challenge for some, though. “It’s like going through hell,” Van Cleave says. “Their hands shake when they’re pressing on the screen, so we’ve made it extra sensitive.”

According to Van Cleave, patients’ preference for the iPad might extend beyond physical ease. She suspects some feel more comfortable telling a machine about their pain and symptoms than another person. This is one of the central arguments for telemedicine—that something about tech as intermediary increases comfort. On its own, it’s a good, sensible, testable idea. But—especially in more complicated or algorithmic applications—machines can be just as biased as the humans they’re designed to improve upon and replace.