This piece is adapted from an essay that originally ran in the New America Weekly.

What abilities will set humans apart from machines?

It’s been a question at the center of decades of science fiction, and one that’s taken on increasing real-world urgency as we try to anticipate how the advancing artificial intelligence revolution will transform the way we work and live.

An emerging consensus suggests that two characteristics make humans distinct from machines: care and empathy. Machines may trounce humans at repetitive, predictable, and production-heavy jobs. But technology still lags behind on tasks that require context, nuance, constant adaptation, and emotional intelligence.

Because of this, research organizations—including the University of Oxford, McKinsey Global Institute, PwC, and the Shift Commission—predict that even though millions of jobs will likely be partially or fully automated in the coming decades, care-focused professions will, at least in the short term, largely remain the domain of human workers. (Disclosure: The Shift Commission is co-chaired by New America, a partner with Slate and Arizona State University in Future Tense.)

Think of early childhood educators, psychiatrists, social workers, or nurses—those often poorly paid occupations that undergird the rest of the economy. Jobs like these, which require a human touch, the thinking goes, will be among the last to be turned over to machines. If ever.

Yet the idea that humans should—or will—dominate empathy-centric professions isn’t entirely realistic. It seems based, at least in part, on a reassuring story of human superiority, one people cling to as fear of A.I. domination spreads. But there’s reason not to hold on too tightly to the idea that caregiving jobs will always be human jobs. Figuring out which positions are better for robots and which are better for humans will help us better understand what care is, what we’re willing to cede to machines, and why we should value this kind of work more.

Some care jobs could indeed be better suited to robots—like those that can be monotonous or hazardous to humans. Nursing assistants, for instance, are 3½ times more likely than other U.S. workers to be injured on the job because of the biological, chemical, and physical hazards they face. It can be exhausting for elder care workers or family members to answer the same questions from dementia patients over and over again. What’s more, humans’ tendency to change jobs, especially in low-paid careers like home care assistance, can cause problems in continuity of care. If a patient is constantly getting new caregivers, no one may notice significant changes, such as how the patient’s condition has shifted over the course of a year. Simply stepping away for a few minutes, too, can create issues: it’s enough time, for example, to miss whether a patient has taken his medication.

The Japanese government has recognized this. In anticipation of care shortages for its rapidly aging population, it’s been pouring money into producing caretaker automatons. In the past few years, Japanese developers have created Robear, an ursinelike machine that can carry patients from their wheelchairs to their beds; HAL, a cheekily named bionic suit designed to assist wearers with challenging motor tasks like getting out of bed or walking; and Paro, a painfully adorable companion bot in the form of a baby harp seal (made famous in some circles by an episode of Aziz Ansari’s Master of None). Japan’s car companies have also contributed to the boom. Honda, for example, recently released ASIMO, a humanlike machine that can push a cart, carry a tray, and turn on lights. Not to be outdone, Toyota came out with an entire line of helper machines, including Human Support Robot, Walk Assist Robot, Care Assist Robot, and Robina and Humanoid, which both help with housework. To round out the family, it also created the eyebrow-raising Kirobo, a baby robot designed to keep childless women, solo drivers, and other supposedly lonely people company.

Plenty of people outside of Tokyo are also investing time and money in similar caregiving technologies. Across the U.S., companies are developing daily activity trackers, automatic pill dispensers, A.I.-assisted phone calls, and home sensors to track unusual activity (has a fridge not been opened in a few days?). There are even startups devoted to assistive robots, such as Hoaloha Robotics’ companion-monitor-helper bot “Robby.”

Still, none of these machines reflect the complex and multilayered interactions involved in human caring. As Leslie Jamison writes in her book of essays The Empathy Exams, “Empathy means realizing no trauma has discrete edges.” She notes that empathy comes from the Greek word empatheia—em (into) and pathos (feeling)—and suggests that the act requires “entering another person’s pain as you’d enter another country, through immigration and customs, border crossing by way of query: What grows where you are? What are the laws?” It’s an adaptive and nuanced behavior that can’t be reduced—at least not yet—to algorithms.

Yet money continues to flow into developing machines that can perform niche caregiving tasks, in part because of fears there may not be enough humans to do them. According to models done by MIT professor Paul Osterman, by 2030, the U.S. will be short approximately 151,000 paid care workers and 3.8 million unpaid family caregivers, mirroring other predictions of major shortfalls.

Osterman says that because technology alone won’t be able to fill this gap in the foreseeable future, we’re going to need to start making human caregiving jobs more attractive. One way to start, he suggests, is by expanding the skills necessary for certain jobs. Take, for instance, home care aides. According to Osterman, aides receive so little formal instruction that they often aren’t permitted to do some basic tasks—even administering eye drops, in some states. Training them to fill other roles, such as basic health and nutrition coaching, could both save families money on expensive hospital visits and help the home care aides earn more, he explained in a PBS op-ed.

The second point, better pay, may be the most significant. The care economy workforce, composed mostly of women of color, often earns minimum wage or less, and those paltry wages frequently trap workers in poverty. They also reinforce a collective idea about just how little this work is valued. Part of the reason we devalue care work has to do with historical biases. Traditionally, care jobs were considered the domain of women: motherhood, elder care, housework, and, later, professions like teaching and nursing. Care was seen as innately feminine, something women could do naturally with little effort or skill.

Today, women still do disproportionately more care work, both paid and unpaid, than men. For example, economists estimate that the average woman spends the equivalent of 23 years more over her lifetime than the average man on uncompensated caregiving tasks like cooking, cleaning, gathering water, and caring for children, the ill, and the elderly. It’s essentially an enormous subsidy—one that, because it’s uncompensated, becomes invisible when nations calculate their economic productivity measures. That’s wild when you consider that the McKinsey Global Institute found that if unpaid caregiving work were compensated at minimum wage, it would add about $10 trillion to global economic output. That’s about 13 percent of global GDP and more than the economic output of Japan, the U.K., and India combined.

Though we may be socialized to assume that the higher the pay scale, the more important your job must be, don’t be fooled. Without caregiving jobs, our economy would grind to a halt.

Advances in technology may make us further devalue this work if we assume that all caregiving tasks can and should be automated. But instead, these advances in tech should help us revalue and better understand the many facets of care work. Automation first requires deconstructing a job and its composite behaviors, a process that can help us decide which parts of this work to cede to algorithms and bots, and which should remain in human hands.

But we can’t make all of these decisions in a lab, especially if we want people to actually use these new technologies. Part of the process has to include evaluating how people react to and interact with automated care. As a 2012 Future Tense article points out, part of the reason care bots aren’t already household fixtures is that their would-be clients, who are often elderly, may be less than accepting of these automated interlopers. They may, for example, find it challenging to communicate with the devices, particularly when it comes to nonverbal cues. Some research suggests that this gap may be generational. Sherry Turkle, an MIT professor who studies the relationship between humans and machines, has found differences since the 1980s in how accepting, and even desirous, people are of forming relationships with robots. For instance, when she asked teenage boys, once in 1983 and again in 2008, whether they would choose their father or a robot to offer dating advice, she found the answers unequivocal in both cases. In the 1980s, the clear answer was dad. In 2008, the boy preferred a robot’s advice because, unlike his father, a machine could access a trove of relationship pattern data.

Even if you’re willing to take love advice from a robot, though, you may not want a machine to care for you in your most vulnerable moments. Many—maybe even most—models still lack the capacity to soothe their human users. Sure, some of today’s bots purportedly have the ability to express and interpret emotions, or to provide “emotional value.” But Albert “Skip” Rizzo, the director of the Medical Virtual Reality group at the University of Southern California’s Institute for Creative Technologies, notes that they’re not at a point where they can meaningfully mimic human intelligence.

He doesn’t think we can rule that out in future models, though. Rizzo sees the processes behind certain kinds of emotional intelligence and empathy signaling less as magic and more as an advanced data-analysis system. People with high emotional intelligence take in lots of data points from other people—eye contact; the tone, inflection, and cadence of their voices; breathing patterns—and respond with compassion. It’s a complex process, he says, and he still thinks there’s something unique about a human touch. But he does see potential for advanced A.I. systems to assist or temporarily step in for an overworked human caregiver.

“We want to fill gaps where there isn’t a live provider, or where there is but they don’t have the time or patience to be able to do it perfectly all the time—to take a load off them,” Rizzo said. “A.I. is here to help people be better carers.”

That’s a common refrain among developers and proponents of automated care, too. Many frame future technologies as enhancing, rather than displacing, human capabilities. Especially with the approaching silver tsunami—by 2030 about one-fifth of U.S. residents will be 65 or older, according to a U.S. Census Bureau estimate—we’re going to need all the help we can get. These tools could help people provide higher-quality care, fill shortages, and empower some populations, like the elderly, to be independent for longer.

University of Virginia professor and sociologist Allison Pugh thinks that as other jobs in the labor market disappear, there’s a possibility that society may finally begin to better value human care work. But, Pugh says, technological advancement may also cause us to further devalue caregiving by relegating too much of our emotional labor to algorithms and machines. She says that she’s concerned about a future in which the affluent can buy highly personalized and technologically enhanced human care while others can only afford a degrading version of automated-only care. Certainly some automation can be beneficial, she notes. But, “If you talk to actual care providers, most of them say the core processes at the base of care require face-to-face human interaction—it involves seeing the other person, bearing witness to who they are and what they’re going through, and responding appropriately,” she said.

There’s a version of the future in which, forced to rethink the nature of work after droves of jobs get outsourced to machines, we once again realize how much caregiving matters and figure out an economic model to properly pay all those who do it. In this version of the future, Pugh predicts, we might actually see care for the critical human work that is “augmented by machines, but not replaced.”

It’s a role that’s just as intrinsically valuable today as it was thousands of years ago, when caring and bonding with each other ensured that our species survived. Fittingly, even as technology advances, the last human job may make us go back to our first.

This article is part of Future Tense, a collaboration among Arizona State University, New America, and Slate. Future Tense explores the ways emerging technologies affect society, policy, and culture. To read more, follow us on Twitter and sign up for our weekly newsletter.