Many in the field see the tensions and dilemmas in robot care, yet believe the benefits can outweigh the risks. The technology is “intended to help older adults carry out their daily lives,” says Richard Pak, a Clemson University scientist who studies the intersection of human psychology and technology design, including robots. “If the cost is sort of tricking people in a sense, I think, without knowing what the future holds, that might be a worthy trade-off.” Still, he wonders, “Is this the right thing to do?”

We know little about robot care’s long-term impact or its possible indirect effects. That is why it is crucial, at this early juncture, to heed both the field’s success stories and the public’s apprehensions. Nearly 60 percent of Americans polled in 2017 said they would not want to use robot care for themselves or a family member, and 64 percent predicted such care would increase the isolation of older adults. Sixty percent of people in European Union countries favor a ban on robot care for children, older people, and those with disabilities.

Such concerns, if respected and investigated, offer clues to how robots can be tailored to the needs of the people they serve. Only recently have older people been given a voice in the design of robots built to care for them. Studies show many are open to having one, even to befriending it, and some hope it might tell a joke or two. (“Could we be friends?” one focus group participant cooed to a robotic seal. “Good, good, I love your eyes.”)

But research suggests that many seniors, including trial users, draw a line at investing too much in the charade of robot companionship, fearing manipulation, surveillance, and, most of all, a loss of human care. Some worry that robot care would carry a stigma: the risk of being seen as “not worth human company,” said one participant in a study of potential users with mild cognitive impairments.

“If the only goal is to build really cool stuff that can increase speed and profit and efficiency, that won’t prioritize human flourishing,” says John C. Havens, executive director of a pioneering global initiative on ethical AI guidelines by the Institute of Electrical and Electronics Engineers.

A main principle of these and other leading guidelines is “transparency,” the idea that humans should know when they are dealing with an algorithm or robot and be able to understand its limits and capabilities. (Call it the anti-Turing test.) One recommendation to industry is that care robots have a “why-did-you-do-that” button so users can demand an explanation of the robot’s actions, from promoting a product to calling the doctor.

Social robots should also carry a notice of potential side effects, the guidelines suggest, “such as interfering with the relationship dynamics between human partners,” a warning that could prompt caregivers to protect those most cognitively vulnerable to a robot’s charms. Such “soft-law” guidelines can help users, caregivers, and designers alike better understand what they are dealing with and why, even as we continue to debate just how social, how humanlike, and how transparent we want or need a care robot to be.