Robots are now employed not only for hazardous tasks, such as detecting and disarming mines; they are also finding use as household helpers and nursing assistants. As increasing numbers of machines equipped with the latest artificial intelligence take on a growing range of specialized and everyday tasks, the question of how people perceive them and behave towards them becomes ever more pressing.

A team led by Sari Nijssen of Radboud University in Nijmegen in the Netherlands and Markus Paulus, Professor of Developmental Psychology at Ludwig-Maximilians-Universitaet (LMU) in Munich, has carried out a study to determine the degree to which people show concern for robots and behave towards them in accordance with moral principles. The findings appear in the journal Social Cognition.

According to Sari Nijssen, the study set out to answer the following question: "Under what circumstances and to what extent would adults be willing to sacrifice robots to save human lives?" The participants were faced with a hypothetical moral dilemma: Would they be prepared to put a single individual at risk in order to save a group of injured persons? In the scenarios presented, the intended sacrificial victim was either a human, a humanoid robot with an anthropomorphic physiognomy that had been humanized to varying degrees, or a robot that was clearly recognizable as a machine.

The study revealed that the more the robot was humanized, the less likely participants were to sacrifice it. Scenarios that included priming stories, in which the robot was depicted as a compassionate being or as a creature with its own perceptions, experiences and thoughts, were more likely to deter participants from sacrificing it in the interests of anonymous humans. Indeed, on being informed of the emotional qualities allegedly exhibited by the robot, many of the experimental subjects expressed a readiness to sacrifice the injured humans to spare the robot from harm.

"The more the robot was depicted as human -- and in particular the more feelings were attributed to the machine -- the less our experimental subjects were inclined to sacrifice it," says Paulus. "This result indicates that our study group attributed a certain moral status to the robot. One possible implication of this finding is that attempts to humanize robots should not go too far. Such efforts could come into conflict with their intended function -- to be of help to us."