Building on the notion that people respond to media as if they were real, switching off a robot that exhibits lifelike behavior presents an interesting situation. In an experimental lab study with a 2x2 between-subjects design (N = 85), people were given the choice to switch off a robot with which they had just interacted. The style of the interaction was either social (mimicking human behavior) or functional (displaying machinelike behavior). Additionally, the robot either voiced an objection to being switched off or remained silent. Results show that participants were more likely to leave the robot switched on when it objected. After the functional interaction, people evaluated the robot as less likeable, which in turn led to a reduced stress experience after the switching-off situation. Furthermore, individuals hesitated longest when they had experienced a functional interaction in combination with an objecting robot. This unexpected result might be explained by the fact that the impression people had formed based on the task-focused behavior of the robot conflicted with the emotional nature of the objection.

Introduction

The list of different types of robots that could be used in our daily lives is as long as the list of their possible areas of application. As interest in robots grows, robot sales are also steadily increasing. Personal service robots, which directly assist humans in domestic or institutional settings, have the highest expected growth rate [1,2]. Because of their field of application, personal service robots need to behave socially and interact with humans, which is why they are also defined as social robots. Possible applications for social robots include elderly care [3,4], support for autistic people [5,6], and the service sector, for example as receptionists [7] or as museum tour guides [8].

According to the media equation theory [9], people apply social norms that they usually reserve for interactions with humans when they interact with various media such as computers and robots. Since a robot has more visual and communicative similarities with a human than other electronic devices do, people react especially socially to robots [10–13]. However, besides many profound differences, one major discrepancy between human-human and human-robot interaction is that human interaction partners are not switched off when the interaction is over. Based on media equation assumptions, people are inclined to perceive the robot as a living social entity. Since it is not common to switch off a social interaction partner, people should be reluctant to switch off the robot they just interacted with, especially when it displays social skills and autonomously objects to being switched off. According to Bartneck, van der Hoek, Mubin, and Al Mahmud [14], the perceived animacy of the robot plays a central role: “If humans consider a robot to be alive then they are likely to be hesitant to switch off the robot” (p. 218). In their study, participants hesitated three times longer to switch off a robot when it had made agreeable or intelligent suggestions during a preceding cooperative game. These results indicate that people treat a robot differently depending on how the robot behaves. However, it was not measured to what extent the robot’s social skills and its objection to being switched off influence participants’ reactions. Since the robot’s objection conveys the impression of the robot as an autonomous entity, it is of special interest to examine what effect this has on the robot’s interactants in a situation which is common with electronic devices but hardly comparable to situations with other humans.

To extend previous research as well as media equation findings, the aim of this study is to examine whether a robot that behaves in an empathetic, rather humanlike way is perceived as more alive than a machinelike behaving robot, and whether this perception influences people’s reluctance to switch off the robot. In both conditions the robot is a social agent, as it uses cues from human-human interaction such as speech and gestures. Yet in one condition it focuses more on the social aspects of interpersonal relations, while in the other it exclusively focuses on performing the dedicated task without paying any attention to these social aspects. In the following, the first is referred to as the social interaction and the latter as the functional interaction. Moreover, the influence of an objection to being switched off voiced by the robot is analyzed. The robot’s objection is assumed to be evaluated as autonomous behavior referring to the robot’s own experiences and state, which is usually only ascribed to living beings [15]. Consequently, people should be more disinclined to switch off an objecting robot, because such a robot should be perceived as being alive and in possession of its own feelings and thoughts, and it should feel morally reprehensible to act against someone’s will. In addition, people’s personality should influence their perception and behavior. Technical affinity should lead to a positive attitude towards robots, which should result in reluctance to switch off the robot after the social interaction and after it objects. Negative attitudes towards robots should have the opposite effect on the switching-off hesitation time.

In sum, the aim of the current study is to examine to what extent the media equation theory applies to a situation which is common with electronic devices but does not occur during interactions with humans. Moreover, the goal is to investigate whether a robot’s social skills and its protest, as a sign of its own will, enhance the application of social norms, which will deliver further insights regarding the media equation theory.

Media equation theory

When people interact with different media, they often behave as if they were interacting with another person and mindlessly apply a wide range of social rules. According to Reeves and Nass [9], “individuals’ interactions with computers, television, and new media are fundamentally social and natural, just like interactions in real life” (p. 5). This phenomenon is described as the media equation theory, which stands for “media equal real life” [9] (p. 5). The presence of a few fundamental social cues, such as interactivity, language, and filling a traditionally human role, is sufficient to elicit automatic and unconscious social reactions [16]. Due to their social nature, people are more likely to make the mistake of treating something non-human as human than the reverse. Contextual cues trigger various social scripts, expectations, and labels. In this way, attention is drawn to certain information, for example the interactivity and communicability of the computer, and simultaneously withdrawn from other information, for example that a computer is not a social living being and cannot have any feelings or thoughts of its own [16]. According to Reeves and Nass [9], the reason why we respond socially and naturally to media is that for thousands of years humans lived in a world where they were the only ones exhibiting rich social behavior. Thus, our brain learned to react to social cues in a certain way and is not used to differentiating between real and fake cues. The Computers as Social Actors research group (CASA group; [16]) has conducted a series of experiments and found many similarities between real and artificial life. For instance, studies showed that people apply gender-based stereotypes [17] or the social rule of polite direct feedback when interacting with computers [18]. Moreover, the media equation phenomenon has been shown to apply to robots as well [19–21].
In an experiment by Lee, Peng, Jin, and Yan [22], participants recognized a robot’s personality based on its verbal and non-verbal behaviors and enjoyed interacting more with the robot whose personality was similar to their own. Eyssel and Hegel [23] further showed that gender stereotypes are also applied to robots. In line with these findings, Krämer, von der Pütten, and Eimler [24] concluded that “now and in future there will be more similarities between human-human and human-machine interactions than differences” (p. 234). The question arises how people respond to a situation with a robot that they do not know from interactions with other humans. Switching off one’s interaction partner is a completely new social situation because it is not possible with humans, and the only equivalents that come to mind are killing someone or putting someone to sleep. Since most people have never interacted with a humanoid robot before, let alone switched one off, they are confronted with an unusual social situation which is hard to compare to anything familiar. On the one hand, reluctance and hesitation to switch off a robot would comply with the media equation theory. On the other hand, switching off an electronic device is quite common. Thus, the aim of the current study is to examine the application of the media equation theory to a situation which does not occur in human-human interaction. Additionally, to investigate the influence of the robot’s perceived social skills and its personal autonomy, those qualities are enhanced by means of a social versus a functional interaction and an emotional objection to being switched off expressed by the robot.

Negative treatment of robots

People tend to treat electronic devices similarly to how they would treat a fellow human being [9], and thus mistreating a robot should be considered reprehensible [25]. Whether this is the case has been analyzed in various studies addressing the effects of negative treatment of robots. In a field trial by Rehm and Krogsager [26], the robot Nao was placed in a semi-public place, and an analysis of the interactions with casual users revealed a mix of behaviors, including rude and impolite behavior. In line with this, further experiments showed similar abusive or inappropriate behavior towards robots or virtual agents that were publicly available [27–30]. However, people also displayed curiosity, politeness, and concern towards the robot. To further examine abusive behavior towards robots, Bartneck, Rosalia, Menges, and Deckers [31] reproduced one of Milgram’s experiments using a robot in the role of the student. In the original experiments [32], participants were asked to teach a student by administering increasingly intense electric shocks whenever the student made a mistake. The student did not actually receive shocks, but acted as if he was in pain and eventually begged the participant to stop the experiment. If the participant wanted to stop, the experimenter would urge the participant to continue. All participants followed the experimenter’s instructions and administered the maximum voltage (450 volts) to the robot, while in the comparable experiment by Milgram only 40% administered the deadly electric shock to the student. However, the participants showed compassion towards the robot and general discomfort during the experiment. In a follow-up study by Bartneck, van der Hoek et al. [14], participants were asked to switch the robot off.
Results showed that they hesitated three times longer when the robot had made agreeable and intelligent suggestions during a preceding cooperative game, but the influence of a social interaction style and of an objection voiced by the robot was not examined. The authors argued that the hesitation is related to the perceived animacy of the robot (“If a robot would not be perceived as being alive then switching it off would not matter”; p. 221). Going one step further, in a different study all participants followed the instruction to destroy a small robot with a hammer [33]. However, the robot used there was a Microbug, so it is questionable whether social norms were being applied. Also, qualitative video analysis showed that most participants giggled or laughed while they were hitting the robot, which could be a release of tension and is similar to behavior observed in the Milgram experiments. Likewise, in a study by Rosenthal-von der Pütten, Krämer, Hoffmann, Sobieraj, and Eimler [34], participants displayed increased physiological arousal while watching a video of a dinosaur robot (Pleo) being tortured and expressed empathetic concern for it. In an fMRI study, watching a human or Pleo being mistreated resulted in similar neural activation patterns in classic limbic structures, which suggests that similar emotional reactions were elicited [35,36]. Experiences with a different robot (Sparky) show that people respond with compassion when the robot displays sadness, nervousness, or fear [37]. In conclusion, these findings indicate that people appear to have fewer scruples about mistreating a robot than a human, at least under the strict supervision of a persistent authoritarian instructor. The interesting part is that people still react unconsciously to such situations on several levels as if a living being were being mistreated.

Perceived animacy as influencing factor

A robot’s perceived animacy corresponds to the extent to which the robot is perceived as a life-like being. As Bartneck, Kanda et al. [10] stated: “Being alive is one of the major criteria that distinguish humans from machines, but since humanoids exhibit life-like behavior it is not apparent how humans perceive them” (p. 300). For the perception of animacy, the robot’s behavior is more important than its embodiment [10]. Natural physical movements of a robot are assumed to enhance people’s perception of life-likeness [12], as are intelligent behavior [10], communication skills, and social abilities [38,39]. There are several other characteristics that influence the perceived animacy of a robot to varying degrees (e.g. agreeableness: [14]; human-like appearance: [40]; personal names, stories, and experience: [41]; volition: [42]). However, not many cues are necessary to make us behave around robots as if they were alive. The question arises whether a functional or a social interaction with a robot influences people’s decision to switch off the robot. A social interaction should provoke the perception of the robot as humanlike and alive, while the functional interaction should make people perceive the robot as machinelike and emotionless. To create a social interaction, different insights from classic social science literature are considered, for example self-disclosure [43–45] and the use of humor [46,47], which were also found to have an influence when interacting with technology (self-disclosure: [19,48,49]; humor: [50]). Previous findings suggest that a social interaction with a robot will enhance the robot’s human-likeness and increase its acceptance [4]. Consequently, participants should have more inhibitions about switching off a robot after a social interaction compared to a functional interaction.
In particular, the social interaction should enhance the robot’s likeability, which in turn should influence the switching-off hesitation and participants’ perceived stress. Thus, the following is hypothesized:

H1.1: Individuals more often choose to leave the robot switched on when the preceding interaction is social rather than functional.

H1.2: Individuals take more time to switch off the robot when the preceding interaction is social rather than functional.

H2.1: A social interaction will elicit higher likeability than a functional interaction, which in turn will result in more hesitation time in the switching-off situation.

H2.2: A social interaction will elicit higher likeability than a functional interaction, which in turn will result in more stress after the switching-off situation.

Objection as a sign of autonomy

Free will is the “capacity of rational agents to choose a course of action from among various alternatives” and is connected to a person’s autonomy [51] (para. 1). The term autonomy is derived from auto (= self) and nomos (= law), which can be translated as self-rule or self-government [52]. From an objective point of view, electronic devices are not self-governed. Instead, they are told what to do by their users or programmers, and there is no autonomous will comparable to the will of a human. However, based on the media equation theory [9], people may treat these devices as if they had a free will when the devices display certain behaviors characteristic of autonomous living beings. Even abstract geometrical shapes moving on a computer screen are perceived as being alive [53], particularly if they seem to interact purposefully with their environment [54]. According to Bartneck and Forlizzi [15], “autonomous actions will be perceived as intentional behavior which is usually only ascribed to living beings”. People automatically consider autonomously acting agents responsible for their actions [55]. Moreover, unexpected behaviors, such as rebellious actions, are especially likely to be perceived as autonomous [56]. Thus, when a robot provides evidence of its autonomy regarding the decision whether it stays switched on, it is more likely to be perceived as a living social agent with personal beliefs and desires. Switching the robot off would resemble interfering with its personal freedom, which is morally reprehensible when done to a human. Thus, people should be more inhibited about switching off the robot when it displays protest and fear about being turned off. Consequently, the following reactions are hypothesized:

H3.1: Individuals more often choose to leave the robot switched on when it voices an objection to being switched off compared to when it remains silent.
H3.2: Individuals take more time to switch off the robot when it voices an objection to being switched off compared to when it remains silent.

In addition to the main effects of the social interaction and the robot’s objection, an interaction effect of these two factors should be considered. During the social interaction, the robot should already elicit the impression of autonomous thinking and feeling by sharing stories about its personal experiences and its preferences and aversions. In combination with the robot’s request to be left on, this impression of human-like autonomy should become more salient and convincing. Consequently, the following effects are assumed:

H4.1: The intention to switch off the robot is especially low when the preceding interaction is social and the robot voices an objection.

H4.2: Individuals take especially long to switch off the robot when the preceding interaction is social and the robot voices an objection.