Introduction

In 2013, the political scientist Jörg Friedrichs expressed his frustration with the Intergovernmental Panel on Climate Change (IPCC). 1 Friedrichs stated that in 2014 the IPCC would release three reports “packed with facts and figures” documenting “what we know about climate change, what we know about the consequences, and how we might deal with them.” He then questioned: “But are we really suffering from a lack of knowledge?”, adding:

Frankly, we have been here before. Every five or six years, the IPCC tries to shake us up with another avalanche of paper. There have been four assessment reports since the first one appeared in 1990, and this one is number five. Every report is more detailed and more confident about the man-made nature of climate change, but essentially it’s more of the same.

The IPCC, he argued, presumes their provision of an increasingly irrefutable “critical mass of knowledge” will promote greater awareness and then “galvanise action …” Herein lies their motivation to provide even more knowledge. However, despite the IPCC’s “unshakeable faith in the transformative power of knowledge,” their track record says it all: their reports have had no tangible effect on emissions. To be sure, general international awareness of human-induced heat-trapping gases and the “greenhouse effect” has increased over the last few decades (Brechin and Bhandari 2011; Capstick et al. 2015). Using the U.S.A., the world’s second highest per capita emitter of heat-trapping gases, as an example, 2 awareness of the greenhouse effect increased from 39% in 1986, to 58% in 1988, to 90% in 2006 (Nisbet and Myers 2007, p. 445). Despite this increasing awareness, however, neither U.S. emissions nor global emissions have decreased. Before and across this period, increasing worldwide awareness of climate catastrophe instead correlates positively with emissions. Consider, for example, the following time series graph of total greenhouse gas (GHG) emissions (Figure 1): 3

Friedrichs’ point is that, despite the IPCC’s repeated warnings of ocean acidification, melting icecaps, rising sea levels, species extinction, extreme weather events, uninhabitable dead zones, mass migration and climate-related wars, humankind’s collective response has been little more than to continue contributing to the problem. Thus, the key question is: why, despite people’s increasing awareness of impending climate catastrophe, have total GHG emissions continued to increase? Several, sometimes competing, theories have attempted to shed light on this irrationally destructive phenomenon (see Bache et al. 2015; Weintrobe 2013; and, indirectly, Hardin 1968). The argumentative direction of the present paper, we believe, is likely to add to (perhaps even bolster) some of this important research.

The aim of this article is to present a new interpretation of Milgram’s (1963, 1974) Obedience Studies that we suspect analogously captures, on a microcosmic scale, humankind’s present failure to avert climate catastrophe. As we argue, the common denominator between both the Obedience Studies and climate catastrophe is that both involve powerful figures (Milgram/the carbon-capital elite) utilising manipulative techniques of bureaucratic organisation to push and pull their functionary helpers (the Obedience Study research team and participants/fossil fuel investors, employees and consumers) into contributing to preconceived goal achievement (obtaining a high completion rate/producing and consuming massive quantities of fossil fuels). In both cases, for all these functionary helpers to achieve the goals of the powerful, all must agree to contribute to the infliction of harm on a powerless group (the “shocked” learner/future victims of climate catastrophe). As we show, nearly all the functionary helpers do so because they not only stand to personally benefit, they also suspect that—with so many “others” involved in goal achievement—they can probably contribute to harm-infliction with impunity. In making this analogy, we hope to provide the reader with a new and potentially powerful lens through which to view the seemingly unstoppable problem of climate catastrophe and then to reflect on their own individual experience and ability to adapt.

This article is divided into four main sections. The first section (below) provides a brief overview of Milgram’s baseline procedure. The second section presents a new reinterpretation of Milgram’s Obedience Studies (Russell 2018, 2019). The third section illustrates how Milgram’s experiments capture, in the laboratory setting, key causal elements of humankind’s present inability to avert climate catastrophe. The fourth section explores a key difference between the Obedience experiments and climate catastrophe.

1. Milgram’s Baseline Procedure

In Milgram’s (1963) first official baseline condition, an actor posing as a potential participant entered a laboratory and encountered an experimenter wearing a grey lab-coat, who worked on behalf of Dr Milgram, a Yale professor. The ostensible participant (an actor) was then introduced to a waiting naïve, and actual, participant. The experimenter told both the actual and supposed participant that the experiment they had volunteered to take part in was investigating the effects of punishment on learning. They were then told that one person was required to be the teacher and the other the learner. The selection, however, was rigged to ensure that the actor was always the learner, and the actual participant the teacher. The actual participant (now teacher) watched as the experimenter strapped the learner to a chair and attached an electrode to his arm. The experimenter informed the learner that the teacher would, from another room, ask him word-pair questions. The learner was to try to correctly answer these questions by pressing one of four switches on a device that electronically transmitted his answer to the teacher. Then, the experimenter took the teacher to an adjacent room and placed them before a shock generator. This device had 30 switches aligned in 15-volt increments from 15 to 450 volts. The experimenter then instructed the teacher to give the learner a shock for each incorrect answer proffered. Each incorrect answer also warranted a shock one level higher than its predecessor. In actuality, no shocks were administered.

Upon starting, the learner frequently failed to correctly answer the teacher’s questions. Due to the requirement to inflict increasingly intense shocks, compliant teachers quickly advanced up the switchboard. Any signs of a teacher hesitating to inflict the shocks led to the pushy experimenter responding with one or several of the following prods:

Prod 1: “Please continue,” or, “Please go on.”
Prod 2: “The experiment requires that you continue.”
Prod 3: “It is absolutely essential that you continue.”
Prod 4: “You have no other choice, you must go on.” (Milgram 1963)

If a teacher reached the 300-volt switch, the learner would kick the wall several times, and thereafter fail to answer further questions. This silence implied that something had gone terribly wrong. The unperturbed experimenter instructed the teacher to treat all subsequent unanswered questions as incorrect and to inflict further intensifying shocks. When a teacher expressed concerns about the learner’s wellbeing the experimenter would state, “Whether the learner likes it or not, you must go on until he has learned the word pairs correctly. So please go on” (followed by Prods 2, 3, and 4, if necessary). If a teacher tried to clarify the lines of responsibility, the experimenter asserted, “I’m responsible for anything that happens to him. Continue please” (see Russell 2018, p. 21). In the absence of any categorical acts of defiance, the experiment was deemed complete after the teacher administered three successive 450-volt shocks. This baseline experiment, first published in 1963, produced a 65 per cent completion rate.

This baseline—termed the Remote-Feedback condition (Milgram 1974, p. 32)—was actually the first of a set of four experiments termed the Proximity series. Milgram hypothesised, and the series confirmed, that the more the “shocked” learner could be indirectly heard (wall banging), directly heard (yelping/screaming), directly seen/heard (teacher and learner in the same room), and finally touched (teacher is instructed to force the learner’s hand onto an electrified plate), the lower the completion rate—65, 62.5, 40, and 30 per cent, respectively.

Milgram found it so difficult to devise a variation where the vast majority of teachers would disobey that he became confident he could run a new and far more disturbing baseline than the Remote-Feedback condition. This fifth New Baseline (or Cardiac) condition was similar to the Remote-Feedback condition except that, when the learner was being strapped into the chair, he mentioned having a mild heart condition. It also differed in that the learner’s “pain” from being “shocked” was conveyed by way of intensifying verbal protests. For example, by the 150-volt switch, a panicked learner mentioned his heart was bothering him. By the 300-volt switch, the now screaming learner mentioned, for the third and final time, that he was having heart problems. After the 345-volt switch, the learner went silent, implying he had, at least, been rendered unconscious. To this, the experimenter urged the teacher to continue. The New Baseline also obtained a 65 per cent completion rate (Russell 2018, pp. 21–22).

Thereafter, using the New Baseline as his model procedure, Milgram undertook more than 20 slight variations. Key variations, as far as this article is concerned, include the condition in which, for incorrect answers, teachers were free to inflict shocks of any intensity of their choosing (2.5 per cent completion rate) (Milgram 1974, p. 70). During the last variation—the Relationship condition—teachers were encouraged to inflict intensifying shocks on a learner who was at least an acquaintance, often a friend, and occasionally a family member (just prior to the start of the experiment, learners were covertly informed of the study’s actual purpose and then instructed how to react to being “shocked”). This variation generated a 15 per cent completion rate (Russell 2014b). Despite Milgram’s experiments encountering strong methodological criticisms (see Perry 2012), because the baseline procedure has been independently replicated many times (see Blass 2012), it has passed the most important criterion of the scientific method.

Why did most participants choose what astounded viewers of Milgram’s documentary film so confidently identified as wrongdoing—why did most decide to “harm” instead of help an innocent person? Although Blass believes Milgram’s (1974) own theory is his book’s “weakest” section (Blass 2004, p. 216), other theoretical contributions have emerged (see Erdos 2013, 2015). The last five of these references, we believe, hold the most potential for better understanding humankind’s present failure to avert impending climate catastrophe.

2. The Obedience Studies Reinterpreted

Milgram, who was Jewish, was both intrigued and horrified by the Holocaust. He wondered what would happen if, as ordinary Germans did during World War Two, average Americans were ordered to hurt others. He invented a rudimentary psychology experiment where participants were instructed to inflict punishment in the form of electric “shocks” on an actor pretending to be an incompetent learner. To determine the research idea’s potential, Milgram tasked his students with running a pilot study (see Russell 2018, pp. 61–64). During this first pilot, many participants across a few conditions inflicted every shock asked of them. It was then that Milgram likely sensed enormous potential in his research idea and thereafter decided to run an official research programme that had two main goals. First, because nobody would be surprised by an experiment that obtained a low rate of obedience to hurt an innocent person, Milgram knew his first official baseline experiment had to create what he termed “the strongest obedience situation”—it had to “maximize obedience” (quoted in Russell 2011, pp. 149, 158). Second, to unravel why most participants in the first baseline inflicted every shock, as mentioned, he planned to undertake a variety of slight baseline variations. Through a gradual process of elimination, Milgram hoped these variations would lead him towards a theory capable of explaining why most participants completed the baseline.

To achieve his goals, Milgram needed help in the form of financial backing, professional laboratory facilities, technical equipment, and numerous technicians, research assistants, and actors. Converting the student-run pilots into an official research programme was a massive logistical undertaking. To this end, Milgram had to design a meticulous and well-coordinated participant-processing and data-collecting organisational process where, across many months, all helpers sequentially performed their specialist roles.

Thus, Milgram’s strategy for ensuring all helpers performed their specialist roles involved both easing their individual fears and appealing to their varied self-interested needs and desires. Whether the helper was an individual or an institution, Milgram applied what he anticipated would be the most successful individually tailored motivational formula—quid pro quo arrangements in which benefits are provided in exchange for services rendered. In the end, Milgram and his helpers, armed with similar rationalisations, collectively resolved the moral dilemma over whether to participate in a potentially harmful study. They all did so by being convinced and/or opportunistically tempted into making their essential specialist contributions to the Obedience Study’s participant-processing and data-collecting organisational process (Russell 2018, pp. 155–98).

One might suspect that Milgram and his helpers would have felt anxious over the possibility of harming innocent people, especially after weighing this cost up against their mere “scientific,” personal, or organisational benefits. This is especially so considering that both during and soon after the official data collection process, at least two stressed participants informed Milgram they thought they were going to have—or perhaps had—a heart attack (see Russell 2018, p. 117). However, alleviating such concerns was the fact that, as Milgram drew all his helpers into fulfilling their specialist roles, the issue of individual responsibility for harm-infliction underwent a subtle yet powerful transformation. That is, when Milgram and all his helpers agreed to perform their specialist roles, they unwittingly became links in a goal-directed, assembly line-like bureaucratic process.

To clarify, intrinsic to bureaucracy is the division of labour, which is where an organisational goal is subdivided into numerous tasks. These numerous tasks are then performed by a variety of specialist functionary helpers ( Ritzer 1996, p. 18 ; see also Bauman 1989 ). For functionaries, however, this compartmentalisation can promote a disjuncture between cause (e.g., making partial contributions to the achievement of Milgram’s goals) and any negative effects generated by goal achievement (e.g., the infliction of intense stress on participants). This disjuncture between cause and effect among functionaries can stimulate what Russell and Gregory term “responsibility ambiguity” ( Russell and Gregory 2015, p. 136 ). Responsibility ambiguity is a metaphorical haziness, which renders debatable who exactly is aware of and responsible for any harm inflicted by the organisational process. Responsibility ambiguity also makes it difficult for arbiters to later determine who should be held to account for such harmful outcomes.

This metaphorical haziness produces two main types of functionaries. The first type of functionary is genuinely unaware of their personal responsibility in contributing to a harmful outcome because they remain structurally oblivious of the eventually destructive consequences (like, for example, the engineer who constructed the shock machine and who, in all likelihood, remained unaware of what Milgram planned to do with it). The second type of functionary, however, is aware that harm may be inflicted but opportunistically decides to make their partial contributions to the wider process because of a suspicion they will be rewarded in the short term and never punished in the long term. That is, hidden within the fog of responsibility, the second type of functionary chooses to contribute to harm infliction because they sense that should anyone later question them about their unethical decision to proceed, they can always claim to be the first type of functionary—they (apparently) did not know about harm-infliction or they (apparently) did not believe any harm was inflicted (a strain-resolving mechanism, or SRM). Below we will argue that Milgram and the NSF were examples of second-type functionaries.

In addition, because so many other functionaries are involved in contributing to the wider process, second-type functionaries can also—if they so choose—draw on other powerful sources of responsibility ambiguity like, for example, relying on the diffusion or displacement of responsibility. To clarify, the diffusion of responsibility is where all functionaries across the organisational process make partial contributions to a harmful outcome and, as a result, second-type functionaries feel only fractionally responsible for the harm inflicted. Because no single contributor feels most responsible for the harm inflicted, ultimate responsibility gets “diluted” across all contributors. With everyone just a tiny bit responsible, the diffusion of responsibility reduces the psychological strain felt by any one second-type functionary (SRM).

The displacement of responsibility is where second-type functionaries choose to “pass the buck” of responsibility for their contributions elsewhere (SRM) (Russell and Gregory 2015, p. 136). This passing of the buck is possible because so many other functionaries are involved in contributing to goal achievement. For example, had a participant been seriously injured, Williams—the inflictor of that stress—could blame Milgram: the experimenter was only following his boss’s instructions. However, Milgram, the principal investigator, was only undertaking the kind of ground-breaking research that Yale University pressured non-tenured faculty into pursuing—he too was only doing his job. Perhaps the NSF committee or Chairman Buxton (Milgram’s boss) was most responsible—they were the highest-ranking functionaries that allowed the research to proceed. The NSF committee and Buxton, however, never directly hurt anybody. Furthermore, these high-ranking figures could add that Milgram never informed them of his intentions to run particularly unethical variations (for example, Milgram independently decided to run the Relationship condition, which was never mentioned in any of his research proposals). As with many leaders of destructive organisations, to some degree they (apparently) never knew what their underlings were up to. Perhaps in the end the most blameworthy entity was (rather conveniently) the reified ideological pursuit of “scientific knowledge” (SRM). 5

The point is, for opportunistic functionaries working across a malevolent bureaucracy, there is a sense that should anyone later question their decisions to contribute, they can claim plausible deniability—they did not know or believe harm was being inflicted. If indisputable evidence undermines the truthfulness of such denials, because of the difficulty in identifying any one person as ultimately responsible, all can claim to be only a tiny bit responsible or simply blame someone (or something) else as more responsible for their eventually harmful contributions. Ultimately, because “others” were involved, many Obedience-Study functionaries likely sensed that they could probably continue making (and benefiting from) their individual contributions to the wider organisational process and, even if a participant was seriously harmed, they could do so with probable impunity. Feeling they were “covered” likely explains why Milgram and his helpers risked partaking in such dangerous research (see Russell 2018, pp. 166, 173, 175).

A second factor that likely enhanced feelings of responsibility ambiguity and thus encouraged all helpers to make their harmful contributions was that none had to directly (physically) inflict “harm.” This included the coercive experimenter who delivered participant stress by way of mere words.

A third factor that likely enhanced feelings of responsibility ambiguity and helped push and pull all helpers into role fulfilment is termed bureaucratic momentum (Russell 2018, p. 179). Bureaucratic momentum takes hold when functionaries experience real or imagined pressure to perform their specialist roles by preceding and sometimes succeeding functionary links in the organisational chain. This coercive force appears to be generated by the cumulative momentum of the many simultaneously moving functionary cogs bearing down and exerting pressure on every singular cog—experienced in the form of group pressure whereby, “to get along,” individuals feel the push to “go along” (a binding factor, or BF). For example, workers on an assembly line often feel pressured into quickly fulfilling their specialist roles so co-workers can perform their roles. The pressure of bureaucratic momentum to, for example, contribute to harm-infliction is difficult to resist because a potentially uncooperative functionary must: (1) sacrifice whatever self-interested benefits were on offer in exchange for role performance; and (2) be willing to deprive other (potentially belligerent) functionaries of whatever benefits they anticipated receiving for contributing to goal achievement. It is easier if all functionaries just do their bit for organisational goal achievement. Bureaucratic momentum can enhance feelings of responsibility ambiguity (and thus reduce strain) across each link in the division of labour because if individuals feel pressured by other functionaries (or even by structures such as, for example, the set speed of an assembly-line process) into fulfilling their roles, then these individuals can be tempted into blaming those others (and those structures) as most responsible for their actions (SRM).

During the Obedience Studies, it is likely that Milgram and all his helpers felt the push and pull of bureaucratic momentum. For example, to please his generous NSF funders, Milgram felt pressure to collect a full set of data. Collecting a full dataset, however, required the long-term retention of the experimenter’s acting services. In return for longer-term employment, Williams likely felt contractually obliged to continue placing participants under enormous stress (see Russell 2018, pp. 179–259 ).

Finally, the foot-in-the-door phenomenon also likely helped reduce Milgram and his helpers’ feelings of anxiety over their contributions to a potentially harmful study (SRM/BF). The foot-in-the-door phenomenon is where persons are more likely to agree to a significant request if it is preceded by a comparatively insignificant request (Freedman and Fraser 1966). For example, it could be argued that, after Milgram and his research team agreed to undertake the first official and relatively benign Remote-Feedback baseline condition (learner banged wall), they became more amenable or desensitised to undertaking the fifth and far more radical New Baseline (learner with heart condition screams in agony). With the entire research team having agreed to undertake the more radical New Baseline, they were more amenable to helping run the final Relationship condition (participant pushed into inflicting agonising “shocks” on someone they knew). The point being, the slippery slope of the foot-in-the-door phenomenon—small and barely perceivable steps in an increasingly radicalised direction—likely had both a strain-resolving and binding influence on all those working within the Obedience Study organisational chain.

In summary, it can be argued that a morally inverted and ideologically essential “scientific” rationale for inflicting harm (SRM), personal/organisational benefits (BF), the option of plausible deniability or the diffusion/displacement of responsibility (SRM), an indirect means of harm-infliction (SRM), bureaucratic momentum (SRM/BF), and the foot-in-the-door phenomenon (SRM/BF) all likely contributed to Milgram and his helpers’ decision to collect a full set of ethically questionable data.

Why, then, did most participants complete the New Baseline? Before it is possible to address this question, it is important to note that participants were the last functionary link in Milgram’s wider data-collecting organisational chain. Because of this last-link position, Russell and Gregory (2015) argued that participants were susceptible to the same kinds of pushes (BFs) and pulls (SRMs) that affected all the other functionary links further up the bureaucratic chain.

For example, when the participant entered the laboratory the experimenter attempted to convince them that shocking an innocent person was of great scientific importance because doing so would help unravel the effects of punishment on learning ( Milgram 1974, p. 18 ). Although this was a slightly different strain-resolving rationale to that which Milgram provided to his helpers, it was similar in that the ideological pursuit of scientific discovery was deployed whereby something evil (harming an innocent person) was morally inverted into something good (serving science) ( Milgram 1974, p. 187 ) (SRM).

At the start of the experiment, nearly every participant inflicted the first six relatively light shocks (15–90 volts). However, doing so saw them fulfil the most important criterion of the foot-in-the-door phenomenon (SRM/BF): compliance with one or several small requests which, unbeknownst to them, were to be followed by some far greater ones. For participants—somewhat as it did for Milgram and his helpers—the foot-in-the-door phenomenon likely had two important consequences:

(a) it engages subjects in committing precedent-setting acts of obedience before they realize the ‘momentum’ which the situation is capable of creating, and the ‘ugly direction’ in which that momentum is driving them; and (b) it erects and reinforces the impression that quitting at any particular level of shock is unjustified (since consecutive shock levels differ only slightly and quantitatively). (Gilbert 1981, p. 692)

Across many small 15-volt steps, most participants ended up inflicting intensifying “shocks.”

If the foot-in-the-door technique failed and a participant hesitated to inflict further shocks, the experimenter unleashed his barrage of binding prods: “It is absolutely essential that you continue,” and the like. This pressure to inflict further shocks was arguably an extension of the bureaucratic momentum working its way down the organisational chain—goal achievement required that all helpers fulfil their specialist roles (SRM/BF).

One particularly powerful SRM that made it psychologically much easier for participants to fulfil their specialist role was the research programme’s specific means of inflicting harm. Because the shock machine directly inflicted the “painful” blows, much like all the other functionary links further up the organisational chain, participants were also, technically speaking, indirect inflictors of harm—they “only” flicked switches (Russell and Gregory 2015, pp. 143–46). Thus, and it is no coincidence, every human link across Milgram’s organisational chain avoided the stressful act of direct harm-infliction—an outcome that, among every helper, greatly advanced feelings of responsibility ambiguity.

As participants contemplated the ugly direction in which they were headed, a tempting opportunity emerged: if they unquestioningly did as the experimenter asked, then participants could blame the experimenter for their shock-inflicting actions (Eckman 1977, p. 97). That is, the prods likely tempted many participants to suspect that, despite feeling they were engaged in wrongdoing, because the experimenter was explicitly demanding they continue, the participants may not have appeared to others present as the person most responsible for the learner’s pain. Thus, confrontation-averse participants were encouraged to suspect they could displace responsibility for their actions on to the experimenter because the latter said, and the former wanted to believe, it was “essential” to continue, that they had “no choice” but to do so, and that only the experimenter was “responsible.” It is likely that this last prod proved extremely tempting for many participants because appearing to believe that only the experimenter was responsible not only enabled them to avoid a confrontation, it also absolved them from moral and legal culpability for continuing (see Russell 2018, pp. 199–230).
When participants capitalised on the responsibility ambiguity inherent within this situation by “passing the buck,” this helped reassure many of them that—similar to all the other second-type functionaries involved—they could probably personally benefit from their individual contributions to the system. Most importantly, they could probably also do so with impunity. Effectively, the prods planted a dark seed in participants’ minds: they gifted them with a credible and tempting excuse for their decision to continue engaging in wrongdoing (Russell and Gregory 2015, p. 142).

Therefore, participants could either: (1) critically rebuke the experimenter and refuse to inflict further shocks; or (2) accept the experimenter’s assurances that they could continue shocking the learner and not only avoid a confrontation (benefit), but do so with probable impunity. When participants contemplated the latter option, the psychological noose inherent in the experimental procedure tightened. Faced by this dilemma, some participants refused to continue, but most chose what for them was the easier self-interestedly “beneficial” option: inflict more shocks.

It is difficult for observers to comprehend how most participants could turn their backs on the learner, and do so for such a trivial reason. However, because observers are outside this situation, they are typically oblivious of the basic procedure’s many subtle manipulative forces. For example, observers are often insensitive to the reality that: (1) all the symbols of power—the “Yale”-sponsored experimenter, “Dr” Milgram and his prestigious “Ivy League” institution—bolstered the perception that harming an innocent person was normative “model” behaviour; (2) because “obedient” participants are encouraged to feel they are free to pursue wrongdoing, they are led to suspect that only they will ever know about their immoral decision to prioritise their less important desires over the learner’s clearly more important needs; (3) there were many small steps that gradually and imperceptibly lured participants well beyond what felt like a point of no return; and (4) the wider organisational process was structured in a way whereby—as with every other functionary helper involved—unethical choices always felt easy and ethical ones personally burdensome.

That Milgram abused his power became clearer after the publication of the first baseline experiment in 1963. In it he alluded to the possibility that he had captured elements of the Holocaust in the laboratory setting, revealing results he expected would impress the scholarly community. However, the first scholarly response to Milgram’s publication was Baumrind’s (1964) scathing ethical critique—Milgram, she claimed, had used and abused his innocent participants, perhaps, she implies, to benefit his academic career. Baumrind bolstered her point about participant abuse by citing a quote from Milgram’s article:

I observed a mature and initially poised businessman enter the laboratory smiling and confident. Within 20 min he was reduced to a twitching, stuttering wreck, who was rapidly approaching a point of nervous collapse. (Milgram 1963, p. 377)

Another omission was that, although pre-Baumrind (1964) Milgram promised to publish the Relationship condition, after the publication of her critique he never again mentioned the variation’s existence (Russell 2014b). Had Baumrind caught wind of this condition—where, for example, a father was pushed into inflicting shocks on his yelping son—one can only imagine the ethical firestorm she would have unleashed on him. In his book’s draft notes Milgram justified his (mis)treatment of participants by arguing that the research’s enormous benefits—greater knowledge—outweighed the costs:

Under what conditions does one ask about destructive obedience? Perhaps under the same conditions that a medical researcher asks about cancer or polio; because it is a threat to human welfare and has shown itself a scourage [sic] to humanity. (quoted in Russell 2018, p. 118)

Of course, if the purist acquisition of knowledge was his greatest priority, why did he fail to publish the Relationship condition—an experiment that might have aided his (apparently) all-important pursuit of greater knowledge? In Milgram’s private notes, he admitted to some regrets:

… considered as a personal motive of the author—the possible benefits that might redound to humanity—withered to insignificance alonisde [sic] the strident demands of intellectual curiosity. When an investigator keeps his eyes open throughout a [scientific] study, he learns things about himself as well as about his subjects, and the observations do not always flatter. (quoted in Russell 2018, p. 178)