Luke/SI asked me to look into what the academic literature might have to say about people in positions of power. This is a summary of some of the recent psychology results.

The powerful or elite are fast-planning abstract thinkers who take action (1) in pursuit of single or minimal objectives; they favor strict rules for their stereotyped out-group underlings (2) but are rationalizing (3) and hypocritical when it serves their interests (4), especially when they feel secure in their power. They break social norms (5, 6) or ignore context (1), behavior which turns out to be worsened rather than checked by disclosure of conflicts of interest (7), and they lie fluently without mental or physiological stress (6).

What are the powerful good for? They can help in shifting among equilibria: solving coordination problems or inducing contributions toward public goods (8), and their abstracted Far perspective can be better than the concrete Near perspective of the weak (9).

Galinsky et al 2003; Guinote 2007; Lammers et al 2008; Smith & Bargh 2008; Eyal & Liberman; Rustichini & Villeval 2012; Lammers et al 2010; Kleef et al 2011; Carney et al 2010; Cain et al 2005; Cain et al 2011; Eckel et al 2010; Slabu et al; Smith & Trope 2006; Smith et al 2008

These benefits may not exceed the costs (is inducing contributions all that useful given improved market mechanisms like assurance contracts, made increasingly famous thanks to Kickstarter?). Now, to forestall objections from someone like Robin Hanson (that these traits, if negative, can be ameliorated by improved technology and organizations, and that the rest just represents our egalitarian forager prejudice against the elites and corporations who gave us the wealthy modern world), I would point out that these traits look like they would be quite effective at maximizing utility, and so would be selected for in future settings as well…

(Additional cautions include that, in order to control for all sorts of confounds, these are generally small WEIRD samples in laboratory or university settings involving small-scale power shifts, priming, or other cues; as such, all the usual criticisms apply.)

“Power increases hypocrisy: Moralizing in reasoning, immorality in behavior”, Lammers et al 2010; warning: co-author Stapel committed data fraud, but Lammers states that the Levelt committee cleared this paper.

In five studies, we explored whether power increases moral hypocrisy (i.e., imposing strict moral standards on other people but practicing less strict moral behavior oneself). In Experiment 1, compared with the powerless, the powerful condemned other people’s cheating more, but also cheated more themselves. In Experiments 2 through 4, the powerful were more strict in judging other people’s moral transgressions than in judging their own transgressions. A final study found that the effect of power on moral hypocrisy depends on the legitimacy of the power: When power was illegitimate, the moral-hypocrisy effect was reversed, with the illegitimately powerful becoming stricter in judging their own behavior than in judging other people’s behavior. This pattern, which might be dubbed hypercrisy, was also found among low-power participants in Experiments 3 and 4. We discuss how patterns of hypocrisy and hypercrisy among the powerful and powerless can help perpetuate social inequality.

…feelings of power reduce sensitivity to social disapproval (Emerson, 1962; Thibaut & Kelley, 1959), thus reducing the grip of social norms and standards on power holders’ behavior (Galinsky et al., 2008). As a result, even very strong norms, such as those regulating sexual behavior or compassion, are often ignored by the powerful (Bargh, Raymond, Pryor, & Strack, 1995; Van Kleef et al., 2008).

Emerson, R.M. (1962). Power-dependence relations. American Sociological Review, 27, 31–41

Thibaut, J.W., & Kelley, H.H. (1959). The social psychology of groups. New York: Wiley & Sons

Galinsky, A.D., Magee, J.C., Gruenfeld, D.H., Whitson, J., & Liljenquist, K.A. (2008). Social power reduces the strength of the situation: Implications for creativity, conformity, and dissonance. Journal of Personality and Social Psychology, 95, 1450–1466

Powerful people who feel that their position is illegitimate are less inclined to assertively take what they want (Lammers, Galinsky, Gordijn, & Otten, 2008) and at the same time are less inclined to judge others for doing so, compared with people who feel their power is deserved (Chaurand & Brauer, 2008). Therefore, in our final study, we independently manipulated power and its legitimacy to test whether legitimacy crucially moderates the effect of power on hypocrisy.

Lammers, J., & Stapel, D.A. (2009). How power influences moral thinking. Journal of Personality and Social Psychology, 97, 279–289

Chaurand, N., & Brauer, M. (2008). What determines social control? People’s reactions to counternormative behaviors in urban environments. Journal of Applied Social Psychology, 38, 1689–1715

“Moral Hypocrisy, Power and Social Preferences”, Rustichini & Villeval 2012:

We show with a laboratory experiment that individuals adjust their moral principles to the situation and to their actions, just as much as they adjust their actions to their principles. We first elicit the individuals’ principles regarding the fairness and unfairness of allocations in three different scenarios (a Dictator game, an Ultimatum game, and a Trust game). One week later, the same individuals are invited to play those same games with monetary compensation. Finally in the same session we elicit again their principles regarding the fairness and unfairness of allocations in the same three scenarios. Our results show that individuals adjust abstract norms to fit the game, their role and the choices they made. First, norms that appear abstract and universal take into account the bargaining power of the two sides. The strong side bends the norm in its favor and the weak side agrees: Stated fairness is a compromise with power. Second, in most situations, individuals adjust the range of fair shares after playing the game for real money compared with their initial statement. Third, the discrepancy between hypothetical and real behavior is larger in games where real choices have no strategic consequence (Dictator game and second mover in Trust game) than in those where they do (Ultimatum game). Finally the adjustment of principles to actions is mainly the fact of individuals who behave more selfishly and who have a stronger bargaining power.

…Individuals destroy the resources of others because of envy (Mui, 1995; Maher, 2010; Charness et al., 2010; Harbring and Irlensbusch, 2011) or for the joy of destruction (Zizzo and Oswald, 2001; Abbink and Sadrieh, 2009); the power of public office sometimes leads politicians to use it for their personal gain (Aidt, 2003); feelings of entitlement push leaders to take more than followers from a common resource (de Cremer and van Dijk, 2005).

Mui, V.L. (1995). The economics of envy. Journal of Economic Behavior & Organization, 26(3), 311-336.
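(To make the setup concrete for readers unfamiliar with these three games, here is a minimal sketch of their standard payoff rules in Python; the function names, parameters, and the conventional tripling multiplier in the Trust game are my illustration, not the paper’s exact protocol.)

```python
def dictator(endowment, give):
    """Dictator game: one player unilaterally splits the endowment;
    the recipient has no say. Returns (dictator, recipient) payoffs."""
    return endowment - give, give

def ultimatum(endowment, offer, accept):
    """Ultimatum game: the proposer offers a split, but the responder
    can reject, in which case both players get nothing."""
    return (endowment - offer, offer) if accept else (0, 0)

def trust(endowment, sent, returned_fraction, multiplier=3):
    """Trust game: whatever the sender transfers is multiplied
    (conventionally tripled); the trustee then chooses what fraction
    of the multiplied sum to send back."""
    pot = sent * multiplier
    returned = pot * returned_fraction
    return endowment - sent + returned, pot - returned

# A trustee returning half the tripled transfer leaves the sender
# better off than not trusting at all:
print(trust(10, 5, 0.5))  # (12.5, 7.5)
```

The strategic differences the abstract highlights fall directly out of these rules: the Dictator game and the trustee’s return choice have no strategic consequence (nobody can punish them), while the Ultimatum proposer must anticipate rejection.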

Maher, B. (2010). Research Integrity: Sabotage! Nature, 467, 30 September, 516-518.

G. Charness, D. Masclet, M.C. Villeval. (2010). Competitive Preferences and Status as an Incentive: Experimental Evidence. IZA Discussion Paper 5034, Bonn

Harbring, C., Irlensbusch, B. (2011). Sabotage in Tournaments: Evidence from the Laboratory, Management Science, 57(4), 611-627

Zizzo, D., Oswald, A.J. (2001). Are People Willing to Pay to Reduce Others’ Incomes? Annales d’Economie et de Statistique, 63-64, 39-62

Abbink, K., Sadrieh, A. (2009). The pleasure of being nasty. Economics Letters, 105(3), 306-308.

Aidt, T.S. (2003). Economic Analysis of Corruption: A Survey. The Economic Journal, 113, F632-F652.

de Cremer, D., van Dijk, E. (2005). When and why leaders put themselves first: Leader behaviour in resource allocations as a function of feeling entitled. European Journal of Social Psychology, 35, 553-563.

Social psychologists studying moral hypocrisy have shown that individuals evaluate the transgression of fair principles more negatively when it is enacted by others than when enacted by themselves (Valdesolo and DeSteno, 2008).

Valdesolo, P., DeSteno, D. (2008). The duality of virtue: Deconstructing the moral hypocrite. Journal of Experimental Social Psychology, 44(5), 1334-1338.

Such an illusory preference for fairness has been identified by Dana, Weber and Kuang (2007) (see also Larson and Capra, 2009; Grossman, 2010; van der Weele, 2012). Indeed, fairness decreases substantially when the link between fairness and outcome is obfuscated. The choice to play fair is frequently motivated by the willingness to appear fair more than by the willingness to produce a fair outcome, which is why greater anonymity leads to more selfish transfers in the dictator game (Andreoni and Bernheim, 2009; Ariely et al., 2009).

Dana, J., Weber, R.A., Xi Kuang, J. (2007). Exploiting moral wiggle room: experiments demonstrating an illusory preference for fairness. Economic Theory, 33(1), 67-80

Larson, T., Capra, M. (2009). Exploiting moral wiggle room: Illusory preference for fairness? A comment. Judgment and Decision Making, 4(6), 467-474

Grossman, Z. (2010). Strategic ignorance and the robustness of social preferences. Working paper, University of California at Santa Barbara

Van der Weele, J. (2012). When ignorance is innocence: on information avoidance in moral dilemmas. SSRN working paper.

Andreoni, J., Bernheim, B.D. (2009). Social Image and the 50-50 Norm: A Theoretical and Experimental Analysis of Audience Effects. Econometrica, 77(5), 1607-1636

Ariely, D., Bracha, A., Meier, S. (2009). Doing Good or Doing Well? Image Motivation and Monetary Incentives in Behaving Prosocially. American Economic Review, 99(1), 544-555

“Breaking the Rules to Rise to Power: How Norm Violators Gain Power in the Eyes of Others”, Kleef et al 2011:

Four studies support this hypothesis. Individuals who took coffee from another person’s can (Study 1), violated rules of bookkeeping (Study 2), dropped cigarette ashes on the floor (Study 3), or put their feet on the table (Study 4) were perceived as more powerful than individuals who did not show such behaviors. The effect was mediated by inferences of volitional capacity, and it replicated across different methods (scenario, film clip, face-to-face interaction), different norm violations, and different indices of power (explicit measures, expected emotions, and approach/inhibition tendencies).

…“Power tends to corrupt, and absolute power corrupts absolutely,” wrote Lord Acton to Bishop Mandell Creighton in 1887. This classic adage not only reflects popular sentiments about power; it is also supported by scientific research (e.g., Kipnis, 1972).

Kipnis, D. (1972). Does power corrupt? Journal of Personality and Social Psychology, 24, 33-41

Individuals who feel powerful are more likely to act in goal-congruent ways (e.g., by switching off an annoying fan) than those who feel less powerful (Galinsky, Gruenfeld, & Magee, 2003). Powerful individuals are also more likely to take risks (Anderson & Galinsky, 2006), show approach-related tendencies and goal-directed action (Guinote, 2007; Lammers, Galinsky, Gordijn, & Otten, 2008; Smith & Bargh, 2008), express their emotions (Hecht & Lafrance, 1998), act based on their dispositional inclinations (Chen, Lee-Chai, & Bargh, 2001) and momentary desires (Van Kleef & Cote, 2007), and ignore situational pressures (Galinsky et al., 2008).

Galinsky, A. D., Gruenfeld, D. H., & Magee, J. C. (2003). From power to action. Journal of Personality and Social Psychology, 85, 453-466

Anderson, C., & Galinsky, A. D. (2006). Power, optimism and risk-taking. European Journal of Social Psychology, 36, 511-536

Guinote, A. (2007). Power and goal pursuit. Personality and Social Psychology Bulletin, 33, 1076-1087

Lammers, J., Galinsky, A. D., Gordijn, E. H., & Otten, S. (2008). Illegitimacy moderates the effects of power on approach. Psychological Science, 19, 558-564

Smith, P. K., & Bargh, J. A. (2008). Nonconscious effects of power on basic approach and avoidance tendencies. Social Cognition, 26, 1-24

Hecht, M. A., & Lafrance, M. (1998). License or obligation to smile: The effect of power and sex on amount and type of smiling. Personality and Social Psychology Bulletin, 24, 1332-1342

Chen, S., Lee-Chai, A. Y., & Bargh, J. A. (2001). Relationship orientation as a moderator of the effects of social power. Journal of Personality and Social Psychology, 80, 173-187

Van Kleef, G. A., & Cote, S. (2007). Expressing anger in conflict: When it helps and when it hurts. Journal of Applied Psychology, 92, 1557-1569

Galinsky, A. D., Gruenfeld, D. H., Magee, J. C., Whitson, J. A., & Liljenquist, K. A. (2008). Power reduces the press of the situation: Implications for creativity, conformity, and dissonance. Journal of Personality and Social Psychology, 95, 1450-1466

This behavioral disinhibition makes powerful people more likely to exhibit socially inappropriate behavior. Compared to lower power individuals, powerful individuals are likely to take more cookies from a common plate, eat with their mouths open, and spread crumbs (Keltner et al., 2003); interrupt conversation partners and invade their personal space (DePaulo & Friedman, 1998); fail to take another’s perspective (Galinsky, Magee, Inesi, & Gruenfeld, 2006); ignore other people’s suffering (Van Kleef et al., 2008); stereotype (Fiske, 1993) and patronize others (Vescio, Gervais, Snyder, & Hoover, 2005); cheat (Lammers, Stapel, & Galinsky, 2010); take credit for the contributions of others (Kipnis, 1972); treat other people as a means to their own ends (Gruenfeld, Inesi, Magee, & Galinsky, 2008); and sexualize and harass low-power women (Bargh, Raymond, Pryor, & Strack, 1995). Powerful people also exhibit more aggression (Haney, Banks, & Zimbardo, 1973), and this is relatively acceptable to others (Porath, Overbeck, & Pearson, 2008). In fact, in several European countries the liberty to violate norms without sanction is perceived as a defining feature of the power holder (Mondillon et al., 2005). Although the powerful impose strict moral standards on others, they practice less strict moral behavior themselves (Lammers et al., 2010).

Keltner, D., Gruenfeld, D. H., & Anderson, C. (2003). Power, approach, and inhibition. Psychological Review, 110, 265-284.

DePaulo, B. M., & Friedman, H. S. (1998). Nonverbal communication. In D. Gilbert, S. T. Fiske, & G. Lindzey (Eds.), Handbook of social psychology (pp. 3-40). New York: McGraw-Hill

Galinsky, A. D., Magee, J. C., Inesi, M. E., & Gruenfeld, D. H. (2006). Power and perspectives not taken. Psychological Science, 17, 1068-1074

Van Kleef, G. A., Oveis, C., Van Der Lowe, I., LuoKogan, A., Goetz, J., & Keltner, D. (2008). Power, distress, and compassion: Turning a blind eye to the suffering of others. Psychological Science, 19, 1315-1322

Fiske, S. T. (1993). Controlling other people: The impact of power on stereotyping. American Psychologist, 48, 621-628

Lammers, J., Stapel, D. A., & Galinsky, A. D. (2010). Power increases hypocrisy: Moralizing in reasoning, immorality in behavior. Psychological Science, 21, 737-744; WARNING: Stapel! Lammers states that this paper is untainted:

“IMPORTANT: Regarding the scientific fraud of my former supervisor Stapel: the committee Levelt has investigated all my work with Stapel. All my work on the topic of power has been cleared from suspicion of data-fraud. This research is all based on data that I collected myself or collected together with other co-authors (i.e. not Stapel). There is one paper (on racism in legal decisions) where I was misled. This paper contains false data. It is currently being retracted.”

Gruenfeld, D. H., Inesi, M. E., Magee, J. C., & Galinsky, A. D. (2008). Power and the objectification of social targets. Journal of Personality and Social Psychology, 95, 111-127

Bargh, J. A., Raymond, P., Pryor, J. B., & Strack, F. (1995). Attractiveness of the underling: An automatic power-sex association and its consequences for sexual harassment and aggression. Journal of Personality and Social Psychology, 68, 768-781

Haney, C., Banks, C., & Zimbardo, P. (1973). Interpersonal dynamics in a simulated prison. International Journal of Criminology and Penology, 1, 69-97

Porath, C. L., Overbeck, J., & Pearson, C. M. (2008). Picking up the gauntlet: How individuals respond to status challenges. Journal of Applied Social Psychology, 38, 1945-1980

Mondillon, L., Niedenthal, P. M., Brauer, M., Rohman, A., Dalle, N., & Uchida, Y. (2005). Beliefs about power and its relation to emotional experience: A comparison of Japan, France, Germany, and United States. Personality and Social Psychology Bulletin, 31, 1112-1122

…research on adolescent aggression indicates that bullying behavior is associated with prestige (Savin-Williams, 1976; Sijtsema, Veenstra, Lindenberg, & Salmivalli, 2009).

Savin-Williams, R. C. (1976). An ethological study of dominance formation and maintenance in a group of human adolescents. Child Development, 47, 972-979

Sijtsema, J. J., Veenstra, R., Lindenberg, S., & Salmivalli, C. (2009). Empirical test of bullies’ status goals: Assessing direct goals, aggression, and prestige. Aggressive Behavior, 35, 57-67

“Morality and Psychological Distance: A Construal Level Theory Perspective”, Eyal & Liberman:

In this chapter, we propose one answer to the question of when values and moral principles play a central role in people’s judgments and plans. We explore the possibility that values and moral principles are more prominent in judgments and predictions regarding psychologically more distant events. This perspective is based on construal level theory (CLT; Liberman & Trope, 2008; Liberman, Trope, & Stephan, 2007; Trope & Liberman, in press), according to which the construal of psychologically more distant situations highlights more abstract, high-level features. Because values and moral rules tend to be abstract and general, people are more likely to use them in construing, judging, and planning with respect to psychologically more distant situations.

For example, Nussbaum, Trope, and Liberman (2003, Study 2) conceptualized personal dispositions as high-level construals and situational constraints as low-level construals and demonstrated that people expect others to express their personal dispositions and act consistently across different situations in the distant future more than in the near future. In the study, participants imagined an acquaintance’s behavior in four different situations (e.g., a birthday party, waiting in line at the supermarket) in either the near future or the distant future and rated the extent to which the acquaintance would display 15 traits (e.g., behave in a friendly vs. an unfriendly manner) representative of the Big Five personality dimensions (extraversion, agreeableness, conscientiousness, emotional stability, and intellect). Cross-situational consistency was assessed by computing, for each of the 15 traits, the variance in each predicted behavior across the four situations and the correlations among the predicted behaviors in the four situations. As predicted, participants expected others to behave more consistently across distant-future situations than across near-future situations.
This finding was replicated with ratings of participants’ own behavior in different situations: Participants anticipated exhibiting more consistent traits in the distant future than in the near future (Wakslak, Nussbaum, Liberman, & Trope, 2008, Study 5).

Nussbaum, S., Trope, Y., & Liberman, N. (2003). “Creeping dispositionism: The temporal dynamics of behavior prediction”. Journal of Personality and Social Psychology, 84, 485-497

Wakslak, C. J., Nussbaum, S., Liberman, N., & Trope, Y. (2008). “Representations of the self in the near and distant future”. Journal of Personality and Social Psychology, 95, 757-773

For each scenario (e.g., national flag), participants chose between two restatements of each action. One restatement referred to an abstract moral principle (high-level construal; e.g., desecrating a national symbol) and the other restatement referred to the means of carrying out the action (low-level construal; e.g., cutting a flag to create rags). We found that distant-future transgressions were identified in moral terms more often than near-future transgressions. These findings suggest that people are more likely to think of a temporally distant action, rather than one in the near term, as having moral implications. CLT predicts similar results for other forms of psychological distance: Situations should be more readily construed in terms of moral principles when they occurred further back in the past, when they apply to more socially or spatially distant individuals or groups, and when they are less likely actually to occur. When the same actions are proximal, they are more likely to be construed in terms that are devoid of moral implications. For example, accepting minority students with lower grades into one’s university will be seen as “endorsing affirmative action” when it is unlikely to be implemented, but it will be seen in more concrete terms (e.g., as “making acceptance rules more complicated”) when it becomes more likely.

The vignettes also included situational details that rendered the transgressions harmless (low-level information; e.g., the siblings used contraceptives, they had sex just once, they kept it a secret). Participants were instructed to imagine that the transgressions would occur tomorrow (the near-future condition) or next year (the distant-future condition) and judged the extent of their wrongness.
We found that moral transgressions were judged more severely when imagined in the distant future compared to the near future. The same pattern occurred with social distance (Eyal et al., 2008, Study 3), which was manipulated by asking participants to focus either on the feelings and thoughts they experienced while reading about the events (low social distance) or to think about another person they knew, such as a colleague, a friend, or a neighbor, and focus on the feelings and thoughts that this person would experience while reading about the events (high social distance). Notice that the social distance manipulation did not involve judging one’s own versus another person’s actions, but only one’s imagined perspective. Notably, this manipulation does not support interpreting the results in terms of moral hypocrisy, according to which people judge their own moral transgressions less harshly than another person’s transgressions because they wish to appear better than others. As predicted, moral transgressions were judged more harshly when imagined from a third person perspective (high social distance) compared to one’s own perspective (low social distance). Another study (Eyal et al., 2008, Study 4) examined temporal distance effects on judgments of moral acts. Participants read vignettes that described virtuous acts related to widely accepted moral principles (high-level information; e.g., a couple adopting a disabled child) as well as low-level, situational details that rendered the acts less noble (e.g., the government offering large adoption payments). It was found that these behaviors were judged to be more virtuous when they were described as happening in the distant future rather than the near future. Temporal distance from moral transgressions was also found to affect people’s emotional responses. 
Agerstrom and Bjorklund (2009, Studies 1 and 2) asked Swedish participants to imagine situations that involved a threat to human welfare taking place in the near future (today) or in the distant future (in 30 years). For example, one scenario, set in Darfur, Africa, described a woman who was raped and beaten by the Janjaweed militia. Each scenario was followed by a description of a prosocial action that, if taken, could improve the situation (e.g., donate money). Participants rated how wrong it would be for another Swedish citizen not to take the proposed prosocial action given that they had the means to do so. They also rated how angry they would feel if the target person failed to take the prosocial action. It was found that distant-future moral failures were judged more harshly and invoked more anger than near-future moral failures. In another study, Agerstrom and Bjorklund (2009) examined whether the greater reliance on moral principles in judgments of distant-future compared to near-future transgressions would generalize to individuals’ self-perceptions. Participants rated the likelihood of engaging in prosocial actions in reaction to other people’s moral transgressions. For example, participants indicated how much money they were willing to donate to help improve the situation in Darfur. As predicted, participants were more likely to express prosocial behavioral intentions when imagining the act occurring in the more distant future. Taken together, these findings suggest that moral rules are more likely to guide people’s judgments of distant rather than proximal behaviors.

Agerström, J., & Björklund, F. (2009). Temporal distance and moral concerns: Future morally questionable behavior is perceived as more wrong and evokes stronger prosocial intentions. Basic and Applied Social Psychology, 31, 49-59

For example, individuals for whom altruism was subordinate in importance to achievement were more likely to refuse to help a fellow student in the distant future than in the near future, whereas individuals for whom achievement was subordinate to altruism were more likely to help a fellow student in the distant future than in the near future. These findings show that secondary values, which are nonetheless part of an individual’s self-identity, may mask the influence of central values on near future intentions. Centrality of values may be defined not only within an individual but also within a situation. For example, when medically treating a person from a rival group in a war, the competition is central and mercy is secondary, whereas in a hospital, the reverse is true. An interesting prediction that follows from CLT is that the secondary value will guide behavioral intentions in the near future more than in the distant future. Thus, in a war, benevolence will come into play in near-future plans more than in distant-future plans, leading people to be more merciful than would otherwise be expected. In his poem “After the Battle”, Victor Hugo tells of his father (“that hero with the sweetest smile”), an officer in the war against Spain, who encounters a Spanish soldier asking for something to drink. Although on the battlefield, and although the Spaniard tries to kill him, the officer orders: “All the same, give him something to drink.”

“People with Power are Better Liars”, Carney et al 2010:

But lying does not come without cost. Ordinary lie-tellers experience negative emotions, decrements in mental function, and physiological stress. Liars are also at risk of getting caught. Despite people’s best attempts to get away with their prevarications, lies are often behaviorally “leaked” through subtle changes in body movement and speech rate. Power, it seems, enhances the same emotional, cognitive, and physiological systems that lie-telling depletes. People with power enjoy positive emotions, increases in cognitive function (4-5), and physiological resilience such as lower levels of the stress hormone cortisol (6-7). Thus, holding power over others might make it easier for people to tell lies.

D. Keltner, D.H. Gruenfeld, C. Anderson, Psychol Rev. 110, 265-284 (2003). P.K. Smith, N.B. Jostmann, A.D. Galinsky, W. van Dijk, Psychol Sci. 19, 441-447 (2008). R.M. Sapolsky, S.C. Alberts, J. Altmann, Arch Gen Psychiatry 54, 1137-1143 (1997). S. Cohen, W.J. Doyle, A. Baum, Psychosom Med. 68, 414-420 (2006)

Participants were assigned to the role of “leader” or “subordinate” and engaged in a series of social interactions in which the leader had control over the subordinate’s monetary and social outcomes (9). …If the individual could successfully convince the experimenter (regardless of whether they were lying), they could keep the $100 in cash. All participants were then interviewed about whether they had stolen the money: half were lying and half were telling the truth. The interviewer (blind to experimental condition) asked all participants the same critical questions (e.g., “Did you steal the $100?”; “Why should I believe you?”). After the interview, participants completed measures of moral emotional feelings (rated emotion terms: bashful, guilty, troubled, scornful) and a computerized task assessing degree of cognitive impairment. All participants provided saliva samples before and after the experiment to assess changes in the stress hormone cortisol (9).
The interviews were videotaped and coded for two classic nonverbal markers of deception: one-sided shoulder shrugs and accelerated prosody (9). Low-power individuals showed the expected emotional, cognitive, physiological, and behavioral signs of deception; in contrast, powerful people demonstrated no evidence of lying across emotion, cognition, physiology, or behavior (see Figure). In other words, power acted as a buffer allowing the powerful to lie significantly more easily (less disturbing emotion, less cognitive impairment, less of a rise in the stress hormone cortisol) and more effectively (fewer nonverbal cues associated with lying). Only low-power individuals felt bad after lying (panel A), suffered cognitive impairment (panel B), spiked in levels of the stress hormone cortisol (panel C), and demonstrated nonverbal “leakage” (more one-sided shoulder shrugs and accelerated prosody; panel D). (9)

“Psychological perspectives on the fiduciary business”, Donald C. Langevoort

But the investment game has been manipulated in numerous ways that produce differing levels of trusting and greater selfishness. One of particular interest is the introduction of the possibility that, at the end of the game, the trustor will learn whether she gets something back but will not know whether this is the result of the trustee’s choice or some exogenous force – e.g., luck.34 Given the opportunity to hide behind the possibility that a return of nothing was just bad luck for the trustor, trustees predictably keep more for themselves, presumably rationalizing the outcome as fair in an uncertain world. The authors of one such study recently drew parallels to financial relationships between investors and securities professionals, because the financial markets generate a great deal of good and bad luck that obscures the value added by professional trustworthiness.35 Radu Vranceanu et al., Trust and Financial Trades: Lessons from an Investment Game Where Reciprocators Can Hide Behind Probabilities 6 (ESSEC Bus. Sch., Working Paper No. 10007, 2010), available at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1611666. See id. at 14-15. Unfortunately, high testosterone levels do not fit well with fiduciary characteristics like empathy and moral decision-making. Emerging research on the subject suggests that testosterone buffers emotional constraints on aggression and risk-taking, leading to a more “cold” utilitarian calculus and a greater willingness to do harm to gain a preferred outcome.49 See Dana R. Carney & Malia F. Mason, Decision Making and Testosterone: When the Ends Justify the Means, 46 J. EXPERIMENTAL SOC. PSYCHOL. 668, 668-69 (2010). As the authors point out, the ends need not necessarily be immoral. Id. at 670.
Power also seems to increase hypocrisy – insistence on adherence to strict norms by others, while enjoying far greater nimbleness in justifying one’s own departures on utilitarian or other rationalized grounds52 – and optimism and risk-taking.53 Of course, power may be gained in the first place by those skilled at rationalization and willing to take risks, in which case there is a dynamic feedback loop that is likely to generate increasing hypocrisy and hubris over time. See Joris Lammers et al., Power Increases Hypocrisy: Moralizing in Reasoning, Immorality in Behavior, 21 PSYCHOL. SCI. 737, 738 (2010). See Cameron Anderson & Adam D. Galinsky, Power, Optimism, and Risk-taking, 36 EUR. J. SOC. PSYCHOL. 511, 516 (2006). In turn, this pattern may connect to testosterone or other physiological effects. See Carney & Mason, supra note 49, at 668.

“The Dirt on Coming Clean: Perverse Effects of Disclosing Conflicts of Interest”, Cain et al 2005

Although disclosure is often proposed as a potential solution to these problems, we show that it can have perverse effects. First, people generally do not discount advice from biased advisors as much as they should, even when advisors’ conflicts of interest are disclosed. Second, disclosure can increase the bias in advice because it leads advisors to feel morally licensed and strategically encouraged to exaggerate their advice even further. As a result, disclosure may fail to solve the problems created by conflicts of interest and may sometimes even make matters worse. …In the domain of medicine, for example, research shows that while many people are ready to acknowledge that doctors might generally be affected by conflicts of interest, few can imagine that their own doctors would be affected (Gibbons et al. 1998). Indeed, it is even possible that disclosure could sometimes increase rather than decrease trust, especially if the person with the conflict of interest is the one who issues the disclosure. Research suggests that when managers offer negative financial disclosures about future earnings, they are regarded as more credible agents, at least in the short term (Lee, Peterson, and Tiedens 2004; Mercer, forthcoming). Thus, if a doctor tells a patient that her research is funded by the manufacturer of the medication that she is prescribing, the patient might then think (perhaps rightly) that the doctor is going out of her way to be open or that she is “deeply involved” and thus knowledgeable. Thus, disclosure could cause the estimator to place more rather than less weight on the advisor’s advice. Third, even when estimators realize that they should make some adjustment for the conflict of interest that is disclosed, such adjustments are likely to be insufficient. As a rule, people have trouble unlearning, ignoring, or suppressing the use of knowledge (such as biased advice) even if they are aware that it is inaccurate (Wilson and Brekke 1994). 
Research on anchoring, for example, shows that quantitative judgments are often drawn toward numbers (the anchors) that happen to be mentally available. This effect holds even when those anchors are known to be irrelevant (Strack and Mussweiler 1997; Tversky and Kahneman 1974), unreliable (Loftus 1979), or even manipulative (Galinsky and Mussweiler 2001; Hastie, Schkade, and Payne 1999). Research on the “curse of knowledge” (Camerer, Loewenstein, and Weber 1989) shows that people’s judgments are influenced even by information they know they should ignore. And research on what has been called the “failure of evidentiary discreditation” shows that when the evidence on which beliefs were revised is totally discredited, those beliefs do not revert to their original states but show a persistent effect of the discredited evidence (Skurnik, Moskowitz, and Johnson 2002; Ross, Lepper, and Hubbard 1975). Furthermore, attempts to willfully suppress undesired thoughts can lead to ironic rebound effects, in some cases even increasing the spontaneous use of undesired knowledge (Wegner 1994). …More interesting, and as predicted, all three measures also reveal that disclosure led to greater distortion of advice. The amount that advisors exaggerated, calculated by subtracting advisors’ own personal estimates from their public suggestions, was significantly greater in the high/disclosed condition than in either of the other two conditions (p<0.05) and significantly greater by the other two measures as well: advisor suggestion minus actual jar values and advisor suggestion minus the average of personal estimates in the accurate condition (p<0.05 for both). In the accurate condition, for example, advisors provided estimators with suggestions of jar values that were, on average, within $1 of their own personal estimates. 
In the high/undisclosed condition, however, advisors gave suggestions that were $3.32 greater than their own personal estimates, and in the high/disclosed condition, they gave suggestions that were inflated more than twice as much, at more than $7 above their own personal estimates. Disclosure, it appears, did lead advisors to provide estimators with more biased advice. …Although disclosures did increase discounting by estimators, albeit not significantly, this discounting was not sufficient to offset the increase in the bias of the advice they received. As Table 6 (fourth row) shows, estimator discounting increased, on average, less than $2 from the accurate condition to the high/undisclosed condition and less than $2.50 from the high/undisclosed condition to the high/disclosed condition. However, Table 5 (second row) shows that suggestions increased, on average, almost $4 from the accurate condition to the high/undisclosed condition and increased $4 again from the high/undisclosed condition to the high/disclosed condition. Thus, while estimators in the high/disclosed condition discounted suggestions about $4 more than did estimators in the accurate condition, the advice given in the high/disclosed condition was almost $8 higher than advice given in the accurate condition. Instead of correcting for bias, estimates were approximately 28 percent higher in the high/disclosed condition than in the accurate condition (first row of Table 6).
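The back-of-the-envelope arithmetic of why disclosure backfired can be reproduced directly from the dollar figures quoted above (values rounded as in the text):

```python
# Approximate per-condition figures from Cain et al 2005, in dollars:
# how much more advisors inflated suggestions relative to the accurate
# condition, and how much more estimators discounted in response.
extra_bias_vs_accurate     = {"high_undisclosed": 4.0, "high_disclosed": 8.0}
extra_discount_vs_accurate = {"high_undisclosed": 2.0, "high_disclosed": 4.0}

# Net distortion of final estimates: added bias minus added discounting.
net = {c: extra_bias_vs_accurate[c] - extra_discount_vs_accurate[c]
       for c in extra_bias_vs_accurate}
# Disclosure roughly doubles the net distortion rather than removing it.
```

Discounting rises with disclosure, but only half as fast as advisor bias does, so estimates end up further from the truth in the disclosed condition.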

“When Sunlight Fails to Disinfect: Understanding the Perverse Effects of Disclosing Conflicts of Interest”, Cain et al 2011

Studies 1 and 2 examine psychological mechanisms (strategic exaggeration, moral licensing) by which disclosure can lead advisors to give more-biased advice. Study 3 shows that disclosure backfires when advice recipients who receive disclosure fail to sufficiently discount and thus fail to mitigate the adverse effects of disclosure on advisor bias. Study 4 identifies one remedy for inadequate discounting of biased advice: explicitly and simultaneously contrasting biased advice with unbiased advice. …Even in one-shot dictator games (Forsythe et al. 1994), research has long shown that many people will share resources and show self-restraint toward anonymous others (Camerer 2003), especially when it is common knowledge that the recipient expects such benevolence (Dana, Cain, and Dawes 2006). Likewise, research on cheating behavior shows that people do not tend to cheat as much as they can get away with, only to the extent that they can rationalize to themselves (Mazar, Amir, and Ariely 2008). …When the welfare of others is a consideration, disclosure might reduce moral concerns. Prior research has suggested that when people demonstrate ethical behavior, they often become more likely to subsequently exhibit ethical lapses (Jordan, Mullen, and Murnighan 2009; Zhong, Liljenquist, and Cain 2009). For example, people who are given an opportunity to demonstrate their own lack of prejudice are more likely to subsequently display discriminatory behavior (Monin and Miller 2001). Likewise, after a conflict of interest has been disclosed, advisors may feel that advisees have been warned and that advisors are “morally licensed” to provide biased advice. …Disclosure of a conflict of interest can also reduce the perceived immorality of giving biased advice by signaling that bias is widespread and therefore less aberrant (Schultz et al. 2007). If advice recipients’ expectations affect advisor behavior (Dana et al. 
2006), then the lowered expectations for honesty that come with disclosure might allow an advisor to rationalize providing biased advice because that is exactly what the advisee expects, or should expect, to receive. …Why is the call for disclosure so popular despite how it can backfire? One possible explanation is that most people are simply not aware of disclosure’s pitfalls. At first glance, disclosure seems like a sensible remedy to a situation in which one party possesses an otherwise hidden incentive to mislead another party. A more cynical explanation would play on the Chicago Theory of Regulation (Becker 1983; Peltzman 1976; Stigler 1971), which posits that regulation typically exists not for the general benefit of society but for the benefit of the regulated groups. These entities might be aware of the ineffectiveness of disclosure but accept it because it benefits them. For example, even though consumer advocates fought hard for warning labels on cigarette packages, the tobacco industry has defended itself against litigation since then by citing the warning labels as evidence that consumers knew the risks. “What was intended as a burden on tobacco became a shield instead” (Action on Smoking and Health 2001). Moreover, even the regulators may be attracted to disclosure if they see it as absolving them of responsibility for protecting consumers by ostensibly empowering consumers to protect themselves. Disclosure may also be perceived as the lesser of evils for those who might otherwise face more substantive regulation. For example, pharmaceutical firms are often strong proponents of disclosure laws, since it is better for them (and for researchers who receive their funding) if researchers must disclose financial ties to the industry rather than actually having to sever them. This all suggests that disclosure may be problematic for more reasons than those identified by the experiments reported above. 
It would be a mistake, however, to conclude that disclosure is always counterproductive, as some recent laboratory research illustrates (Church and Kuang 2009; Koch and Schmidt 2009). Research on practical examples of disclosure, summarized in Full Disclosure (Fung, Graham, and Weil 2007), also shows that disclosure can have real beneficial effects. For example, following a spate of highly publicized SUV rollovers, regulations that required auto manufacturers to publicly disclose rollover ratings led to significant and rapid changes in auto design, resulting in a general decrease in the rollover risk for SUVs. Disclosure is likely to be helpful when information is disclosed in an easily digestible form (or is made available to intermediaries, e.g., ratings companies, who process it for consumers) and when it is clear how one should respond to the disclosed information. The rollover ratings met both criteria: the ratings were represented simply as one to five stars, making it easy for consumers to compare—that is, evaluate jointly—the relative rollover risks of various SUVs. Even when information isn’t presented in such a simple form, disclosure is likely to prove helpful when the recipients are savvy repeat-players who know what to do with the disclosed information, such as institutional investors, experienced attorneys, or managers in government agencies (Church and Kuang 2009; Malmendier and Shanthikumar 2007). Disclosure is much less likely to help individuals such as personal investors, purchasers of insurance, home buyers, or patients, who are unlikely to possess the knowledge or experience to know how much they should discount advice or whether they should get a second opinion in a given conflict-of-interest situation (Malmendier and Shanthikumar 2007).

“Power Posing: Brief Nonverbal Displays Affect Neuroendocrine Levels and Risk Tolerance”, Carney et al 2010

As predicted, results revealed that posing in high-power (vs. low-power) nonverbal displays caused neuroendocrine and behavioral changes for both male and female participants: High-power posers experienced elevations in testosterone, decreases in cortisol, and increased feelings of power and tolerance for risk; low-power posers exhibited the opposite pattern. In short, posing in powerful displays caused advantaged and adaptive psychological, physiological, and behavioral changes – findings that suggest that embodiment extends beyond mere thinking and feeling, to physiology and subsequent behavioral choices. …The neuroendocrine profiles of the powerful differentiate them from the powerless, on two key hormones—testosterone and cortisol. In humans and other animals, testosterone levels both reflect and reinforce dispositional and situational status and dominance; internal and external cues cause testosterone to rise, increasing dominant behaviors, and these behaviors can elevate testosterone even further (Archer, 2006; Mazur & Booth, 1998). For example, testosterone rises in anticipation of a competition and as a result of a win, but drops following a defeat (e.g., Booth, Shelley, Mazur, Tharp, & Kittok, 1989), and these changes predict the desire to compete again (Mehta & Josephs, 2006). In short, testosterone levels, by reflecting and reinforcing dominance, are closely linked to adaptive responses to challenges. Archer, J. (2006). Testosterone and human aggression: An evaluation of the challenge hypothesis. Neuroscience & Biobehavioral Reviews, 30, 319–345.

Mazur, A., & Booth, A. (1998). Testosterone and dominance in men. Behavioral & Brain Sciences, 21, 353–397

Booth, A., Shelley, G., Mazur, A., Tharp, G., & Kittok, R. (1989). Testosterone and winning and losing in human competition. Hormones and Behavior, 23, 556–571.

Mehta, P.H., & Josephs, R.A. (2006). Testosterone change after losing predicts the decision to compete again. Hormones and Behavior, 50, 684–692 Power is also linked to the stress hormone cortisol: Power holders show lower basal cortisol levels and lower cortisol reactivity to stressors than powerless people do, and cortisol drops as power is achieved (Abbott et al., 2003; Coe, Mendoza, & Levine, 1979; Sapolsky, Alberts, & Altmann, 1997). Although short-term and acute cortisol elevation is part of an adaptive response to challenges large (e.g., a predator) and small (e.g., waking up), the chronically elevated cortisol levels seen in low-power individuals are associated with negative health consequences, such as impaired immune functioning, hypertension, and memory loss (Sapolsky et al., 1997; Segerstrom & Miller, 2004). Low-power social groups have a higher incidence of stress-related illnesses than high-power social groups do, and this is partially attributable to chronically elevated cortisol (Cohen et al., 2006). Thus, the power holder’s typical neuroendocrine profile of high testosterone coupled with low cortisol—a profile linked to such outcomes as disease resistance (Sapolsky, 2005) and leadership abilities (Mehta & Josephs, 2010)—appears to be optimally adaptive. Abbott, D.H., Keverne, E.B., Bercovitch, F.B., Shively, C.A., Mendoza, S.P., Saltzman, W., et al. (2003). Are subordinates always stressed? A comparative analysis of rank differences in cortisol levels among primates. Hormones and Behavior, 43, 67–82

Coe, C.L., Mendoza, S.P., & Levine, S. (1979). Social status constrains the stress response in the squirrel monkey. Physiology & Behavior, 23, 633–638

Sapolsky, R.M., Alberts, S.C., & Altmann, J. (1997). Hypercortisolism associated with social subordinance or social isolation among wild baboons. Archives of General Psychiatry, 54, 1137–1143

Segerstrom, S., & Miller, G. (2004). Psychological stress and the human immune system: A meta-analytic study of 30 years of inquiry. Psychological Bulletin, 130, 601–630

Cohen, S., Schwartz, J.E., Epel, E., Kirschbaum, C., Sidney, S., & Seeman, T. (2006). Socioeconomic status, race, and diurnal cortisol decline in the Coronary Artery Risk Development in Young Adults (CARDIA) study. Psychosomatic Medicine, 68, 41–50

Sapolsky, R.M. (2005). The influence of social hierarchy on primate health. Science, 308, 648–652. It is unequivocal that power is expressed through highly specific, evolved nonverbal displays. Expansive, open postures (widespread limbs and enlargement of occupied space by spreading out) project high power, whereas contractive, closed postures (limbs touching the torso and minimization of occupied space by collapsing the body inward) project low power. All of these patterns have been identified in research on actual and attributed power and its nonverbal correlates (Carney, Hall, & Smith LeBeau, 2005; Darwin, 1872/2009; de Waal, 1998; Hall, Coats, & Smith LeBeau, 2005). Hall, J.A., Coats, E.J., & Smith LeBeau, L. (2005). Nonverbal behavior and the vertical dimension of social relations: A meta-analysis. Psychological Bulletin, 131, 898–924.

“Reality at Odds With Perceptions: Narcissistic Leaders and Group Performance”, Nevicka et al 2011

Despite people’s positive perceptions of narcissists as leaders, it was previously unknown if and how leaders’ narcissism is related to the performance of the people they lead. In this study, we used a hidden-profile paradigm to investigate this question and found evidence for discordance between the positive image of narcissists as leaders and the reality of group performance. We hypothesized and found that although narcissistic leaders are perceived as effective because of their displays of authority, a leader’s narcissism actually inhibits information exchange between group members and thereby negatively affects group performance. Our findings thus indicate that perceptions and reality can be at odds and have important practical and theoretical implications. …For example, narcissists tend to overestimate their intelligence (Campbell, Rudich, & Sedikides, 2002), creativity (Goncalo, Flynn, & Kim, 2010), academic abilities (Robins & Beer, 2001), and leadership capabilities (Judge, LePine, & Rich, 2006). Generally, other people do not agree with narcissists’ idealized self-images and perceive narcissists as arrogant, egocentric, overly dominant, and even hostile (Paulhus, 1998). However, the context of leadership constitutes a notable exception in which narcissists tend to be judged positively. For example, individuals with high levels of narcissism receive higher leadership ratings than individuals with low levels of narcissism do (Judge et al., 2006) and tend to emerge as leaders in groups (Brunell et al., 2008; Nevicka, De Hoogh, Van Vianen, Beersma, & McIlwain, 2011). In addition, higher narcissism in U.S. presidents is associated with more positive evaluations of their leadership (Deluga, 1997). It is therefore not surprising that narcissistic characteristics are ascribed to many prominent leaders, such as Nicolas Sarkozy (De Sutter & Immelman, 2008) and Steve Jobs (Robins & Paulhus, 2001). 
…Of the two prior studies investigating this question, one found no effects of narcissistic leadership on performance (Brunell et al., 2008), and the other showed that organizational performance was merely more volatile, but no worse or better, because of narcissistic leaders’ risky decision making (Chatterjee & Hambrick, 2007). Unfortunately, neither of these studies examined the effects of narcissistic leaders on group dynamics, communication, and information exchange, factors that are critically important to group decision making (Stasser, 1999), group performance (De Dreu, Nijstad, & van Knippenberg, 2008), and organizational effectiveness (Zaccaro, Rittman, & Marks, 2001)…Prior research has hinted at a potentially negative effect of narcissistic individuals on group and organizational performance. For example, in one study, individuals with high levels of narcissism allocated more resources to themselves than did individuals with low levels of narcissism—at a long-term cost to other group members (Campbell, Bush, Brunell, & Shelton, 2005). However, prior research did not provide a clear link between leader’s narcissism and group or organizational performance.

“How quickly can you detect it? Power facilitates attentional orienting”, Slabu et al

Participants were assigned to a high power or control role and then performed a computerised spatial cueing task in which they were required to direct their attention to a target that had been preceded by either a valid or invalid location cue. Compared to participants in the control condition, power-holders were better able to override the misinformation provided by invalid cues. This advantage occurred only at 500 ms stimulus onset asynchrony (SOA), whereas at 1000 ms SOA, when there was more time to prepare a response, no differences were found. These findings are taken to support the growing idea that social power affects cognitive flexibility…Post-test questionnaires confirmed that these effects could not be attributed to differences in positive affect or self-efficacy. We suggest that power most affected performance during invalid trials because these required a greater degree of cognitive flexibility; individuals needed to ignore the cue and unexpectedly orient attention towards the opposite location. In line with this account, the effect was only evident at relatively short SOAs where participants had little time to prepare an appropriate response. At longer SOAs or on valid trials, the need for flexibility was lower which may explain why no effect was seen. Social power affects the way in which information is attended and discriminated (Fiske, 1993; Guinote, 2007a). Power holders have more resources and fewer constraints which gives them more attentional resources and allows them to discriminate between relevant and irrelevant information (Guinote, 2007a; Overbeck & Park, 2001). In contrast, powerless people face more constraints and environmental threats (Keltner, Gruenfeld, & Anderson, 2003). Their dependency encourages them to attend to multiple cues in the environment, in search of any potentially useful information. 
Thus, they treat information more equally, attending not only to the central information but also to the peripheral or distracting information (Slabu & Guinote, 2010). This overflow in information processing makes powerless people less able to respond promptly to specific situational demands, and induces attentional inflexibility (Guinote, 2007a). Fiske, S. T. (1993). Controlling other people: The impact of power on stereotyping. American Psychologist, 48(6), 621-628. doi: 10.1037/0003-066X.48.6.621

Guinote, A. (2007a). Behaviour variability and the Situated Focus Theory of Power. European Review of Social Psychology, 18, 256-295. doi: 10.1080/10463280701692813

Overbeck, J. R., & Park, B. (2001). When power does not corrupt: Superior individuation processes among powerful perceivers. Journal of Personality and Social Psychology, 81(4), 549-565. doi: 10.1037/0022-3514.81.4.549

Slabu, L., & Guinote, A. (2010). Getting what you want: Power increases the accessibility of active goals. Journal of Experimental Social Psychology, 46(2), 344-349. doi: 10.1016/j.jesp.2009.10.013 Research using basic cognitive paradigms supports these claims. For example, Guinote (2007b) showed that high power participants are better able to focus their attention to target objects and ignore the influence of irrelevant background distracters (see also Smith & Trope, 2006). A further outcome of the cognitive flexibility experienced by powerful individuals is the increased ability to adjust their actions in line with changing contextual cues. This includes the ability to suppress dominant responses and implement non-dominant ones when the task calls for non-dominant responses (Guinote, 2007b). Guinote, A. (2007b). Power affects basic cognition: Increased attentional inhibition and flexibility. Journal of Experimental Social Psychology, 43(5), 685-697. doi: 10.1016/j.jesp.2006.06.008

Smith, P. K., & Trope, Y. (2006). You focus on the forest when you’re in charge of the trees: Power priming and abstract information processing. Journal of Personality and Social Psychology, 90(4), 578-596. doi: 10.1037/0022-3514.90.4.578 For example, several studies have shown that having power increases the ability to resolve conflicts and plan action sequences; power-holders are immune to stimulus-response compatibility effects, and are better able to switch attention between the holistic and detailed components of stimuli, as changing task demands dictate (Guinote, 2007b; Smith, Jostmann, Galinsky, & van Dijk, 2008)… More broadly, our findings build on those reported by Willis, Rodriguez-Bailon and Lupianez (2011) who showed that powerful individuals can make a better use of cues present in the environment to increase their executive control (see also Smith, et al., 2008). Their data support the idea that social power can impact rudimentary processes associated with spatial orienting and control. Willis, G. B., Rodríguez-Bailón, R., Lupiáñez, J. (2011). The boss is paying attention: Power Affects the Functioning of the Attentional Networks. Social Cognition, 29(2), 166-181.
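The valid/invalid spatial-cueing paradigm these attention studies rely on has a simple trial structure; a minimal sketch (cue validity, sides, and SOA values here are illustrative assumptions, not Slabu et al's exact parameters):

```python
import random

def cueing_trial(rng, p_valid=0.5, soa_ms=500):
    """One Posner-style spatial-cueing trial: a cue flashes on one side,
    then after the stimulus onset asynchrony (SOA) a target appears at the
    cued (valid) or opposite (invalid) location. Invalid trials at short
    SOAs are where attention must be re-oriented, i.e. where the
    cognitive-flexibility advantage of power-holders showed up."""
    cue = rng.choice(["left", "right"])
    valid = rng.random() < p_valid
    target = cue if valid else ("right" if cue == "left" else "left")
    return {"cue": cue, "target": target, "valid": valid, "soa_ms": soa_ms}
```

Comparing response times on invalid versus valid trials, separately at 500 ms and 1000 ms SOA, yields the reorienting cost that differed between power conditions.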

“You focus on the forest when you’re in charge of the trees: Power priming and abstract information processing”, Smith & Trope 2006

Elevated power increases the psychological distance one feels from others, and this distance, according to construal level theory (Y. Trope & N. Liberman, 2003), should lead to more abstract information processing. Thus, high power should be associated with more abstract thinking—focusing on primary aspects of stimuli and detecting patterns and structure to extract the gist, as well as categorizing stimuli at a higher level—relative to low power. In 6 experiments involving both conceptual and perceptual tasks, priming high power led to more abstract processing than did priming low power, even when this led to worse performance. Experiment 7 revealed that in line with past neuropsychological research on abstract thinking, priming high power also led to greater relative right-hemispheric activation. Trope, Y., & Liberman, N. (2003). Temporal construal. Psychological Review, 110, 403–421 Though the abstraction hypothesis has not been directly tested, there is some research that supports it. For example, in Overbeck and Park’s (2001) experiments, high- and low-power participants interacted via e-mail with several different targets holding the opposite power role and received various kinds of information from them. Some of this information was relevant to the task at hand (e.g., Jim waited until the last minute to try to schedule a meeting), and some was irrelevant (e.g., Jim just started a jazz ensemble). Not only did participants in the high-power role recall more information overall than did the low-power participants, but they were especially superior at recalling relevant information. Thus, high-power participants focused more on primary information, a hallmark of abstract thinking. Overbeck, J. R., & Park, B. (2001). When power does not corrupt: Superior individuation processes among powerful perceivers. Journal of Personality and Social Psychology, 81, 549–565.
Portuguese participants used more abstract language to describe both their ethnic group and an outgroup when they were part of the majority (i.e., a higher power group) than when they were part of the minority (i.e., a lower power group; Guinote, 2001). Similarly, participants who played the role of judges during a task used more abstract, trait-like language in referring to themselves than did participants who were workers (Guinote, Judd, & Brauer, 2002). Guinote, A. (2001). The perception of group variability in a non-minority and a minority context: When adaptation leads to outgroup differentiation. British Journal of Social Psychology, 40, 117–132.

Guinote, A., Judd, C. M., & Brauer, M. (2002). Effects of power on perceived and objective group variability: Evidence that more powerful groups are more variable. Journal of Personality and Social Psychology, 82, 708–721 Powerholders, more than the powerless, should thus be guided by their primary, overriding goals rather than by subordinate, incidental concerns. This would mean that powerholders are more likely to act in accordance with their core attitudes and values (Chen et al., 2001). Indeed, individuals placed in high-power roles or those higher in personality dominance have been found to express their true attitudes more during a discussion than have participants lower in power or dominance (Anderson & Berdahl, 2002). Such goal-driven behavior also has implications for stereotyping. Powerholders should be more likely to stereotype those beneath them when such stereotyping is seen as an effective means to their goals. Evidence for this has already been found in the context of the Social Influence Strategy × Stereotype Match hypothesis (Vescio, Snyder, & Butz, 2003). Chen, S., Lee-Chai, A. Y., & Bargh, J. A. (2001). Relationship orientation as a moderator of the effects of social power. Journal of Personality and Social Psychology, 80, 173-187

Anderson, C., & Berdahl, J. L. (2002). The experience of power: Examining the effects of power on approach and inhibition tendencies. Journal of Personality and Social Psychology, 83, 1362–1377

Vescio, T. K., Snyder, M., & Butz, D. A. (2003). Power in stereotypically masculine domains: A social influence strategy × stereotype match model. Journal of Personality and Social Psychology, 85, 1062–1078.

“Powerful People Make Good Decisions Even When They Consciously Think”, Smith et al 2008

Thought condition again had different effects on performance for the two priming conditions, F(1, 161) = 4.67, prep = .91, ηp² = .03 (see Fig. 1). Low-power participants performed significantly better after unconscious thought than after conscious thought, prep = .96. High-power participants performed equally well in both thought conditions and did not differ from low-power participants in the unconscious-thought condition, Fs < 1. Furthermore, our manipulations did not significantly affect participants’ confidence in and certainty of their attitudes, preps < .70, their reported effort or motivation, preps < .84, or the amount of apartment information they correctly recalled, Fs < 1. Differences in performance could not be attributed to depth of processing. When given problems requiring a complex decision, high-power participants were equally good at identifying the better choice after conscious versus unconscious thought, whereas the performance of low-power participants suffered when they consciously deliberated. These results provide further evidence that conscious and unconscious thought differ in the type of processing that occurs. The powerful seem to be able to handle so many impactful decisions, without making excessive errors, in part because they generally think more abstractly.

“Cooperation and Status in Organizations”, Eckel et al 2010

We further manipulate status by allocating the central position to the person who earns the highest, or the lowest, score on a trivia quiz. These high-status and low-status treatments are compared, and we find that the effect of organizational structure – the existence of a central position – depends on the status of the central player. Higher status players are attended to and mimicked more systematically. Punishment has differential effects in the two treatments, and is least effective in the high-status case. In this study, we ask whether social status serves as a useful mechanism for solving public goods problems. Status can act as a coordinating device, as it does in pure coordination games, with higher-status individuals more likely to be mimicked (followed) by others. In addition, in a setting with costly punishment, social status may enhance the effectiveness of punishment and reduce anti-social punishment, enhancing overall efficiency…Status is awarded by the experimenter using scores on a general-knowledge trivia quiz that is unrelated to the experimental game. The central position is given to either the high scorer (high-status treatment) or the low scorer (low-status treatment). Subjects play two games: a standard linear voluntary contribution mechanism (VCM) and a VCM with costly punishment. We find that higher-status central players are more likely to be “followed” in the key situation when the peripheral player is contributing less than the central player. We also find that high status central players punish less, and peripheral players are more responsive to punishment by a higher-status central player…Our results suggest that punishment, while important to enforcing cooperative norms in many social dilemmas, does not boost contributions in all instances. Punishment is used more readily by low-status groups, and increases overall contributions only among low-status groups. 
However this seems to be primarily a main effect of the punishment institution, as there is little evidence that punishment tokens levied actually increase contributions in low-status groups; indeed there is weak evidence that the response to punishment is greater in high-status groups. Retaliatory punishment of central players is seen only in the low-status groups. An unexpected consequence of these differences is that punishment is not efficiency-enhancing when the status of the central player is high. Costly punishment is used less in these groups, but contributions are not higher than without punishment. This generates a flat contribution pattern, and no differences between the VCM with and without punishment opportunities. At the other extreme, low status central players punish and are heavily punished, and make significantly less money in the experiment than any other type of subject. But the reaction of low status groups to the new environment generates a significant increase in the provision of the public good…Second, high-status agents may have a strong influence on others, as others seek their company and guidance, affecting choices and decision making by lower-status individuals. Thus high-status individuals are more likely to be mimicked or deferred to (Ball et al. 2001, Kumru and Vesterlund 2005). Imitating or learning from higher-status exemplars can help solve coordination problems (Eckel and Wilson 2007); the behavior of the higher-status individual provides an example that is observed and can be followed by others.

Ball, S., C. Eckel, P. Grossman and W. Zame (2001) “Status in markets” The Quarterly Journal of Economics 116, 161-188
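The linear VCM with costly punishment described in the excerpt has a simple payoff structure; the sketch below uses standard illustrative parameters (endowment 20, MPCR 0.4, a 1:3 punishment technology), not the actual values used by Eckel et al 2010.

```python
# Sketch of a linear voluntary contribution mechanism (VCM) with a
# costly-punishment stage. Parameters are standard textbook values,
# NOT those of Eckel et al 2010.
ENDOWMENT = 20      # tokens per player per round
MPCR = 0.4          # marginal per-capita return from the public good
PUNISH_COST = 1     # cost to the punisher per token assigned
PUNISH_IMPACT = 3   # payoff reduction to the target per token received

def vcm_payoffs(contributions):
    """Stage 1: each player keeps (endowment - c_i) plus a share of the pot."""
    pot = sum(contributions)
    return [ENDOWMENT - c + MPCR * pot for c in contributions]

def punishment_stage(payoffs, punishment):
    """Stage 2: punishment[i][j] = tokens player i assigns to player j."""
    out = list(payoffs)
    n = len(payoffs)
    for i in range(n):
        for j in range(n):
            if i != j:
                out[i] -= PUNISH_COST * punishment[i][j]    # punishing is costly
                out[j] -= PUNISH_IMPACT * punishment[i][j]  # being punished costs more
    return out

payoffs = vcm_payoffs([20, 20, 20, 0])   # three full contributors, one free-rider
# -> free-rider earns ~44 vs ~24 for each contributor: defection pays
punish = [[0, 0, 0, 2],
          [0, 0, 0, 2],
          [0, 0, 0, 2],
          [0, 0, 0, 0]]                  # each contributor spends 2 tokens on the free-rider
after = punishment_stage(payoffs, punish)
# -> contributors ~22 each, free-rider ~26: punishment narrows the gap but,
#    being costly to everyone, need not be efficiency-enhancing
```

Since the MPCR is below 1, each token contributed returns only 0.4 to the contributor while the group as a whole gains 1.6, which is what makes free-riding the dominant strategy absent punishment or status effects.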

Kumru, C. and L. Vesterlund (2005) “The effect of status on voluntary contribution” Working paper, Department of Economics, University of Pittsburgh.

Eckel, C. and R. Wilson (2007) “Social learning in coordination games: Does status matter?” Experimental Economics 10, 317-330

Gil-White and Henrich (2001) argue that attending to and mimicking high status individuals is a valuable strategy in a world where successful individuals may have superior information. Cultural transmission is enhanced when higher-status, successful individuals are copied by others. Copying successful individuals has evolutionary payoffs, so that humans may have evolved a preference for paying attention to and learning from high-status agents (see also Boyd and Richerson 2002, Boyd et al. 2003). Bala and Goyal (1998) capture the essence of the idea of attending to a high-status agent in a model where the presence of a commonly-observed agent, which they term the “royal family”, can have a significant impact on which among multiple equilibria is selected…Experimental research confirms the tendency of individuals to mimic high-status agents. Eckel and Wilson (2001) show that a commonly observed agent can influence equilibrium selection in a coordination game…Imitation makes the population of subjects more likely to reach a Pareto-superior, but risk-dominated, equilibrium, an outcome that rarely occurs otherwise (Cooper et al. 1990). Kumru and Vesterlund (2005) show a related result, with high-status first-movers more likely to be mimicked in a 2-person sequential voluntary contribution game. In their setting, high status enhances the ability of leaders to increase total contributions.

Gil-White, F. and J. Henrich (2001) “The evolution of prestige: Freely conferred deference as a mechanism for enhancing the benefits of cultural transmission” Evolution and Human Behavior 22, 165-196

Boyd, R., and P. Richerson (2002) “Group beneficial norms spread rapidly in a structured population” Journal of Theoretical Biology 215, 287-296

Boyd, R., H. Gintis, S. Bowles, and P. Richerson (2003) “The evolution of altruistic punishment” Proceedings of the National Academy of Sciences (USA) 100, 3531-3535.

Bala, V. and S. Goyal (1998) “Learning from neighbors” Review of Economic Studies 65, 595-621

Eckel, C. and R. Wilson (2001) “Social learning in a social hierarchy: An experimental study” Rice University, unpublished manuscript

Cooper, R., D. DeJong, R. Forsythe and T. Ross (1990) “Selection criteria in coordination games: Some experimental results” American Economic Review 80, 218-233
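The “Pareto-superior, but risk-dominated, equilibrium” situation described above is the classic stag hunt; a minimal sketch (payoffs are illustrative, not taken from any of the cited experiments) shows why uncoordinated play tends toward the safe equilibrium unless something, such as a commonly observed high-status player, coordinates expectations.

```python
# Stag hunt: two pure-strategy equilibria, one Pareto-superior ("stag"),
# one risk-dominant ("hare"). Payoffs are illustrative only.
# Entries are (row payoff, column payoff).
PAYOFFS = {
    ("stag", "stag"): (4, 4),  # both cooperate: best joint outcome
    ("stag", "hare"): (0, 3),  # lone cooperator gets the worst payoff
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),  # safe outcome, 3 guaranteed
}

def best_reply(opponent_action):
    """Row player's payoff-maximizing action against a known opponent action."""
    return max(("stag", "hare"),
               key=lambda a: PAYOFFS[(a, opponent_action)][0])

# Both symmetric profiles are Nash equilibria:
assert best_reply("stag") == "stag"   # if the other hunts stag, join them
assert best_reply("hare") == "hare"   # if the other plays safe, play safe
# Against an uncertain opponent, "hare" guarantees 3 while "stag" risks 0,
# so play drifts to the risk-dominant (hare, hare) outcome; a visible
# high-status player choosing "stag" can tip the group to the better equilibrium.
```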


Another good set of studies focuses on the behavior of the rich and powerful.

2 of the primary researchers wrote in a 2012 NYT op-ed, “Greed Prevents Good”:

Now, some 25 years later, seven studies we conducted [Piff et al 2012], some on this same campus, have proved the opposite, that greed, far from being good, undermines moral behavior….Unethical behaviors among the wealthy are as timeless and pervasive as the ethical principles that try to rein them in. Our research pinpointed why wealth produces unethical conduct with such regularity: greed. Across studies, wealthier subjects expressed the conviction that greed is moral, echoing [Ivan] Boesky and Gekko and their intellectual companions (e.g., Ayn Rand). And it was their greed-is-good attitudes, we found, that gave rise to their unethical behavior. Wealth gives rise to a me-first mentality, and the ideology of unbridled self-interest serves as its lofty justification. Greg Smith is to be applauded for calling out the culture of greed at Goldman Sachs. It is a knockout blow, one as important as Ivan Boesky’s proclamation nearly a generation ago. Nobel laureate Milton Friedman famously argued that the single social responsibility of business is to increase profits as long as “it stays within the rules of the game.” The problem is, when greed for profits is the bottom line, the rules may fall by the wayside.

Relevant studies: