Though Hanno Sauer's book takes its name from Daniel Kahneman's Thinking, Fast and Slow, its central thesis is that Kahneman's distinction is too crude. Famously, Kahneman divided the cognitive mind into two parts: System I, a bundle of automatic, unconscious, modular processes that generate fast intuitive judgments; and System II, the system we use to reach judgments through slow, conscious, effortful thought. Sauer argues that we need a further distinction, originally proposed by the psychologist Keith Stanovich (2011): "System II" is composed of two dissociable sub-processes.

One sub-process, which retains the name "System II," enables us to activate, sustain, and manipulate representations in working memory. This is the kind of cognitive functioning measured by intelligence tests: "System II houses raw processing power, with little or no regard to how that power is used" (21-22). Hence the need for the second sub-process, which monitors and guides how System II's processing power is used. Sauer calls this "System III." System III's main job is to perform intuition override: to call an intuitive judgment into question, direct System II to critically reflect upon it, and -- if reflection shows the intuition to be unjustified -- overturn it (2). In a slogan: System II is clever, but it needs System III to be wise.[1]

The best evidence for the psychological reality of this distinction comes from cases of rationalization. The rationalizer uses conscious, complex reasoning to defend her preexisting view (System II) while failing to subject this view itself to reflective, critical scrutiny (System III). Sauer points to Jonathan Haidt's studies on moral dumbfounding (Haidt 2001) as a paradigm case (29-31). Haidt presented subjects with descriptions of harmless violations of moral taboos: for example, two siblings having consensual sex with contraception. The typical subject would try to justify their intuition that the act is wrong by pointing to some harm it would cause, only to be reminded by the experimenter that, as the case was described, the relevant harm would not occur. Crucially, even after all such attempts at justification failed, most subjects still stood by their intuition: "It's just wrong!" What's going on here? Sauer's plausible diagnosis is that these subjects are recruiting System II's intellectual horsepower in service of their intuitive judgments, without engaging the intuition-questioning capacities of System III. By showing how System III can break down while System II is hard at work, moral dumbfounding and other examples of uncritical rationalization provide evidence that these processes are distinct.

The primary aim of Sauer's book is to defend this "triple process theory" and show that it can help us to better understand moral cognition in particular. (Stanovich, the creator of the triple process theory, does not focus on moral judgment.) Though moral judgment involves all three kinds of cognition, Sauer argues that "the key to competent moral judgment is how subjects manage and, if necessary, override their intuitions" (43). Thus his main task is to show how successful moral judgment depends on System III, as distinguished from System II. But the book also has a secondary aim: Sauer claims that the triple process account of moral judgment has substantive normative implications. This normative argument will be the main focus of my critical comments. But first, let's return to Sauer's descriptive psychological project, which takes up the bulk of the book.

Chapter 1 begins with a concise summary of the classic distinction between System I and System II and how this distinction has been applied to moral judgment. This chapter is a quick, clear, up-to-date introduction to the dual-process literature that I would happily assign in a graduate seminar. Chapter 2 then presents the triple process theory and reviews the main lines of evidence in favor of distinguishing System III from System II. One line of evidence, already mentioned, comes from cases where System III breaks down while System II continues functioning. Beyond moral dumbfounding, other cases of selective System III breakdown include delusional disorders, in which subjects construct complex, coherent narratives to justify their delusional beliefs while appearing completely unable to question those beliefs (22), and moral disengagement, in which subjects construct rationalizations to make self-serving immoral behaviors appear morally justified (75-76). Another line of empirical support comes from individual differences (23-25): when a cognitive task requires one to override an initial snap judgment, subjects' success is best predicted not by their general cognitive ability (System II), but instead by their disposition to reflect critically (System III).

Having defended the triple process theory, Sauer sets out in Chapter 3 to apply it. One application is the normative argument to which we will turn shortly. The other major task is to show how the triple process theory sheds light on the sources of error in moral judgment (56-83). This part of the chapter surveys a dizzying array of work on errors in moral judgment. I'll admit that here I began to lose the argumentative forest for the empirical trees. Some failures of moral judgment -- namely, those involving rationalization -- the triple process theory clearly helps to explain. But it was not apparent to me how the triple process theory adds to our understanding of some of the other moral errors Sauer reviews, such as zero-sum thinking (67-68), compassion fade (70), or ingroup bias (77-78). Sauer explains these errors by appeal to failures in "moral mindware," by which he means knowledge and concepts that aid moral judgment (see 64-65). But since the concept of "mindware" doesn't appeal to the System II / III distinction, I don't see why the same explanation wouldn't be available to a classic dual-process theorist. So, while I'm convinced that the triple process approach is theoretically fruitful, I'm not sure that its explanatory reach is as broad as Sauer thinks.

Let me now turn to Sauer's argument that the triple process theory has substantive normative implications (46-56): "Triple Process moral psychology vindicates some moral judgments at the expense of others" (46). More specifically, his claim is that "the Triple Process account shows that progressive moral intuitions are epistemically and morally preferable to conservative moral intuitions" (46-47). Set aside the question of what moral views count as progressive vs. conservative: I am more interested in the form of Sauer's argument than the particular conclusion he uses it to support.[2]

The argument starts from experiments showing that subjects more disposed to engage in reflective, System III thinking are more likely to endorse progressive moral views, while less reflective subjects are more likely to endorse conservative views (53-54, 62-64). This empirical observation provides the basis for what Sauer calls a vindicating argument for progressivism (49). Subjects who are better at System III reflection are more successful in various non-moral cognitive tasks (23-25), indicating that reflective System III thinking is epistemically superior to mere System II rationalization. So, the data show that subjects who employ an epistemically superior method -- System III thinking -- are more likely to endorse progressive moral views. Since beliefs arrived at by an epistemically superior method are more likely to be true, we can conclude that progressive moral views are more likely to be true (49).

The question I want to ask is: can an argument of this form help to resolve any debates in moral philosophy? Suppose that A and B are philosophers who disagree about some moral claim p. A believes that p is true; B believes that p is false. Now, Sauer comes along and presents an experiment demonstrating that subjects who are better at System III reflection are more likely to agree with B that p is false, while subjects who are less reflectively disposed are more likely to agree with A that p is true. Upon hearing this, should A reduce her confidence in her belief that p?

I don't think so, and here's why. Our hypothetical experiment shows that the experimental subjects who believe that p formed their beliefs by an epistemically inferior method, i.e. unreflective thought. Does this give us reason to think that A formed her belief that p by means of an epistemically inferior, because insufficiently reflective, method? Only if we assume that the method by which A formed her belief that p is the same as the method by which the experimental subjects formed their belief that p. But there is good reason to doubt this assumption. The experimental subjects came to believe that p after brief, casual contemplation at the prompting of an experimenter. By contrast, A is a professional philosopher. She's been thinking about whether p for years; she's read books and articles arguing for p and against p; she's actively sought out objections to her view that p from colleagues like B; in the classroom, she's done her best to defend ~p against her students' objections. Not only are these epistemic methods different, they differ along exactly the dimension that is at issue: namely, System III reflection. To be trained as a philosopher is, in large part, to be trained in the use of System III. We are taught to call our most basic intuitions into question and to charitably consider the strongest arguments against our own views. So, granting that the subjects in our hypothetical experiment came to judge that p because they failed to reflect on their intuitions, I see no reason to infer that a philosopher who judges that p does so because she has failed to reflect on her intuitions. Most people unreflectively believe that they know that they have hands; that does not imply that Timothy Williamson's belief that he knows he has hands is unreflective.

The upshot is that Sauer's normative argument loses its force as soon as it is applied to any question about which philosophers disagree. Any such application will require an inferential step from "experimental subjects who judge that p are less reflective than experimental subjects who judge that ~p" to "philosophers who judge that p are less reflective than philosophers who judge that ~p." But this inference is not warranted. If a question is up for serious philosophical debate, that is strong evidence that people actively engaged in System III reflection can reach conflicting verdicts about it. Now, if there is a moral view on which nearly everyone who engages in any System III reflection agrees, then Sauer's argument would support that view, since it would be reasonable to infer that anyone who believed the opposite view was being insufficiently reflective. Sauer's progressivism is so broadly defined that it may be such a case. But this shows the limits of the argument's reach: it can only be used to support a view that is already the philosophical consensus.

For similar reasons, I doubt that we will make much progress in moral philosophy by studying the cognitive biases of non-philosophers. Implicit in the broad debunking methodology pursued by Sauer and other moral psychologists (e.g. Greene 2008) are two assumptions that, at minimum, need more defense. The first is that the cognitive methods employed by philosophers are not significantly different from the cognitive methods of non-philosophers. This assumption is implicit in any inference from data about the latter to conclusions about the former. As I've said, I think this assumption is questionable. You don't have to believe that philosophers are paragons of epistemic virtue to think that the kind of cognitive work that goes into writing a dissertation is different from that involved in filling out a survey.

The second, and I think deeper, assumption underlying Sauer's methodology is a certain diagnosis of moral disagreement: moral disagreement is the product of flawed, biased cognition. If two people disagree about a moral question, that must be because one of them is subject to some cognitive "bug" (47). Thus the way to resolve a moral debate is to discover which side is committing a cognitive error. This motivates the debunking project: to resolve the philosophical debate about whether p, you pose the same question to experimental subjects, looking to uncover some bias or flaw in their thinking. Discovering that your subjects who endorse p are unreflective or subject to "contaminated moral mindware" (76), you infer that the philosophers who endorse p must be committing the same error, and declare victory for Team ~p. While I've argued that this inference is too hasty, I think it makes sense if you are convinced that most moral disagreements are explained by persistent cognitive biases.

I accept a different diagnosis of moral disagreement: moral philosophy is hard.[3] The moral truth is difficult to discover; even moral platitudes tend to resist easy explanation. When two philosophers disagree about a moral question, that isn't because one of them is secretly biased or thinking defectively. There is disagreement in moral philosophy for the same reason there is disagreement in any serious field of inquiry: the answers to the questions aren't obvious, and so intelligent, thoughtful, reflective people can assess the balance of evidence differently.

REFERENCES

Greene, Joshua. 2008. The secret joke of Kant's soul. In Walter Sinnott-Armstrong (ed.), Moral Psychology, Vol. 3: The Neuroscience of Morality: Emotion, Brain Disorders, and Development, pp. 35-80. Cambridge, MA: MIT Press.

Haidt, Jonathan. 2001. The emotional dog and its rational tail: A social intuitionist approach to moral judgment. Psychological Review, 108: 814-834.

Stanovich, Keith. 2011. Rationality and the Reflective Mind. Oxford: Oxford University Press.