I’m starting to come around to the view that there is something weird going on with students these days, where they are coming into the world with rather unrealistic expectations about how they will be treated. For the first time the other day, I came across the suggestion – made by a grad student – that a philosophical research talk should be a “safe space,” in which audience members are expected to be “tough yet supportive.” (I actually don’t quite know what this means – if someone is saying something totally wrong, it’s a bit hard to point that out while at the same time remaining supportive. What are you supposed to say, “you seem like a really nice person, but you’re totally wrong”? Or maybe, “well, this argument doesn’t work, but keep trying, I’m sure you’ll come up with a better one next time!”)

Anyhow, as most people who are familiar with how philosophy works will know, this is not the way the discipline currently operates. Philosophy has what could best be described as an adversarial disciplinary culture, something that manifests itself most clearly in how the Q&A goes after a research talk. Basically, after people present their philosophical views, the audience members try to tear them apart. Every question is a variation on “here’s why I think you’re wrong…” It is not supportive. Also, because this is the expectation in the discipline, philosophers tend not to preface their comments with ingratiating verbiage, like “first let me thank you for the rich and thought-provoking discussion” (the way they do in political theory, for instance). Philosophers will go straight to the “here’s why I think you’re wrong” part.

I think that there are good reasons for wanting to preserve this aspect of the discipline, so I would like to explain first what I mean by “adversarial,” and then second defend these practices. (I should mention, in passing, that a great deal of the complaints about adversarialism have come from people who think that the underrepresentation of women in the discipline is a consequence of those disciplinary practices. I happen to disagree with this – law is also highly adversarial, but that doesn’t seem to deter too many women – but I’m not going to get into that too deeply.)

Also, I would like to distinguish between “being adversarial” and “being an asshole” or “being confrontational.” A lot of people in philosophy are assholes, but that is a distinct phenomenon. To illustrate the distinction, I’d like to draw a contrast between philosophy and surgery (a discipline that I happen to know well, because my wife is an academic surgeon). Surgeons are notorious assholes, a tendency that is clearly encouraged by the disciplinary culture. They are also extremely confrontational, sometimes (to me) shockingly so. For instance, they lose their temper and yell at each other a great deal – my wife’s division head used to come into her office and literally yell at her for 10-15 minutes straight. They also swear constantly. At the same time, the disciplinary culture, with respect to research talks, is weirdly (to me) non-confrontational. What this means, in practice, is that when a person gives a talk, the questions will almost always be softballs, like “how did you exclude this confounder?” or “can you say more about what you think is responsible for x?” Then, as soon as people are out in the hallway, everyone will be like, “wow, what a piece-of-shit study that was,” or “holy crap, they are killing patients right and left at that hospital,” etc. And yet they never say it to the speaker! I don’t know how many times I’ve heard surgeons complaining about awful research and terrible talks, and I’ll say “did you tell them that?” and the response is always “oh no, of course not.”

This example is illuminating, because it shows that the adversarialism of philosophical exchange is not merely a consequence of the fact that so many philosophers are assholes. As the example of surgery shows, it is perfectly possible to have a discipline full of assholes who nevertheless sustain a non-adversarial discourse around academic research. In fact, I suspect the causality in philosophy runs in the opposite direction – it is not that being an asshole is positively encouraged, but that, because of the adversarial norms, the discipline tends to attract more than its share of assholes.

The reason I’m getting into so much detail on this point is that I don’t want to be seen to be defending assholes. The fact that there are so many assholes in philosophy is, for me, a major negative for the discipline, and one of the reasons that, over the years, I’ve found it increasingly painful to be around academic philosophers. I would, however, like to defend the adversarialism of the disciplinary culture of philosophy. (I think that it retains its value despite the fact that it tends to attract assholes to the discipline.)

One other little thing. Philosophy is also, in its disciplinary culture, fundamentally a problem-creating, and not a problem-solving, discipline. (Here I think a useful contrast can be drawn between philosophy and economics, another discipline that I have a lot of contact with, to which I will return in a moment.) A lot of this comes, I think, from the influence of Socrates, and of the importance of Socratic method. Basically, Socrates went around Athens causing problems for people, by taking their everyday understanding of concepts like “justice” and showing that it made no sense. Skepticism does something broadly similar, and the fascination with paradoxes and puzzles has remained central to the discipline. Philosophers have built entire careers around discovering new problems (think of Parfit’s “non-identity” problem, or Gettier, or Goodman, etc.)

I must admit that this aspect of the discipline is one that I sometimes find frustrating, particularly when you want to do something fairly innocent, like come up with a model of something. Speaking roughly, my experience has been that economists will at least sometimes want to help you, and so will make positive suggestions, along the lines of “maybe you should try doing it this way.” Philosophers, by contrast, never have any positive suggestions. Even if they appear to be offering you a cup to drink from, you can be certain it will be poisoned. This is great for encouraging critical thinking, but the discipline as a whole is a very negative one. Basically, colleagues exist to tell you why you’re wrong.

This all sounds like criticism, but it’s not. It’s actually important that philosophy operate in this way. To see why, let’s go back to the case of surgery for a moment. When I ask academic surgeons why they never pose challenging questions at research talks, the answer is usually the same – they don’t think it matters, because “it’ll never get published,” or else “the referees will catch it.” In particular, when academic surgeons make methodological errors, or they do their stats all wrong (which they often do), everyone knows that it will get picked up by referees, and so they don’t feel any obligation to make things uncomfortable for the speaker. (By contrast, in “M&M rounds,” when they review the treatment of patients in hospital, they are much more likely to pose challenging questions, precisely because there is no other system of quality control in place.)

In other words, the practice of medicine, as well as scientific work more generally, is subject to much stricter methodological constraints than philosophy is. Consider, for instance, a flaw in our thinking such as confirmation bias. We human beings are all terrible at “thinking the negative.” When it comes to testing a theory, our tendency is to look only for evidence that supports the hypothesis. What we should also be doing is figuring out what evidence would disconfirm the hypothesis, and then actively seeking that out as well, in order to establish that it is not there. (This is the point of Peter Wason’s famous “2, 4, 6” test, which everyone fails. I wrote about this here last week.) There are various features of scientific method – study design, the use of controls, the concern about replication – that all serve in various ways to control confirmation bias. (That is, in fact, the most essential difference between medicine and quackery. The former is based on controlled studies, the latter on “testimonials.”)
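For readers who haven’t seen the 2, 4, 6 task, its logic can be made concrete with a toy sketch in code (the hidden rule and the subject’s hypothesis below are the ones usually reported in descriptions of the experiment; this is an illustration, not Wason’s actual protocol):

```python
# Toy sketch of Wason's "2, 4, 6" task. The experimenter has a hidden
# rule; the subject proposes triples and is told whether each fits it.

def hidden_rule(triple):
    """The experimenter's actual rule: any strictly ascending triple."""
    a, b, c = triple
    return a < b < c

def my_hypothesis(triple):
    """The subject's typical guess: each number goes up by 2."""
    a, b, c = triple
    return b - a == 2 and c - b == 2

# Confirmation bias in action: testing ONLY triples that fit the guess.
confirming_tests = [(1, 3, 5), (10, 12, 14), (100, 102, 104)]
print(all(hidden_rule(t) for t in confirming_tests))  # prints True

# What we should also do: test a triple that would DISCONFIRM the guess.
disconfirming_test = (1, 2, 3)          # violates "add 2 each time"...
print(hidden_rule(disconfirming_test))  # ...yet fits the hidden rule: True
```

A subject who runs only the first kind of test gets nothing but “yes” answers and announces the wrong rule with full confidence; a single disconfirming probe would have exposed the error.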

Philosophers are just as prone to confirmation bias as anyone else. Indeed, anyone who has read the literature on confirmation bias, and understands just what a profound and pervasive bias it is, must at some point have begun to suspect that philosophy is “all confirmation bias all the time.” After I failed the Wason 2, 4, 6 test, I stopped to think about how much time and effort I have put into figuring out what it would take to prove my own philosophical views false, or cause me to change my mind, and then how much time I have actually spent investigating whether these conditions obtain. The answer was “very little.” I invest a tremendous amount of effort in the positive task of working out the view and marshalling the evidence that supports it, but expend almost no effort in thinking about what would prove it wrong.

So what makes philosophy an academic discipline, rather than (as my former teacher James Johnson used to put it) “the department of data-free speculation”? Part of the reason that I don’t have to work very hard thinking of ways that my view might be wrong is that I have colleagues who enjoy nothing better. In other words, if there are obvious blind spots in my reasoning, I can be quite confident that they will be pointed out to me, in one of those unsupportive, adversarial Q&A sessions.

This occurred to me one day at a research talk, given by a distinguished colleague of mine, who was arguing that the Canadian Charter of Rights and Freedoms was not really deontological, despite its surface grammar, but was really a consequentialist document. He proceeded to go through and give a consequentialist reading of all the major rights and the way they had been interpreted by the courts. During the Q&A, I asked what seemed to me the obvious question, which was “what would a deontologist have to do to formulate a right that would be invulnerable to one of your consequentialist readings?” In other words, the question was just “imagine you were wrong – what would that look like?” Somewhat to my surprise (this was before I had read much about confirmation bias), he was stumped by the question – and even came up to me after the talk and said, “you know, I never really thought about that.”

To me, this is a great example of how the disciplinary culture of philosophy works, when it works well. Wilfrid Sellars once defined philosophy as the study of how things, in the most general sense of the term, hang together, in the most general sense of the term. We’re doing pretty abstract work, and we’re often trying to see how things fit together at a very general level. What makes us different from conspiracy theorists, or people who claim to see Jesus in their toast? Or what stops us from just making stuff up and believing it? I really think that the only thing keeping us tethered to the world is the disciplinary culture, and the fact that we have to defend ourselves, in a room full of people who have spent decades listening to arguments and identifying bad ones.