How (And When) To Think Like A Philosopher


As an undergraduate, I majored in philosophy — a purportedly useless major, except that it teaches you how to think, write and speak.

The skills I was learning from working through papers and arguments extended well beyond the coursework itself, yielding habitual patterns of reasoning that made me a more discerning scientist, a more careful writer and a better thinker all around. Within and beyond philosophy, I was learning to spot poor arguments, uncover hidden assumptions, tease out subtle implications and recognize false dichotomies. (It was around this time that my then-boyfriend, now husband, jokingly gifted me a modified light box with a button that I could press to light up the damning message: "Distinction blurred!")

Of course, good thinking isn't the sole province of philosophy. Training in any discipline or area of expertise teaches you habits of mind that — hopefully — lead to better performance in that domain. But philosophy is unusual in its explicit focus on the structure of arguments across a broad range of topics, from the meaning of words to the nature of knowledge, from ethics to animal minds. It makes sense, then, that training in philosophy might be unusual in its potential to yield general-purpose tools for better thinking.

So what do these tools for better thinking look like?

In a new article published in Aeon, philosopher Alan Hájek presents a "philosophy tool kit," sharing some common philosophical moves that apply both within and beyond academic philosophy. What Hájek offers isn't logic or probability theory (though that's useful, too), but rather common heuristics or "rules of thumb" that help philosophers quickly identify problematic claims or assumptions. These heuristics are intended to make difficult reasoning tasks easier for the philosophically untrained, though Hájek is clear that "there are no shortcuts to profundity" — the tools are a starter set, not a complete kit nor blueprints for the construction of worthwhile arguments.

Several of Hájek's tools involve questioning assumptions in the way a claim or question is posed. For instance, asking what the right thing to do is presupposes that there is a single right thing to do. In some cases, that presupposition is wrong: There could be no right thing to do, or there could be many right things to do. Encountering "the" should therefore set off a minor alarm, prompting you to consider whether the presupposition is warranted.

A second tool comes in handy when evaluating a claim that is supposed to apply to many cases. Here it's appropriate to look for counterexamples that might disconfirm or limit the scope of the claim, but where might such counterexamples be found? It's often effective to consider extreme or "edge" cases. For instance, in considering whether there are counterexamples to the claim that everything has a cause, one might consider what caused the first cause, or the set of all causes. As a more pedestrian example, evaluating the claim that "everyone likes a good book" might prompt you to consider extremes in age (do newborns like a good book?) or kinds of people (do supervillains like a good book?).

For additional tools, and a more complete exposition, readers are encouraged to read Hájek's article. But I want to end with a final reflection for readers of 13.7. If these thinking tools are so useful, why do we need special training to acquire them? Why aren't they built into our cognitive machinery, or acquired through our years of experience evaluating claims and arguments in everyday life?

Hájek identifies one reason in his discussion of what he calls the "contrastive-stress heuristic": We fall prey to various cognitive biases that can lead us astray. One example is confirmation bias, the tendency to seek and favor evidence that supports, rather than challenges, the hypotheses we (prefer to) believe. A strategy like systematically considering alternative possibilities — common to both philosophical and scientific thinking — is useful, in part, because it helps overcome this bias.

But here's a second (and more speculative) hypothesis for why many habits of philosophical thinking might not come naturally. The hypothesis is that some tools for critical evaluation run counter to another valuable set of tools: our tools for effective social engagement. These tools help us make sense of what someone is saying by encouraging us to interpret underspecified claims in the most positive light; they help us coordinate conversation by establishing common ground.

When someone asks us for "the right thing to do," we're inclined to engage in the conversation they've invited us to engage in: one in which we assume there is a right thing to do, and we help them to find it. When someone says "everyone likes a good book," we understand them to be telling us something about the kinds of people relevant to our current conversational context, not something true of every single person.

If this is right, then some forms of critical evaluation and philosophical thinking are hard because they force us to suspend other habits of mind: habits that serve us well when our goal is to engage or persuade or befriend, but less well when our goal is to arrive at a precise characterization of what's true, or of what follows from what. The trick, then, is not only to acquire Hájek's philosophy tool kit, but to know when to use it.

Tania Lombrozo is a psychology professor at the University of California, Berkeley. She writes about psychology, cognitive science and philosophy, with occasional forays into parenting and veganism. You can keep up with more of what she is thinking on Twitter: @TaniaLombrozo