by Steve Fuller

Post-Truth Is More about Justification Than Truth

It is popular, even among philosophers, to think about ‘post-truth’ as entailing a disregard for the truth, as if the truth were foremost in the minds of people – again including philosophers – who are in search of knowledge. Yet, when Plato in the Theaetetus presented what philosophers generally still regard as the working definition of knowledge – ‘justified true belief’ – his emphasis was on the ‘justified’ rather than the ‘true’.

Plato’s paradigm case is a lawyer who manages to get a jury to reach the correct judgement in a case but for the wrong reasons. Indeed, reflecting on the normal practice of trial lawyers, Plato seemed to suggest that it may be all too easy to draw the right conclusions for the wrong reasons – and that this is something that should worry us.

The post-truth condition amounts to a demystification – if not an outright rejection – of Plato’s original worry, which is about how one justifies beliefs rather than whether the beliefs themselves are true. The ‘post-truther’ claims that worries about how beliefs are justified amount to biasing, spinning or otherwise restricting the course of inquiry.

To be sure, there may be good or bad reasons for imposing such constraints, depending on whatever values and ends are thought to be served by acquiring knowledge of the truth. But that broadly ‘axiological’ discussion is normally occluded by talk of ‘justification’, which quickly reduces to attempts to demarcate ‘rational’ and ‘irrational’ paths to inquiry, which apply not only to the case at hand but to all cases in which knowledge of the truth might be sought. The many efforts starting with Bacon and Descartes in the early modern era to define a ‘method’ that in principle might justify any true belief are the culmination of this line of thought.

In Post-Truth: Knowledge as a Power Game, I regard this fixation on the justification of knowledge claims as involving the exercise of modal power, by which I mean control over what people come to think is possible, impossible, necessary and contingent. In the Theaetetus, the lawyer is guilty of presenting the contingent as if it were necessary – the ‘falseness’ of what he says comes not from the falseness of his premises – each of which may be presumed to be true – but from their contribution to the validity of his overall argument.

In the Republic, Plato had his own signature way of handling the matter, namely, to drive the maximum logical wedge between ‘true’ and ‘false’ so that they correspond to what is necessary and what is impossible. Thus, Plato depicted ‘true’ and ‘false’ as contraries. He could then dismiss the poets and playwrights who offered alternative visions of reality to the ‘philosophically correct’ one as dangerous purveyors of fictions who have no place in his ideal polity.

In Post-Truth, I observe that Plato’s signature was present in the open appeal to fear in the face of uncertainty if, say, American voters failed to vote for Hillary Clinton as President or British voters failed to vote to remain in the European Union. ‘Either my way or no way’, so to speak, was the modal power message. Rhetorically it amounts to converting a state of uncertainty into one of excessively high risk, so as to make oneself appear to be the embodiment of rationality and the opponent seem the face of irrationality.

It turns out that the voters failed to be moved by this rhetoric, though it did manage to convince the Oxford English Dictionary to make ‘post-truth’ 2016’s word of the year. As of this writing, voters in both countries have yet to be persuaded that they made the wrong decision. They may still change their minds, of course. But it’s unlikely that Plato’s rhetoric of imminent danger will play a role, given the amount of time that has now passed for US and UK voters to get used to outcomes that had surprised all sides.

In contrast, Bacon, Descartes and other promoters of the scientific method in the modern era have drawn the logical battle lines for modal power more narrowly. For them, ‘true’ and ‘false’ correspond to what is necessary and what is contingent – a matter of contradiction, not contrariety.

For Bacon, the art of experiment is ultimately about trying to construct a case in which one of two hypotheses that predict the same thing in every other case ends up failing to do so in the test case. Such an experiment is ‘crucial’ if it manages to reveal the contingent pretender to necessity. Karl Popper, arguably Bacon’s smartest reader since Kant, promoted this strategy to a world-view. For his part, Descartes proposed that God’s existence ensured that such a case existed, even if only in God’s mind, so we don’t need to worry about an ‘evil demon’ whose seamless representation of reality might be forever fooling us into mistaking the contingent for the necessary.

Neither Bacon nor Descartes denied the prima facie verisimilitude of most people’s claims to knowledge, but they inquired further about whether such so-called ‘knowledge’ was acquired by the appropriate means. If the alleged ‘knowledge’ simply reflects an alignment between the truth and a modus operandi that is itself not especially truth-oriented, then it amounts to an accidental stumbling upon the truth, not knowledge at all: a case of epistemic contingency, not necessity.

Consider cases in which an astrologer correctly predicts a person’s fate, a rain dance is followed by rain or, more generally, causation is claimed on the back of correlation. In the history of modern philosophy, ‘sensation’ or ‘sense perception’ has been the catchall term for all these simulacra of knowledge, with the stress placed on what people receive passively from the world rather than actively construct for themselves. Following Hume, a sceptical spin on ‘induction’ cast these simulacra as a pseudo-method. And in our more explicitly ‘cognitivist’ age, the same phenomena provide evidence of ‘confirmation bias’.

Post-Truth Is about the Gaming Spirit

Truth be told, ‘confirmation bias’ is a very post-truth way of talking about not only the sorts of inferences that Bacon and Descartes would regard as merely pseudo-justificatory but also the ones that they would accept as justificatory. The word ‘bias’ suggests that the inferences concerned are both motivated and restrictive. These are predicates that the post-truther is happy to work with because they suggest the workings of power over the imagination. The most natural way to appreciate this point is in the context of games.

In a gaming spirit, we would say that all of the contingent ‘unjustified’ ways of believing what is true constitute ‘cheating’ because those involved had not played by the rules. But at that point the post-truther asks: Why play this game rather than some other to determine the truth? What is so great about this set of rules? To respond by making exclusive reference to the associated benefits delivered by the established game – that is, without reference to its corresponding costs or a cost-benefit balance sheet of alternative games – is to open oneself to the charge of question-begging.

An economist would simply say that it is a failure to factor in ‘opportunity costs’. And perhaps that makes economists the original post-truthers. Put in a way that even philosophers would find hard to dispute, the post-truth condition takes very seriously that specific values are embedded in any set of rules, which serve to bias the resulting game towards players with certain skills and dispositions. In short, confirmation bias is inevitable in whatever mode of justification one adopts.

Consider the widespread judgement that homoeopathic treatments are inferior to normal ‘allopathic’ medical ones for most ailments. When medical scientists conduct the relevant tests, homoeopathy tends to perform quite poorly. Such tests normally focus on the recovery rate of the physical source of the ailment undergoing treatment. However, homoeopaths complain that this begs the question, since their brand of medicine regards the physician-patient interaction as constitutive of treatment, something which medical scientists often dismiss as a ‘placebo effect’. Such scientists think about medical treatment as being mainly about a specific part or aspect of the body rather than the whole person. Thus, homoeopaths argue that a fair test of their treatments requires the registering of subjects’ own sense of well-being, independent of the physical state of their ailments.

What I have just described is a dispute about the rules of the game – in this case, the ‘medical science’ game – in terms of which ‘winners’, ‘losers’, ‘fair play’ and ‘cheating’ are then defined. To be sure, game-talk was more common in philosophical discourse a couple of generations ago, when the later Wittgenstein was fashionable and the ‘language game’ was the paradigmatic way to address matters of justification in nearly all branches of philosophy. At roughly the same time, what economists call ‘game theory’ – arguably the most lasting theoretical contribution of the Cold War – was introduced into philosophy at a conceptually generalized, largely non-mathematical level via works such as David Lewis’ Convention. However, Wittgenstein and Lewis weren’t quite playing the same game.

Wittgenstein’s sense of game had been clearly parasitic on the anthropological task of understanding what passes as normal in an alien society that one doesn’t wish to disturb unnecessarily. That last clause made sense in the late imperial period, when there was a strategic advantage to allowing the natives to conduct their lives in their own ways as long as they were willing to play onside with the imperialists in their games against other imperialists. That was the Realpolitik that gave rise to modern relativism. In contrast, game theory served to alert philosophers to the unintended if not paradoxical consequences of deciding to play one game rather than another in an open-ended situation. Here there is more to play for, so to speak, than in the Wittgensteinian sense of game.

The objective of the imperial game had been to occupy strategically valuable spaces, very much in the spirit of two-dimensional board games, which is reflected in the typical layout of a ‘war room’. However, the Cold War opened up what might be called the ‘meta-gaming imaginary’. The focus shifted away from players accepting the rules of a pre-existing game to their deciding the rules by which they are then configured in terms of their relative advantage in play. The most exemplary philosophical legacy of this shift in perspective has been John Rawls’ ‘original position’ in A Theory of Justice.

This shift in perspective reflected the fact that both the USA and the USSR had made universal claims for their own versions of such pivotal concepts as ‘democracy’, ‘freedom’, ‘equality’ and ‘justice’. To be sure, these versions drew largely from the same source, Western political history and philosophy. Yet on their face the conceptions proposed by the two sides were mutually exclusive: One side’s version of democracy was the other’s version of tyranny. Whom to believe? The reason the Cold War was often called an ‘ideological’ struggle was that the battle was ultimately over the frame of reference in which one should understand these key concepts.

In the early days of the Cold War, the Scottish philosopher W.B. Gallie spoke of ‘essentially contested concepts’ to capture the potential of certain politically or religiously charged ideas to stymie rational debate because they are prone to incommensurable interpretations. However, Gallie believed that relatively few concepts were essentially contested. Most were more or less governed like Wittgensteinian language games. In the post-truth condition, all concepts are treated as ‘essentially contested’.

And What Becomes of Truth in the Post-Truth Condition?

By contrast, in what in Post-Truth I call the ‘truth condition’, there is agreement on the frame of reference in which key concepts are interpreted. This then allows a specific set of rules to govern a game in which opposing players are disposed to agree on the outcomes and the intervening judgement calls on matters of play. The scientific method has been historically associated with this state of affairs, which ideally would also function as the ‘game of games’ governing everything. (Hilary Putnam came close to this idea in the 1970s in his account of science’s role in fixing the reference of terms as part of the ‘division of linguistic labour’.) However, as we have seen in the case of homoeopathy, this is easier said than done.

To be sure, the dream of an ultimate ‘truth condition’ remained quite strong during the Cold War. The American sociologist Daniel Bell famously predicted an ‘end of ideology’ as scientific thinking and its technological applications increasingly populated humanity’s decision-making environments. Later Bell spoke of our living in a ‘post-industrial society’, in which technocracy would become the accepted face of politics. His was a world in which Hillary Clinton would have easily beaten Trump and the UK would have voted to remain in the European Union. So close yet so far from true. What went wrong?

To cut a long story short, ‘technocracy’ did indeed become ‘the name of the game’, but it was not practised in the sort of way that Bell had envisaged. His technocratic utopia had been an updated version of a logical positivist-style vision, one in which ideological differences are ultimately resolved by exchanging inflammatory words for computable data points. But instead, what happened was that the ideological differences between the USA and the USSR were displaced by a game which forced them to play on the same side, because both were equally threatened by a negative outcome.

I refer here, of course, to the nuclear arms race, which featured the prospect of ‘mutually assured destruction’. It became the de facto ‘game of games’, which ended once the cost of contributing resources to it bankrupted the USSR. Thus, the original ideological differences between the USA and the USSR were never resolved in their own terms, strictly speaking. This helps to explain the continued unconsummated Marxisant yearning among Western academics who are critical of American foreign policy, both then and now.

What Bell ultimately failed to see – at least from a post-truth standpoint – is that the Cold War shifted the modal balance of power from contingency back to impossibility, from Bacon to Plato. Thus, the fear of mutually assured destruction focused minds on both sides of the Iron Curtain on ‘thinking the unthinkable’, to recall Herman Kahn’s resonant phrase from the period. It enabled an unprecedented global alignment of scientific and political authority – that is, until one of the key combatants found that it was no longer economically sustainable. In our own day, many would see the prospect of an imminent climate apocalypse as the new truth condition, whereby all sides come together to avoid the same negative outcome. Perhaps all we need is the right game to make this happen.

Steve Fuller is Auguste Comte Professor of Social Epistemology in the Department of Sociology at the University of Warwick, UK. Originally trained in history and philosophy of science, he is the author of more than twenty books. His most recent work has been concerned with the future of humanity (or ‘Humanity 2.0’). His latest books are Knowledge: The Philosophical Quest in History (Routledge, 2015); The Academic Caesar: University Leadership is Hard (Sage, 2016) and Post-Truth: Knowledge as a Power Game (Anthem, 2018). He is currently writing a book, tentatively entitled ‘Nietzschean Meditations: Late Night Thoughts of the Last Human’.