by Quentin Ruyant

One of the main tasks of philosophy is to clarify conceptual problems and sketch the landscape of possible solutions to these problems. Of course, individual philosophers often tend to defend specific positions, but what emerges at the level of the community is, generally, a landscape of possibilities.

Take, for example, the question of scientific realism: what is the status of scientific theories? Should they be interpreted as literal descriptions of reality? Or are they rather predictive instruments, tools for interacting with reality? Or perhaps they are mere social constructions? The standard way of framing this problem that emerged from philosophical discussions over the years is to decompose it into three distinct questions:

Metaphysical question: does nature, the object of scientific inquiry, exist independently of our conception and observations of it? Idealists and radical constructivists would deny that it does.

Semantic question: what makes theories true? Are they literal descriptions of nature? Is there a direct correspondence between language (including formal languages or mathematical models) and the fundamental constitution of nature, or does the meaning of our theoretical statements reduce to their conditions of verification? Instrumentalists would typically opt for the latter view.

Epistemic question: are we in a position to know that our theories are at least approximately true? Empiricists would say that, insofar as our theories purport to say more than what is verifiable at the level of observable phenomena, we are not in a position to know that they are any more true than (perhaps unconceived) alternative theories with as much empirical confirmation.

Scientific realism is thus the position that reality exists independently of the mind, that our theories should be interpreted as literal (if approximate) descriptions of reality, and that we are in a position to know that they are (at least approximately) true.

The conceptual landscape we are considering also comprises arguments for and against each position. Nowadays, the semantic and metaphysical propositions of scientific realism are often accepted by philosophers (at least in the analytic tradition). Only the epistemic aspect is still under discussion.

One of the main arguments for scientific realism, famously formulated by Putnam, is that realism is the only position that does not make a miracle of the predictive success of science. The point is that anti-realists have no convincing explanation for the impressive success of science (notably at making novel, unexpected predictions), while realists have a simple one: our theories work because they correctly describe reality.

Conversely, one of the main arguments against scientific realism is the so-called pessimistic meta-induction: past, abandoned theories were false after all (there are no gravitational forces as Newton had thought, only deformations of space-time according to relativity), so it is reasonable to expect that contemporary theories will also eventually be replaced by different ones. It is therefore unreasonable to believe in their truth.

Some have sought a compromise between realism [1] and anti-realism, meeting the challenge of the pessimistic induction by restricting realism to the structural content of theories (the lawful relations between entities, rather than the entities themselves), which, they say, is retained through theory change. That position is accordingly known as structural realism [2]. Others, for the same reasons, want to restrict realism to the concrete entities with which we causally interact. This is called entity realism [3].

Instrumentalism and quantum mechanics

As noted above, most arguments in this debate are epistemic in nature: they concern scientific knowledge in general. They don’t get into too much detail about the actual content of scientific theories, except sometimes for the purpose of illustration. The argument I wish to defend here is that, on the contrary, the specific content of scientific theories should not be overlooked in these discussions, and that one theory in particular poses a serious threat to scientific realism, namely quantum mechanics. This theory, I will argue, has no straightforward, “literal” interpretation.

If this is correct, then scientific realism loses its grip: why argue that scientific theories should be interpreted literally, if no such literal interpretation exists for our best physical theories (which purportedly address the most fundamental levels of reality)? Shouldn't we go back to a more modest conception of the meaning of our theories and accept a more humble view of the status of our representations? Perhaps we could find a way to accommodate some of the desiderata of realism while giving up on a strict correspondence between models and reality after all.

First, let me say a word on the long-standing relationship between quantum mechanics and instrumentalism. Quantum mechanics was developed at a time when different forms of instrumentalism (denials of the semantic proposition above, for example through a verificationist theory of meaning) were prevalent. It was also a time when philosophers and scientists entertained strong intellectual relations. Famous scientists and philosophers gathered in the Vienna Circle in the 1920s. The circle gave birth to logical empiricism [4], a philosophical movement which durably influenced the philosophy of science. Instrumentalism faded out in the course of the 20th century, after the demise of logical empiricism. Instrumentalist positions were attacked by strong arguments, both internal and external to the movement, but principally in the philosophy of language [5]. Quantum mechanics remained, however, and was, so to speak, orphaned of a philosophical interpretation, as illustrated by the notorious "shut up and calculate" school of thought among some practicing scientists, which emerged after WWII.

Unfortunately for the realist, the weirdness of quantum mechanics is here to stay. Not that the theory won't be superseded by a better one. It certainly will, as standard quantum field theory, which is the fusion of quantum mechanics and relativity, does not account for gravitation. However, there are strong indications that its successor will share most of its puzzling aspects. Some of them are captured by Bell's theorem [6], which is largely independent of the theory itself, resting instead on a few uncontroversial empirical principles. The theorem's weird consequence, that no local-realist theory can account for observed phenomena, has been well confirmed by experiments, such as Aspect's in 1982 [7]. Any future theory will have to accommodate this result.
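Bell's point can be illustrated with a few lines of arithmetic. The following sketch is my own illustration, not part of the essay: it assumes the textbook singlet-state correlation E(a, b) = −cos(a − b) and the standard CHSH measurement angles.

```python
import math

# CHSH form of Bell's theorem: any local-realist model keeps the
# correlation sum S at or below 2. With the textbook singlet-state
# correlation E(a, b) = -cos(a - b) (an assumption of this sketch),
# the standard measurement angles below give S = 2*sqrt(2).

def E(a, b):
    """Quantum correlation between measurement settings a and b (radians)."""
    return -math.cos(a - b)

a, a_prime = 0.0, math.pi / 2
b, b_prime = math.pi / 4, 3 * math.pi / 4

S = abs(E(a, b) - E(a, b_prime) + E(a_prime, b) + E(a_prime, b_prime))
print(S)  # ~2.828, above the local-realist bound of 2
```

Any local-realist model must keep S at or below 2; the excess predicted by quantum mechanics is what Aspect-type experiments confirm.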

Personally, I tend to think that the theory could not have been developed without the strong instrumentalist stance of its founders, and that the empirical success of the theory calls for a compromise between the now prevalent realism of philosophers and the almost built-in instrumentalism of the theory. There are attempts to reconcile quantum mechanics with realism, but I think they face serious challenges and lead to unacceptable conclusions. Perhaps another path is preferable.

The purpose of this essay, however, is not to find this alternative path, a far too ambitious goal. More modestly, I will simply lay out the difficulties in formulating realist interpretations of quantum mechanics.

The measurement problem

There are two main difficulties facing realist interpretations of quantum mechanics: first, the measurement problem, then the problem of reference.

The measurement problem is one area where philosophers have done their job properly in clarifying the issue. In standard quantum mechanics (I will not address quantum field theory, but the problem is essentially the same), systems are described by wave-functions. Different properties can be measured on a system: its position, its momentum, its spin. A wave-function is a mathematical structure which, loosely speaking, describes the correlations between all possible measurement outcomes for these properties, including, in the case of composite systems, the possible outcomes for all combinations of measurements on distinct parts of the system, however far apart. Note that while the wave-function encodes all these measurement possibilities, not all measurements are compatible, that is, performable simultaneously (think, by analogy, of a 3D object which encodes all possible 2D perspectives on it, though only one perspective can be had at a time). Scientists call these possible ways of measuring a system "observables."

The wave-function evolves according to a linear equation, the Schrödinger equation. The coefficients associated with possible outcomes for an observable are complex numbers (a weight and a phase), which entails that (again, loosely speaking) possible outcomes may interfere with one another, at least when they are not measured (imagine that from a given 2D perspective, parts of the object overlap and interfere in a destructive or constructive way).
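What interference between possible outcomes amounts to can be seen in a toy two-path calculation. This is my own sketch with illustrative numbers, using the standard squared-modulus rule for turning amplitudes into probabilities.

```python
import math

# Toy two-path interference (illustrative numbers): both paths lead to
# the same outcome with equal weight but opposite phase.
amp1 = complex(1 / math.sqrt(2), 0)
amp2 = complex(-1 / math.sqrt(2), 0)

# A classical ignorance reading would add the two path probabilities:
classical = abs(amp1) ** 2 + abs(amp2) ** 2   # ~1.0

# Quantum mechanics adds the amplitudes first, then squares:
quantum = abs(amp1 + amp2) ** 2               # 0.0, destructive interference
```

The two rules disagree: the classical sum says the outcome is certain, while the quantum sum says it never occurs. This is why unmeasured "possible states" cannot simply be read as alternatives of which one obtains.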

In addition to this mathematical model, the Born rule tells us how to infer specific outcome probabilities from the model. This amounts to projecting the wave-function onto only one of the possible outcomes to get a probability, calculated from the corresponding coefficient.
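As a minimal sketch of the rule (my illustration; the outcome labels and coefficients are made up), the probability of an outcome is the squared modulus of its complex coefficient:

```python
import math

# Minimal Born-rule sketch: a state assigns a complex amplitude to each
# possible outcome of one observable (labels and numbers are made up).
state = {"up": complex(1 / math.sqrt(2), 0),
         "down": complex(0, -1 / math.sqrt(2))}  # equal weight, different phase

# Born rule: the probability of an outcome is the squared modulus of its
# amplitude; the phase drops out of any single probability.
probs = {outcome: abs(amp) ** 2 for outcome, amp in state.items()}
print(probs)  # ~{'up': 0.5, 'down': 0.5}
```

Note that the projection is relative to a choice of observable: the same state decomposed in a different measurement basis would yield different outcomes and probabilities.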

The problem is this: if realism is true of standard quantum mechanics, then reality, as described by the theory, is the wave-function, which encodes all possible outcomes for all possible measurements on a system; but empirical reality, against which we test the theory, is constituted of determinate measurement outcomes for specific observables only. There is thus a gap between the model and empirical reality. The gap is filled by the Born rule, but the Born rule is not part of the physical model. It is not an object, nor a process occurring in space-time: it is only a mathematical rule. It is also relative to a way of measuring the system. How can we make sense of this?

One could think that the problem is easily solved: just interpret the wave-function as an epistemic object, describing our ignorance of a real, underlying state. That is how probabilities were usually interpreted in classical physics, after all: as reflecting our ignorance. Perhaps the wave-function can be seen as a superposition of possible states, only one of which actually exists. However, the decomposition into possible states depends on the observable. How could the system know in advance how it will be observed? Remember, also, that "possible states" of an observable which are not measured can interfere with each other; they all potentially contribute, at least statistically, to the final outcome for the observable which is eventually measured. How could they do that if they did not all exist? But if they all exist, why do we ever observe determinate outcomes, and not superpositions thereof? What happens during a measurement?

As I said, philosophers did a great job of clarifying the problem; here is one of its formulations (that I take from Maudlin [8]) in terms of a trilemma: three propositions which cannot all be accepted together:

1. The wave-function is a complete description of the state of a system.
2. The wave-function evolves according to a linear dynamics (the Schrödinger equation).
3. All measurements have determinate outcomes.

Following (1), a system can be viewed, for any observable, as a "superposition of states." Following (2), a superposition of states will necessarily evolve into another superposition of states; there is no physical "projection." Let us accept (1) and (2) and describe the state of a measuring apparatus coupled with a system as a composite wave-function; the measuring apparatus is also a physical system, after all. At the end of an experiment, the system+apparatus will be in a superposition of states, contradicting (3): the experiment does not have a determinate outcome.
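The step from linearity to an indeterminate outcome can be sketched in a few lines of code. This toy model is my own illustration: the labels and the "interaction" map are stand-ins for a unitary measurement dynamics, not anyone's actual proposal.

```python
import math

# Toy model of linear measurement dynamics. A state is a list of
# ((system-label, apparatus-label), amplitude) terms.

def interact(term):
    """The apparatus, initially 'ready', records the system's state."""
    (system, apparatus), amplitude = term
    assert apparatus == "ready"
    return ((system, "pointer-" + system), amplitude)

def evolve(state):
    # Linearity: the interaction acts term by term on a superposition.
    return [interact(term) for term in state]

c = 1 / math.sqrt(2)
initial = [(("up", "ready"), c), (("down", "ready"), c)]  # superposed system
final = evolve(initial)
# final contains BOTH (("up", "pointer-up"), c) and
# (("down", "pointer-down"), c).
```

Because the map acts term by term, the system's superposition is inherited by the apparatus: the final state contains both pointer readings, and nothing in the dynamics singles one out.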

The logical conclusion of the argument is that we have to abandon one of the three propositions. This is the measurement problem.

The prospects of realist solutions

One benefit of this formulation is that it allows for a classification of possible solutions to the problem. I won’t enter into too much technical detail here, but no solution is entirely satisfying.

Rejecting (1) or (2) involves completing the theory with additional structure.

Bohmian mechanics [9] is the most conservative move. It rejects (1) by postulating point-like (and causally idle) particles in addition to the wave-function, just as in classical physics. It restores determinism but is obliged to postulate instantaneous interactions at a distance, and it is perhaps the least apt to reconcile relativity and quantum mechanics (relativity has a notoriously complicated relationship with non-locality and simultaneity).

Another possibility in this class of solutions is implemented by modal interpretations [9], initially proposed by van Fraassen, which complete the theory with a dynamical, privileged observable for which there is a determinate state at any time (and which should eventually coincide with the observable that is measured). Bohmian mechanics can actually be read as a modal interpretation where the privileged observable is static and is always the position. Modal interpretations also require a notion of absolute simultaneity, because the state of a non-local system is, by construction, determinate at a particular instant. For this reason, they are hard to reconcile with relativity theory.

Some theories reject (2) by postulating random physical projection processes, also called collapses. An early possibility, envisaged by Wigner and von Neumann, was a collapse induced by conscious observers [9], but this solution seems too anthropocentric and dualistic to be acceptable, and it has never been precisely articulated anyway. More concrete formulations are the GRW and CSL theories [9], which postulate spontaneous collapses (discrete and continuous, respectively). These theories make predictions distinct from standard quantum mechanics, but they come with parameters (the rate and strength of collapses) which are fine-tuned to stay compatible with current empirical confirmations of the theory. This is a bit ad hoc, obviously.

All these theories postulate additional mathematical structure for a non-empirical purpose: to save our realist presumptions. Arguably, this is a case of the "domestication of science by metaphysics" [10] that we would have liked to avoid. A more concrete price to pay lies in the difficulties of reconciling these additional structures with relativity theory and of formulating consistent quantum field versions of these theories. Needless to say, these theories are rarely considered by physicists. In any case, insofar as they postulate additional structure, they cannot be considered straightforward, literal readings of quantum mechanics: they are distinct theories.

Rejecting (3) seems prima facie absurd: how could empirical outcomes, the ones which serve as the very tests of the theory, not be determinate?

One solution, proposed by Everett, is to view them as relative to an observer [9]. The idea is that the wave-function of the universe evolves into relatively independent branches (accounted for by the theory of decoherence [11]), and that experimenters are only ever situated in one of the branches, from which empirical outcomes seem determinate. Each measurement outcome is instantiated in a separate branch. From this proposal come the many-minds and many-worlds interpretations [9].

The move is tempting: we could have a realist theory without the cost of additional structure, if only we abandoned certain common-sense intuitions and accepted that trillions of alternative, inaccessible worlds are instantiated each millisecond. Alternatively, we could view the universe as the interrelated set of all physically possible worlds and their complete evolutions, with each of our instant selves located somewhere in this huge block-universe. But the devil is in the details. The many-minds interpretation comes with a very strange ontology (infinitely many minds inhabiting every one of us at every instant, following diverging branches) plus a problematic commitment to dualism and epiphenomenalism. The many-worlds interpretation does not seem to make sense of probabilities: why talk of probabilities if all outcomes are equally real? We cannot invoke ignorance probabilities here: there is nothing relevant of which we are ignorant. We know that every outcome will occur. Moreover, why the Born rule? Shouldn't every outcome have an equal probability?

There are attempts to solve the issue by grounding probabilities in rationality constraints on epistemic agents (the proposal was made by physicist Deutsch and refined by philosopher Wallace [12]). Probabilities would be subjective and correspond to bets on future outcomes. The Born rule can be retrieved as the only rule which satisfies certain symmetry constraints on the assignment of probabilities to quantum states. However, it is not clear that these solutions succeed. Bets are based on past empirical results, but without a more robust conception of probabilities (something that could be linked to a statistical distribution in the multiverse) there is no reason to think that past empirical results are representative of the whole universe: if this theory is true, it seems that we are not rationally entitled to believe that quantum theory is true! [13] In any case, what are we really betting on if all our future selves equally exist? Why should we care? Rationality constraints supposedly have a normative aspect (they are not psychological laws: they tell us what we should do), but here, what is the point, exactly?

The many-worlds interpretation also requires decoherence, but perhaps the theory of decoherence itself depends on a more robust interpretation of probabilities [14]. It also presupposes a distinction between systems and their environment, but does the universe as a whole have an environment?

The problem of reference

Enough about the measurement problem. There is another challenge which realist theories face: the problem of reference.

Following the semantic proposition of scientific realism, there should be a correspondence between mathematical models and real entities. However, the wave function is not the kind of structure that can easily be mapped to real entities, as commonly understood. The problem, then, is with connecting this picture to our everyday experience.

Take the electromagnetic field of classical physics: it assigns vectors to every position in space-time. The object is not too difficult to represent: just imagine that a vector is some kind of property of the field at a specific location. But what kind of object is a wave-function? The wave-function, interpreted as a field, does not assign specific properties to space-time points: it lives in an abstract mathematical space of enormously many dimensions (called the configuration space). Traditionally, these mathematical dimensions are construed as the degrees of freedom of different particles. Fine, but that supposes that particles exist in addition to the wave-function: what if, following many-worlds or the GRW or CSL theories, we wish to view the wave-function as an autonomous object, as "all there is"? It is not easy, from this abstract representation of an object with enormously many degrees of freedom, to recover the "manifest image of the world": the familiar, 3+1 dimensional space-time, filled with ordinary objects, in which we perform the empirical tests of our theories. Some authors are ready to bite the bullet (for example Albert) and hope that our familiar space-time is somehow emergent from the configuration space, but many think that there is a problem, and that a physical theory should be able to tell us what exists in space-time [15].

A possible solution is to think of the wave-function as a “nomic” entity, akin to laws of nature, or dispositions, rather than a concrete object, and to supplement the theory with an additional structure which describes the bearers of these dispositions, or the followers of these laws, in ordinary space-time. Bohmian mechanics already has the particles for that purpose. Proposals for GRW include the peculiar space-time points where the collapses occur, aka “flashes,” or matter density fields [16]. Note that this additional structure is idle and serves no empirical purpose: aren’t we, again, trying to force scientific theories into the mold of our metaphysical prejudices?

Furthermore, thinking of wave-functions as laws hardly makes sense: laws of nature do not vary in space and time. But thinking of wave-functions as dispositions is difficult too, because the wave-function is a non-local object and these dispositions cannot be assigned to local bearers directly (as Esfeld and Egg observe in a forthcoming paper [17]). They would have to be assigned to "configurations of stuff" instead. In the end, we could be left with a huge abstract structure representing the disposition of the universe as a whole to evolve. Not very appealing.

In sum, in any of these theories the wave-function cannot be dispensed with, because it does all the predictive work, so to speak; but its ontological status, and the way it is connected to the ordinary objects of our experience, remain somewhat obscure.

Other interpretations

The framing of the discussion so far has been realist. The problem of reference directly stems from a realist commitment, that the structure of the theory should correspond to real entities, and implicit in the formulation of the measurement problem is that the wave-function describes a state (if not a complete state) which evolves with time.

Let us now say a word about a few anti-realist interpretations. Contemporary physicists do not agree on the correct interpretation of quantum mechanics. Some of them are realists and explicitly defend the many-worlds interpretation (for example Carroll [18]). Perhaps some have a non-explicit collapse interpretation in mind, and others have no interpretation at all (the above-mentioned "shut up and calculate" school), or stick to the vague Copenhagen interpretation [9] (roughly, realism with regard to classical objects and instrumentalism with regard to quantum states), or to the more elaborate consistent histories approach [9]. In any case, having a clearly articulated ontological interpretation is not necessary for all scientists. It seems that instrumentalism is perfectly fine, for all practical purposes. The question of realism is more a philosophical issue, although there have been a number of influential scientists, from Newton to Einstein, who held strong metaphysical views underlying their theoretical reflections. This is probably the case for many contemporary scientists too, but that is a subject for another time.

Of course, an immediate benefit of instrumentalism is that it trivially eschews the problems above. All that is required is that the theory works. However, there are more subtle ways of throwing light on the mysteries of quantum mechanics from an instrumentalist perspective.

Viewing quantum mechanics as a theory of information, or as a generalized probability theory [9] (which roughly amounts to revising classical logic!), has gained in popularity in recent decades, notably in the field of quantum computing. The most sophisticated proposals include quantum Bayesianism, or QBism [9]. This is clearly an anti-realist move (Bayesianism is a subjective theory of probabilities). On these views, the wave-function is epistemic: it describes our knowledge of reality. All that these theories say is that our inferences about the physical world must follow counterintuitive logical rules, the rules of quantum logic.

Another possibility is to adapt Everett's relative-state formulation with an anti-realist twist (and without the many worlds). This is the relational interpretation [9], proposed by physicist Rovelli, which holds that wave-functions do not describe objective states, but relations between physical observers (any physical system can count as one) and observed systems. There is no objective "view from nowhere." Other similar attempts relativize the wave-function to frames of reference. There are also perspectival modal interpretations in this vein, which attempt to resolve the problem of compatibility between relativity theory and standard modal interpretations. These kinds of views are indeed relativistic in spirit: one could say that they push relativity theory one step further, through a relativization of all physical states (not only of space-time coordinates) to physical observers.

Finally, the transactional interpretation [9] proposed by Cramer postulates that measurements are transactions between emitters and absorbers. A transaction involves the combination of a retarded wave (going forward in time) and an advanced wave (going backward in time, traditionally discarded as "unphysical" by physicists). The interpretation proposes a narrative in pseudo-time, in which retarded waves are offered by emitters to absorbers, which respond with advanced waves; an absorber is selected and a transaction occurs. The interpretation retrieves the Born rule in an elegant way from the formalism. I did not classify it as a realist interpretation because transactions are not really physical processes, they do not occur in space-time, and they are sometimes said to be some sort of perceptive relations (for example by Kastner [19]). Perhaps this interpretation, with its emphasis on relational aspects (the transactions), is not too far from the relational interpretations.

I am somewhat sympathetic to these proposals, in particular when they retain a realist component and do not force us to go back to hard-core idealism. But they too face challenges and work remains to be done to obtain fully consistent and metaphysically explicit theories. They also probably need to confront more general arguments that are part of the epistemological debate on scientific realism.

Conclusion

So what shall we conclude? It seems to me that debates on scientific realism in the epistemology of science should pay attention to the content of scientific theories. In the case of quantum mechanics, the problem is that there is no uncontentious literal interpretation of the theory. The closest candidate is actually instrumentalist in flavor: it tells us to apply a mathematical rule to calculate outcome probabilities from a model, which is not very realist-like. Arguably, all other interpretations (including the many-worlds interpretation, pace many of its defenders) are conjectures layered on top of the theory. Furthermore, all encounter difficulties: either we complete the theory with an ad hoc structure which plays no predictive role and threatens compatibility with relativity theory, or we face conceptual problems in the interpretation of probabilities (or we are forced to adopt a dubious many-minds ontology). And in any case, the ontological status of the wave-function remains rather obscure. No solution to date is entirely satisfying.

Concerning the epistemological debate we started with, suffice it to say that there is more than one possible metaphysical interpretation or theory, all compatible with the same empirical data, and none of them more natural or straightforward than the others [20]. This undermines scientific realism about quantum mechanics: which interpretation or theory should we be realist about? Perhaps future developments will convince everyone that one realist interpretation or another is the right one, but I don't find the prospects very good at the moment.

What about anti-realist interpretations, then? If there is no straightforward sense in which the bare content of quantum mechanics can be said to "correspond to" reality, shouldn't we amend the correspondence theory of truth accordingly, and adopt a more pragmatic stance toward scientific theories? I am personally inclined to think so, but admittedly, matters are not simple. At least some of the desiderata of scientific realism are quite sensible. The challenge is to formulate a position that does not fall prey to standard objections against instrumentalism (in particular the "no miracles" argument, but the semantic arguments as well), and that recovers the "manifest image of the world": the common-sense intuition that there are objective states at the macroscopic scale in a low-dimensional space-time. All this, if possible, without the vagueness of the Copenhagen interpretation. With these difficulties standing before us, it is no wonder that many authors prefer to accommodate one or another realist interpretation.

Let us remain optimistic, though: quantum mechanics is weird, and we probably won't be able to explain its weirdness away, which means that there is a lot of really exciting philosophical work to do!

_____

Quentin Ruyant is a PhD student in the philosophy of physics in Rennes, France. His thesis is on the potential implications of structural realism for the interpretation of quantum mechanics. He blogs at Philosophie des Sciences.

[1] Scientific Realism, SEP.

[2] Structural Realism, SEP.

[3] Entity realism, Wiki entry. See also Massimo’s recent post: On the Reality of Atoms and Subatomic Particles.

[4] On the Vienna circle and on logical empiricism.

[5] Popper criticized verificationism as early as 1934 (see: The Logic of Scientific Discovery). Quine’s “Two dogmas of empiricism” (1951) and Kuhn’s The Structure of Scientific Revolutions are among the most cited criticisms of logical empiricists’ positions. Kripke (in Naming and Necessity, 1980) and Putnam (“The Meaning of ‘Meaning,’” 1975) are often credited for their arguments against descriptivism, a semantic theory underlying logical empiricist’s positions.

[6] Bell’s Theorem, SEP.

[7] Aspect, A., Dalibard, J., and Roger, G. (1982), “Experimental test of Bell’s Inequalities using time-varying analyzers,” Physical Review Letters, 49:1804–1807.

[8] Maudlin, T. (1995), "Three Measurement Problems."

[9] Here are some resources for the many interpretations of quantum mechanics: Bohmian mechanics; modal interpretations; Wigner-von Neumann interpretation; collapse theories (GRW and CSL); Everett's relative state formulation; many worlds and many minds; Copenhagen interpretation; consistent histories approach; quantum Bayesianism; quantum logic and probabilities; relational interpretations; transactional interpretation.

[10] Ladyman, Ross, and Spurrett vehemently argued against this attitude in Every Thing Must Go.

[11] The Role of Decoherence in Quantum Mechanics, SEP entry.

[12] Quantum Probability and Decision Theory, Revisited, arxiv.

[13] Against the Empirical Viability of the Deutsch Wallace Approach to Quantum Mechanics, PhilScience Archive.

[14] Many Worlds: Decoherent or Incoherent?, PhilScience Archive.

[15] See for example Wave function ontology, PhilPapers.

[16] On the common structure of Bohmian mechanics and the Ghirardi–Rimini–Weber theory, PhilPapers.

[17] Primitive ontology and quantum state in the GRW matter density theory, PhilSci Archive.

[18] Why the Many-Worlds Formulation of Quantum Mechanics Is Probably Correct, Preposterous Universe.

[19] The Transactional Interpretation of Quantum Mechanics, IEET.

[20] Underdetermination of Scientific Theory, SEP entry.