First published Wed Apr 27, 2011; substantive revision Mon Jun 12, 2017

Debates about scientific realism are closely connected to almost everything else in the philosophy of science, for they concern the very nature of scientific knowledge. Scientific realism is a positive epistemic attitude toward the content of our best theories and models, recommending belief in both observable and unobservable aspects of the world described by the sciences. This epistemic attitude has important metaphysical and semantic dimensions, and these various commitments are contested by a number of rival epistemologies of science, known collectively as forms of scientific antirealism. This article explains what scientific realism is, outlines its main variants, considers the most common arguments for and against the position, and contrasts it with its most important antirealist counterparts.

1. What is Scientific Realism?

1.1 Epistemic Achievements versus Epistemic Aims

It is perhaps only a slight exaggeration to say that scientific realism is characterized differently by every author who discusses it, and this presents a challenge to anyone hoping to learn what it is. Fortunately, underlying the many idiosyncratic qualifications and variants of the position, there is a common core of ideas, typified by an epistemically positive attitude toward the outputs of scientific investigation, regarding both observable and unobservable aspects of the world. The distinction here between the observable and the unobservable reflects human sensory capabilities: the observable is that which can, under favorable conditions, be perceived using the unaided senses (for example, planets and platypuses); the unobservable is that which cannot be detected this way (for example, proteins and protons). Privileging vision here is merely a terminological convenience; this usage differs from scientific conceptions of observability, which generally extend to things that are detectable using instruments (Shapere 1982). The distinction itself has been problematized (Maxwell 1962; Churchland 1985; Musgrave 1985; Dicken & Lipton 2006) and defended (Muller 2004, 2005; cf. Turner 2007 regarding the distant past). If it is problematic, this is arguably a concern primarily for certain forms of antirealism, which adopt an epistemically positive attitude only with respect to the observable. It is not ultimately a concern for scientific realism, which does not discriminate epistemically between observables and unobservables per se.

Before considering the nuances of what scientific realism entails, it is useful to distinguish between two different kinds of definition in this context. Most commonly, the position is described in terms of the epistemic achievements constituted by scientific theories (and models—this qualification will be taken as given henceforth). On this approach, scientific realism is a position concerning the actual epistemic status of theories (or some components thereof), and this is described in a number of ways. For example, most people define scientific realism in terms of the truth or approximate truth of scientific theories or certain aspects of theories. Some define it in terms of the successful reference of theoretical terms to things in the world, both observable and unobservable. (A note about the literature: “theoretical term”, prior to the 1980s, was standardly used to denote terms for unobservables, but will be used here to refer to any scientific term, which is now the more common usage.) Others define scientific realism not in terms of truth or reference, but in terms of belief in the ontology of scientific theories. What all of these approaches have in common is a commitment to the idea that our best theories have a certain epistemic status: they yield knowledge of aspects of the world, including unobservable aspects. (For definitions along these lines, see Smart 1963; Boyd 1983; Devitt 1991; Kukla 1998; Niiniluoto 1999; Psillos 1999; and Chakravartty 2007a.)

Another way to think about scientific realism is in terms of the epistemic aims of scientific inquiry (van Fraassen 1980: 8; Lyons 2005). That is, some think of the position in terms of what science aims to do: the scientific realist holds that science aims to produce true descriptions of things in the world (or approximately true descriptions, or ones whose central terms successfully refer, and so on). If science aims at truth and scientific practice is at all successful, characterizing scientific realism in terms of aim may seem to entail some characterization in terms of achievement. But the implication is weak at best: defining scientific realism in terms of aiming at truth does not, strictly speaking, imply anything about the success of scientific practice in this regard. For this reason, some take the aspirational characterization of scientific realism to be too weak (Kitcher 1993: 150; Devitt 2005: n. 10; Chakravartty 2007b: 197; for skepticism about scientific aim-talk more generally, see Rowbottom 2014)—it is compatible with the sciences never actually achieving, and even with the impossibility of their achieving, their aim as conceived on this view of scientific realism. Most scientific realists commit to something more in terms of achievement, and this is assumed in what follows.

1.2 The Three Dimensions of Realist Commitment

The description of scientific realism as a positive epistemic attitude toward theories, including parts putatively concerning the unobservable, is a kind of shorthand for more precise commitments (Kukla 1998: ch. 1; Niiniluoto 1999: ch. 1; Psillos 1999: Introduction; Chakravartty 2007a: ch. 1). Traditionally, realism more generally is associated with any position that endorses belief in the reality of something. Thus, one might be a realist about one’s perceptions of tables and chairs (sense datum realism), or about tables and chairs themselves (external world realism), or about mathematical entities such as numbers and sets (mathematical realism), and so on. Scientific realism is a realism about whatever is described by our best scientific theories—from this point on, “realism” here denotes scientific realism. But what, more precisely, is that? In order to be clear about what realism in the context of the sciences amounts to, and to differentiate it from some important antirealist alternatives, it is useful to understand it in terms of three dimensions: a metaphysical (or ontological) dimension; a semantic dimension; and an epistemological dimension.

Metaphysically, realism is committed to the mind-independent existence of the world investigated by the sciences. This idea is best clarified in contrast with positions that deny it. For instance, it is denied by any position that falls under the traditional heading of “idealism”, including some forms of phenomenology, according to which there is no world external to and thus independent of the mind. This sort of idealism, however, though historically important, is rarely encountered in contemporary philosophy of science. More common rejections of mind-independence stem from neo-Kantian views of the nature of scientific knowledge, which deny that the world of our experience is mind-independent, even if (in some cases) these positions accept that the world in itself does not depend on the existence of minds. The contention here is that the world investigated by the sciences—as distinct from “the world in itself” (assuming this to be a coherent distinction)—is in some sense dependent on the ideas one brings to scientific investigation, which may include, for example, theoretical assumptions and perceptual training; this proposal is detailed further in section 4. It is important to note in this connection that human convention in scientific taxonomy is compatible with mind-independence. For example, though Psillos (1999: xix) ties realism to a “mind-independent natural-kind structure” of the world, Chakravartty (2007a: ch. 6) argues that mind-independent properties are often conventionally grouped into kinds (see also Boyd 1999; Humphreys 2004: 22–25, 35–36, and cf. the “promiscuous realism” of Dupré 1993).

Semantically, realism is committed to a literal interpretation of scientific claims about the world. In common parlance, realists take theoretical statements at “face value”. According to realism, claims about scientific objects, events, processes, properties, and relations (henceforth, “scientific entity” will serve as a generic term for all of these), whether they be observable or unobservable, should be construed literally as having truth values, whether true or false. This semantic commitment contrasts primarily with those of certain “instrumentalist” epistemologies of science, which interpret descriptions of unobservables simply as instruments for the prediction of observable phenomena, or for systematizing observation reports. Traditionally, instrumentalism holds that claims about unobservable things have no literal meaning at all (though the term is often used more liberally in connection with some antirealist positions today). Some antirealists contend that claims involving unobservables should not be interpreted literally, but as elliptical for corresponding claims about observables. These positions are described in more detail in section 4.

Epistemologically, realism is committed to the idea that theoretical claims (interpreted literally as describing a mind-independent reality) constitute knowledge of the world. This contrasts with skeptical positions which, even if they grant the metaphysical and semantic dimensions of realism, doubt that scientific investigation is epistemologically powerful enough to yield such knowledge, or, as in the case of some antirealist positions, insist that it is only powerful enough to yield knowledge regarding observables. The epistemological dimension of realism, though shared by realists generally, is sometimes described more specifically in contrary ways. For example, while many realists subscribe to the truth (or approximate truth) of theories understood in terms of some version of the correspondence theory of truth (as suggested by Fine 1986a and contested by Ellis 1988), some prefer a truthmaker account (Asay 2013) or a deflationary account of truth (Giere 1988: 82; Devitt 2005; Leeds 2007). Though most realists marry their position to the successful reference of theoretical terms, including those for unobservable entities (Boyd 1983, and as described by Laudan 1981), some deny that this is a requirement (Cruse & Papineau 2002; Papineau 2010). Amidst these differences, however, a general recipe for realism is widely shared: our best scientific theories give true or approximately true descriptions of observable and unobservable aspects of a mind-independent world.

1.3 Qualifications and Variations

The general recipe for realism just described is accurate so far as it goes, but still falls short of the degree of precision offered by most realists. The two main sources of imprecision thus far are found in the general recipe itself, which makes reference to the idea of “our best scientific theories” and the notion of “approximate truth”. The motivation for these qualifications is perhaps clear. If one is to defend a positive epistemic attitude regarding scientific theories, it is presumably sensible to do so not merely in connection with any theory (especially when one considers that, over the long history of the sciences up to the present, some theories were not or are not especially successful), but rather with respect to theories (or aspects of theories, as we will see momentarily) that would appear, prima facie, to merit such a defense, viz. our best theories (or aspects thereof). And it is widely held, not least by realists, that even many of our best scientific theories are likely false, strictly speaking, hence the importance of the notion that theories may be “close to” the truth (that is, approximately true) even though they are false. The challenge of making these qualifications more precise, however, is significant, and has generated much discussion.

Consider first the issue of how best to identify those theories that realists should be realists about. A general disclaimer is in order here: realists are generally fallibilists, holding that realism is appropriate in connection with our best theories even though they likely cannot be proven with absolute certainty; some of our best theories could conceivably turn out to be significantly mistaken, but realists maintain that, granting this possibility, there are grounds for realism nonetheless. These grounds are bolstered by restricting the domain of theories suitable for realist commitment to those that are sufficiently mature and non-ad hoc (Worrall 1989: 153–154; Psillos 1999: 105–108). Maturity may be thought of in terms of the well established nature of the field in which a theory is developed, or the duration of time a theory has survived, or its survival in the face of significant testing; and the condition of being non-ad hoc is intended to guard against theories that are “cooked up” (that is, posited merely) in order to account for some known observations in the absence of rigorous testing. On these construals, however, both the notion of maturity and the notion of being non-ad hoc are admittedly vague. One strategy for adding precision here is to attribute these qualities to theories that make successful, novel predictions. The ability of a theory to do this, it is commonly argued, marks it as genuinely empirically successful, and the sort of theory to which realists should be more inclined to commit (Musgrave 1988; Lipton 1990; Leplin 1997; White 2003; Hitchcock & Sober 2004; Barnes 2008; for a dissenting view, see Harker 2008; cf. Alai 2014).

The idea that with the development of the sciences over time, theories are converging on (“moving in the direction of”, “getting closer to”) the truth, is a common theme in realist discussions of theory change (for example, Hardin & Rosenberg 1982 and Putnam 1982). Talk of approximate truth is often invoked in this context and has produced a significant amount of often highly technical work, conceptualizing the approximation of truth as something that can be quantified, such that judgments of relative approximate truth (of one proposition or theory in comparison to another) can be formalized and given precise definitions. This work provides one possible means by which to consider the convergentist claim that theories can be viewed as increasingly approximately true over time, and this possibility is further considered in section 3.4.
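To give a flavor of how such formalizations can work, consider a toy, distance-based measure of truthlikeness for quantitative claims. This particular measure is a simplified illustration only, not a reconstruction of any specific proposal in the literature (similarity-based accounts in the style of Niiniluoto 1999 are considerably more general). Let $t$ be the true value of some quantity and $h$ the value a theory ascribes to it:

```latex
\[
  \mathrm{Tr}(h) \;=\; \frac{1}{1 + \lvert h - t \rvert}
\]
```

so that $\mathrm{Tr}(h) = 1$ just in case $h = t$, and truthlikeness decreases as $h$ departs from $t$. On such a measure, comparative judgments of approximate truth reduce to comparisons of distance: $\mathrm{Tr}(h_1) > \mathrm{Tr}(h_2)$ if and only if $\lvert h_1 - t \rvert < \lvert h_2 - t \rvert$, which captures the convergentist thought that one strictly false theory can nonetheless be closer to the truth than another.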

A final and especially important qualification to the general recipe for realism described above comes in the form of a number of variations. These species of generic realism can be viewed as falling into three families or camps: explanationist realism; entity realism; and structural realism. There is a shared principle of speciation here, in that all three approaches are attempts to identify more specifically the component parts of scientific theories that are most worthy of epistemic commitment. Explanationism recommends realist commitment with respect to those parts of our best theories—regarding (unobservable) entities, laws, etc.—that are in some sense indispensable or otherwise important to explaining their empirical success—for instance, components of theories that are crucial in order to derive successful, novel predictions. Entity realism is the view that under conditions in which one can demonstrate impressive causal knowledge of a putative (unobservable) entity, such as knowledge that facilitates the manipulation of the entity and its use so as to intervene in other phenomena, one has good reason for realism regarding it. Structural realism is the view that one should be a realist, not in connection with descriptions of the natures of things (like unobservable entities) found in our best theories, but rather with respect to their structure. All three of these positions adopt a strategy of selectivity, and this and the positions themselves are considered further in section 2.3.

Arguably, the fact that realists have endeavored to qualify their view and propose variations of it, as described above, suggests a collective moral: though some (especially earlier) discussions of realism give the impression that it is an attitude pertaining to science across the board, this is likely too coarse a way to understand the position. Adopting a realist attitude toward the content of scientific theories does not entail that one believes all such content, but rather that one believes those aspects, including unobservable aspects, regarding which one takes such belief to be warranted, thus indicating a realism about those things more specifically. In a similar spirit, some argue for another sort of specificity, suggesting that the best (or only good) arguments for realism are formulated by concentrating on the details of specific cases—the so-called “first-order evidence” of scientific investigation itself. For example, leveraging a case study of Jean Perrin’s argument in 1908 for the reality of unobservable molecules, Achinstein (2002: 491–495) contends that even taking certain realist-friendly assumptions for granted, a compelling argument for realism about any given entity can only be given in terms of the empirical evidence concerning that entity, not by means of more general philosophical arguments. (For similar views, see Magnus & Callender 2004: 333–336 and Saatsi 2010; for skepticism about this, see Dicken 2013 and Park 2016.)

2. Considerations in Favor of Scientific Realism (and Responses)

2.1 The Miracle Argument

The most powerful intuition motivating realism is an old idea, commonly referred to in recent discussions as the “miracle argument” or “no miracles argument”, after Putnam’s (1975a: 73) claim that realism “is the only philosophy that doesn’t make the success of science a miracle”. The argument begins with the widely accepted premise that our best theories are extraordinarily successful: they facilitate empirical predictions, retrodictions, and explanations of the subject matters of scientific investigation, often marked by astounding accuracy and intricate causal manipulations of the relevant phenomena. What explains this success? One explanation, favored by realists, is that our best theories are true (or approximately true, or correctly describe a mind-independent world of entities, laws, etc.). Indeed, if these theories were far from the truth, so the argument goes, the fact that they are so successful would be miraculous. And given the choice between a straightforward explanation of success and a miraculous explanation, clearly one should prefer the non-miraculous explanation, viz. that our best theories are approximately true (etc.). (For elaborations of the miracle argument, see J. Brown 1982; Boyd 1989; Lipton 1994; Psillos 1999: ch. 4; Barnes 2002; Lyons 2003; Busch 2008; Frost-Arnold 2010; and Dellsén 2016.)

Though intuitively powerful, the miracle argument is contestable in a number of ways. One skeptical response is to question the very need for an explanation of the success of science in the first place. For example, van Fraassen (1980: 40; see also Wray 2007, 2010) suggests that successful theories are analogous to well-adapted organisms—since only successful theories (organisms) survive, it is hardly surprising that our theories are successful, and therefore, there is no demand here for an explanation of success. It is not entirely clear, however, whether the evolutionary analogy is sufficient to dissolve the intuition behind the miracle argument. One might wonder, for instance, why a particular theory is successful (as opposed to why theories in general are successful), and the explanation sought may turn on specific features of the theory itself, including its descriptions of unobservables. Whether such explanations need be true, though, is a matter of debate. While most theories of explanation require that the explanans be true, pragmatic theories of explanation do not (van Fraassen 1980: ch. 5). More generally, any epistemology of science that does not accept one or more of the three dimensions of realism—commitment to a mind-independent world, literal semantics, and epistemic access to unobservables—will thereby present a putative reason for resisting the miracle argument. These positions are considered in section 4.

Some authors contend that the miracle argument is, in fact, an instance of fallacious reasoning called the base rate fallacy (Howson 2000: ch. 3; Lipton [1991] 2004: 196–198; Magnus & Callender 2004). Consider the following illustration. There is a test for a disease for which the rate of false negatives (negative results in cases where the disease is present) is zero, and the rate of false positives (positive results in cases where the disease is absent) is one in ten (that is, disease-free individuals test positive 10% of the time). If one tests positive, what are the chances that one has the disease? It would be a mistake to conclude that, based on the rate of false positives, the probability is 90%, for the actual probability depends on some further, crucial information: the base rate of the disease in the population (the proportion of people having it). The lower the incidence of the disease at large, the lower the probability that a positive result signals the presence of the disease.
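The arithmetic of the illustration can be made explicit with Bayes' theorem. Below is a minimal sketch in Python; the error rates are those given above (no false negatives, 10% false positives), while the base rates are hypothetical values chosen purely for illustration:

```python
def posterior(base_rate, sensitivity=1.0, false_positive_rate=0.1):
    """P(disease | positive test), computed by Bayes' theorem."""
    true_positives = sensitivity * base_rate
    false_positives = false_positive_rate * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

# The lower the base rate of the disease in the population,
# the less a positive result signals its presence.
for rate in (0.5, 0.1, 0.01):
    print(f"base rate {rate:6.1%}: P(disease | positive) = {posterior(rate):.1%}")
```

With a base rate of 50%, a positive test makes the disease roughly 91% probable; with a base rate of 1%, the very same positive test leaves the probability near 9%. The analogy that follows substitutes approximate truth for the disease and empirical success for the positive test result.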

By analogy, using the success of a scientific theory as an indicator of its approximate truth (assuming a low rate of false positives—cases in which theories far from the truth are nonetheless successful) is arguably, likewise, an instance of the base rate fallacy. The success of a theory does not by itself suggest that it is likely approximately true, and since there is no independent way of knowing the base rate of approximately true theories, the chances of it being approximately true cannot be assessed. Worrall (unpublished, Other Internet Resources) maintains that these contentions are ineffective against the miracle argument because they crucially depend on a misleading formalization of it in terms of probabilities (cf. Menke 2014; for a criticism of the miracle argument based on a different probabilistic framing in terms of likelihoods, see Sober 2015: 912–915).

2.2 Corroboration

One motivation for realism in connection with at least some unobservables comes by way of “corroboration”. If an unobservable entity is putatively capable of being detected by means of a scientific instrument or experiment, this may well form the basis of a defeasible argument for realism concerning it. If, however, that same entity is putatively capable of being detected by not just one, but rather two or more different means of detection—forms of detection that are distinct with respect to the apparatuses they employ and the causal mechanisms and processes they are described as exploiting in the course of detection—this may serve as the basis of a significantly enhanced argument for realism (cf. Eronen 2015). Hacking (1983: 201; see also Hacking 1985: 146–147) gives the example of dense bodies in red blood platelets that can be detected using different forms of microscopy. Different techniques of detection, such as those employed in light microscopy and transmission electron microscopy, make use of very different sorts of physical processes, and these operations are described theoretically in terms of correspondingly different causal mechanisms. (For similar examples, see Salmon 1984: 217–219 and Franklin 1986: 166–168, 1990: 103–115.)

The argument from corroboration thus runs as follows. The fact that one and the same thing is apparently revealed by distinct modes of detection suggests that it would be an extraordinary coincidence if the supposed target of these revelations did not, in fact, exist. The greater the extent to which detections can be corroborated by different means, the stronger the argument for realism regarding their putative target. The argument here can be viewed as resting on an intuition similar to that underlying the miracle argument: realism based on apparent detection may be only so compelling, but if different, theoretically independent means of detection produce the same result, suggesting the existence of one and the same unobservable, then realism provides a good explanation of the consilient evidence, in contrast with the arguably miraculous state of affairs in which theoretically independent techniques produce the same result in the absence of a shared target. The idea that techniques of (putative) detection are often constructed or calibrated precisely with the intention of reproducing the outputs of others, however, may stand against the argument from corroboration. Additionally, van Fraassen (1985: 297–298) argues that scientific explanations of evidential consilience may be accepted without the explanations themselves being understood as true, which once again raises questions about the nature of scientific explanation.

2.3 Selective Optimism/Skepticism

In section 1.3, the notion of selectivity was introduced as a general strategy for maximizing the plausibility of realism, particularly with respect to scientific unobservables. This strategy is adopted in part to square realism with the widely accepted view that most if not all of even our best theories are false, strictly speaking. If, nevertheless, there are aspects of these theories that are true (or close to the truth) and one is able to identify these aspects, one might then plausibly cast one’s realism in terms of an epistemically positive attitude toward those aspects of theories that are most worthy of epistemic commitment. The most important variants of realism to implement this strategy are explanationism, entity realism, and structural realism. (For related work pertaining to the notion of selectivity more generally, see R. Miller 1987: chs. 8–10; Fine 1991; Jones 1991; Musgrave 1992; Harker 2013; and Peters 2014.)

Explanationists hold that a realist attitude can be justified in connection with unobservables described by our best theories precisely when appealing to those unobservables is indispensable or otherwise important to explaining why these theories are successful. For example, if one takes successful novel prediction to be a hallmark of theories worthy of realist commitment generally, then explanationism suggests that, more specifically, those aspects of the theory that are essential to the derivation of such novel predictions are the parts of the theory most worthy of realist commitment. In this vein, Kitcher (1993: 140–149) draws a distinction between the “presuppositional posits” or “idle parts” of theories, and the “working posits” to which realists should commit. Psillos (1999: chs. 5–6) argues that realism can be defended by demonstrating that the success of past theories did not depend on their false components:

it is enough to show that the theoretical laws and mechanisms which generated the successes of past theories have been retained in our current scientific image. (1999: 108)

The immediate challenge to explanationism is to furnish a method with which to identify precisely those aspects of theories that are required for their success, in a way that is objective or principled enough to withstand the charge that realists are merely rationalizing post hoc, identifying the explanatorily crucial parts of past theories with aspects that have been retained in our current best theories. (For discussions, see Chang 2003; Stanford 2003a,b; Elsamahi 2005; Saatsi 2005a; Lyons 2006; Harker 2010; Cordero 2011; Votsis 2011; and Vickers 2013.)

Another version of realism that adopts the strategy of selectivity is entity realism. On this view, realist commitment is based on a putative ability to causally manipulate unobservable entities (like electrons or gene sequences) to a high degree—for example, to such a degree that one is able to intervene in other phenomena so as to bring about certain effects. The greater the ability to exploit one’s apparent causal knowledge of something so as to bring about (often extraordinarily precise) outcomes, the greater the warrant for belief (Hacking 1982, 1983; cf. B. Miller 2016; Cartwright 1983: ch. 5; Giere 1988: ch. 5; on causal warrant more generally, see Egg 2012). Belief in scientific unobservables thus described is here partnered with a degree of skepticism about scientific theories more generally, and this raises questions about whether believing in entities while withholding belief with respect to the theories that describe them is a coherent or practicable combination (Morrison 1990; Elsamahi 1994; Resnik 1994; Chakravartty 1998; Clarke 2001; Massimi 2004). Entity realism is especially compatible with and nicely facilitated by the causal theory of reference associated with Kripke (1980) and Putnam ([1975b] 1985: ch. 12), according to which one can successfully refer to an entity despite significant or even radical changes in theoretical descriptions of its properties; this allows for stability of epistemic commitment when theories change over time. Whether the causal theory of reference can be applied successfully in this context, however, is a matter of dispute (see Hardin & Rosenberg 1982; Laudan 1984; Psillos 1999: ch. 12; McLeish 2005, 2006; Chakravartty 2007a: 52–56; and Landig 2014; see Weber 2014 for a case study on genes).

Structural realism is another view promoting selectivity, but in this case it is the natures of unobservable entities that are viewed skeptically, with realism reserved for the structure of the unobservable realm, as represented by certain relations described by our best theories. All of the many versions of this position fall into one of two camps: the first emphasizes an epistemic distinction between notions of structure and nature; the second emphasizes an ontological thesis. The epistemic view holds that our best theories likely do not correctly describe the natures of unobservable entities, but do successfully describe certain relations between them. The ontic view suggests that the reason realists should aspire only to knowledge of structure is that the traditional concept of entities that stand in relations is metaphysically problematic—there are, in fact, no such things, or if there are such things, they are in some sense emergent from or dependent on their relations. One challenge facing the epistemic version is that of articulating a concept of structure that makes knowledge of it effectively distinct from that of the natures of entities. The ontological version faces the challenge of clarifying the relevant notions of emergence and/or dependence. (On epistemic structural realism, see Worrall 1989; Psillos 1995, 2006; Votsis 2003; and Morganti 2004; regarding ontic structural realism, see French 1998, 2006, 2014; Ladyman 1998; Psillos 2001, 2006; Ladyman & Ross 2007; and Chakravartty 2007a: ch. 3. See Frigg & Votsis 2011 for an extensive critical survey).

3. Considerations Against Scientific Realism (and Responses)

3.1 The Underdetermination of Theory by Data

Lined up in opposition to the various motivations for realism presented in section 2 are a number of important antirealist arguments, all of which have pressed realists either to attempt their refutation, or to modify their realism accordingly. One of these challenges, the underdetermination of theory by data, has a storied history in twentieth-century philosophy more generally, and is often traced to the work of Duhem ([1906] 1954: ch. 6; this is not an argument for underdetermination as such, but is regarded as sowing the seeds). In remarks concerning the confirmation of scientific hypotheses (in physics, which he contrasted with chemistry and physiology), Duhem noted that a hypothesis cannot be used to derive testable predictions in isolation. To derive predictions one also requires “auxiliary” assumptions, such as background theories, hypotheses about instruments and measurements, etc. If subsequent observation and experiment produce data that conflict with those predicted, one might think that this reflects badly on the hypothesis under test, but Duhem pointed out that given all of the assumptions required to derive predictions, it is no simple matter to identify where the error lies. Different amendments to one’s overall set of beliefs regarding hypotheses and theories will be consistent with the data. A similar result is commonly associated with the later “confirmational holism” of Quine (1953), according to which experience (including, of course, that associated with scientific testing) does not confirm or disconfirm individual beliefs per se, but rather the set of one’s beliefs taken as a whole. This sort of contention is now commonly referred to as the “Duhem-Quine thesis” (Quine 1975; see Ben-Menahem 2006 for a historical introduction).

How then does this give rise to underdetermination, a presumptive concern for realism? The argument from underdetermination proceeds as follows: let us call the relevant, overall sets of scientific beliefs “theories”; different, conflicting theories are consistent with the data; the data exhaust the evidence for belief; therefore, there is no evidential reason to believe one of these theories as opposed to another. Given that the theories differ precisely in what they say about the unobservable (their observable consequences—the data—are all shared), a challenge to realism emerges: the choice of which theory to believe is underdetermined by the data. In contemporary discussions, the challenge is usually presented using slightly different terminology. Every theory, it is said, has empirically equivalent rivals—that is, rivals that agree with respect to the observable, but differ with respect to the unobservable. This then serves as the basis of a skeptical argument regarding the truth of any particular theory the realist may wish to endorse. Various forms of antirealism then suggest that hypotheses and theories involving unobservables are endorsed, not merely on the basis of evidence that may be relevant to their truth, but also on the basis of other factors that are not indicative of truth as such (see sections 3.2 and 4.2–4.4). (For recent explications, see van Fraassen 1980: ch. 3; Earman 1993; Kukla 1998: chs. 5–6; and Stanford 2001.)

The argument from underdetermination is contested in a number of ways. One might, for example, distinguish between underdetermination in practice (or at a time) and underdetermination in principle. In the former case, there is underdetermination only because the data that would support one theory or hypothesis at the expense of another is unavailable, pending foreseeable developments in experimental technique or instrumentation. Here, realism is arguably consistent with a “wait and see” attitude, though if the prospect of future discriminating evidence is poor, a commitment to future realism may be questioned thereby. In any case, most proponents of underdetermination insist on the idea of underdetermination in principle: the idea that there are always (plausible) empirically equivalent rivals no matter what evidence may come to light. In response, some argue that the principled worry cannot be established, since what counts as data is apt to change over time with the development of new techniques and instruments, and with changes in scientific background knowledge, which alter the auxiliary assumptions required to derive observable predictions (Laudan & Leplin 1991). Such arguments may rest, however, on a different conception of observation than that assumed by many antirealists (defined above, in terms of human sensory capacities). (For other responses, see Okasha 2002; van Dyck 2007; Busch 2009; and Worrall 2011.)

Stanford (2006, 2015) proposes a historicized version of the argument from underdetermination, suggesting that the history of science reveals a recurring “problem of unconceived alternatives”: typically, at any given time, there are theories that do not occur to scientists but which are just as well confirmed by the available evidence as those that are, in fact, accepted; furthermore, over time, such unconceived theories often supplant the theories adopted by historical actors as the relevant science develops. (For discussions and evaluations of this challenge, see Chakravartty 2008; Godfrey-Smith 2008; Magnus 2010; Lyons 2013; Mizrahi 2015: 139–146; and Egg 2016; cf. Wray 2008 and Khalifa 2010 on the related notion of “underconsideration”, as described by Lipton 1993, [1991] 2004: 151–163.)

3.2 Skepticism about Inference to the Best Explanation

One especially important reaction to concerns about the alleged underdetermination of theory by data gives rise to another leading antirealist argument. This reaction is to reject one of the key premises of the argument from underdetermination, viz. that evidence for belief in a theory is exhausted by the empirical data. Many realists contend that other considerations—most prominently, explanatory considerations—play an evidential role in scientific inference. If this is so, then even if one were to grant the idea that all theories have empirically equivalent rivals, this would not entail underdetermination, for the explanatory superiority of one in particular may determine a choice (Laudan 1990; Day & Botterill 2008). This is a specific exemplification of a form of reasoning by which “we infer what would, if true, provide the best explanation of [the] evidence” (Lipton [1991] 2004: 1). To put a realist-sounding spin on it:

one infers, from the premise that a given hypothesis would provide a “better” explanation for the evidence than would any other hypothesis, to the conclusion that the given hypothesis is true. (Harman 1965: 89)

Inference to the best explanation (as per Lipton’s formulation) seems ubiquitous in scientific practice. The question of whether it can be expected to yield knowledge of the sort suggested by realism (as per Harman’s formulation) is, however, a matter of dispute.

Two difficulties are immediately apparent regarding the realist aspiration to infer truth (approximate truth, existence of entities, etc.) from hypotheses or theories that are judged best on explanatory grounds. The first concerns the grounds themselves. In order to judge that one theory furnishes a better explanation of some phenomenon than another, one must employ some criterion or criteria on the basis of which the judgment is made. Many criteria have been proposed: simplicity (whether of mathematical description or in terms of the number or nature of the entities involved); consistency and coherence (both internally, and externally with respect to other theories and background knowledge); scope and unity (pertaining to the domain of phenomena explained); and so on. One challenge here concerns whether virtues such as these can be defined precisely enough to permit relative rankings of explanatory goodness. Another concerns the multiple meanings associated with some virtues (consider, for example, mathematical versus ontological simplicity). A third concerns the possibility that such virtues may not all favor any one theory in particular. Finally, there is the question of whether these virtues should be considered evidential or epistemic, as opposed to merely pragmatic. What reason is there to think, for instance, that simplicity is an indicator of truth? Thus, the ability to rank theories with respect to their likelihood of being true may be questioned.

A second difficulty facing inference to the best explanation concerns the pools of theories regarding which judgments of relative explanatory efficacy are made. Even if scientists are reliable rankers of theories with respect to truth, this will not lead to belief in a true theory (in some domain) unless that theory in particular happens to be among those considered. Otherwise, as van Fraassen (1989: 143) notes, one may simply end up with “the best of a bad lot”. Given the widespread view, even among realists, that many and perhaps most of our best theories are false, strictly speaking, this concern may seem especially pressing. However, in just the way that the realist strategy of selectivity (see section 2.3) may offer responses to the question of what it could mean for a theory to be close to the truth without being true simpliciter, this same strategy may offer the beginnings of a response here. That is to say, the best theory of a bad lot may nonetheless describe unobservable aspects of the world in such a way as to meet the standards of variants of realism including explanationism, entity realism, and structural realism. (For a book-length treatment of inference to the best explanation, see Lipton [1991] 2004; for defenses, see Lipton 1993; Day & Kincaid 1994; and Psillos 1996, 2009: part III; for critiques, see van Fraassen 1989: chs. 6–7; Ladyman, Douven, Horsten, & van Fraassen 1997; Wray 2008; and Khalifa 2010.)

3.3 The Pessimistic Induction

Worries about underdetermination and inference to the best explanation are generally conceptual in nature, but the so-called pessimistic induction (also called the “pessimistic meta-induction”, because it is an induction concerning the “ground level” inductive inferences that generate scientific theories and law statements) is intended as an argument from empirical premises. If one considers the history of scientific theories in any given discipline, what one typically finds is a regular turnover of older theories in favor of newer ones, as scientific knowledge develops. From the point of view of the present, most past theories must be considered false; indeed, this will be true from the point of view of most times. Therefore, by enumerative induction (that is, generalizing from these cases), surely theories at any given time will ultimately be replaced and regarded as false from some future perspective. Thus, current theories are also false. The general idea of the pessimistic induction has a rich pedigree. Though neither endorses the argument, Poincaré ([1905] 1952: 160), for instance, describes the seeming “bankruptcy of science” given the apparently “ephemeral nature” of scientific theories, which one finds “abandoned one after another”, and Putnam (1978: 22–25) describes the challenge in terms of the failure of reference of terms for unobservables, with the consequence that theories incorporating them cannot be said to be true. (For a summary of different formulations, see Wray 2015.)

Contemporary discussion commonly focuses on Laudan’s (1981) argument to the effect that the history of science furnishes vast evidence of empirically successful theories that were later rejected; from subsequent perspectives, their unobservable terms were judged not to refer and thus, they cannot be regarded as true or even approximately true. (If one prefers to define realism in terms of scientific ontology rather than reference and truth, one may rephrase the worry in terms of the mistaken ontologies of past theories from later perspectives.) Responses to this argument generally take one of two forms, the first stemming from the qualifications to realism outlined in section 1.3, and the second from the forms of realist selectivity outlined in section 2.3—both can be understood as attempts to restrict the inductive basis of the argument in such a way as to foil the pessimistic conclusion. For example, one might contend that if only sufficiently mature and non-ad hoc theories are considered, the number whose central terms did not refer and/or that cannot be regarded as approximately true is dramatically reduced (see references, section 1.3). Or, the realist might grant that the history of science presents a record of significant referential discontinuity, but contend that, nevertheless, it also presents a record of impressive continuity regarding what is properly endorsed by realism, as recommended by explanationists, entity realists, or structural realists (see references, section 2.3). (For other responses, see Leplin 1981; McAllister 1993; Chakravartty 2007a: ch. 2; Doppelt 2007; Nola 2008; Roush 2010, 2015; and Fahrbach 2011. Hardin & Rosenberg 1982; Cruse & Papineau 2002; and Papineau 2010 explore the idea that reference is irrelevant to approximate truth).

In just the way that some authors suggest that the miracle argument is an instance of fallacious reasoning—the base rate fallacy (see section 2.1)—some suggest that the pessimistic induction is likewise flawed (Lewis 2001; Lange 2002; Magnus & Callender 2004). The argument is analogous: the putative failure of reference on the part of past successful theories, or their putative lack of approximate truth, cannot be used to derive a conclusion regarding the chances that our current best theories do not refer to unobservables, or that they are not approximately true, unless one knows the base rate of non-referring or non-approximately true theories in the relevant pools. And since one cannot know this independently, the pessimistic induction is fallacious. Again, paralleling a response to the base rate objection in the case of the miracle argument, one might argue that to formalize the argument in terms of probabilities, as is required in order to invoke the base rate fallacy, is to miss the more fundamental point underlying the pessimistic induction (Saatsi 2005b). One might read the argument simply as cutting a supposed link between the empirical success of scientific theories and successful reference or approximate truth, as opposed to relying on an inductive inference per se. If even a few examples from the history of science demonstrate that theories can be empirically successful and yet fail to refer to the central unobservables they invoke, or fail to be what realists would regard as approximately true, this constitutes a prima facie challenge to the notion that only realism can explain the success of science.
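The role of the base rate in this objection can be made explicit with a schematic application of Bayes’ theorem (an illustrative reconstruction, not a formula drawn from the authors cited; here T stands for the hypothesis that a given theory is approximately true, and S for the evidence of its empirical success):

```latex
P(T \mid S) \;=\; \frac{P(S \mid T)\,P(T)}{P(S \mid T)\,P(T) \;+\; P(S \mid \neg T)\,P(\neg T)}
```

However the historical record may bear on P(S ∣ ¬T), the probability that a theory is approximately true given its success cannot be computed without the prior P(T), the base rate of approximately true theories in the relevant pool, and it is precisely this quantity, the objection runs, that we have no independent means of estimating.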

3.4 Skepticism about Approximate Truth

The regular appeal to the notion of approximate truth by realists has several motivations. The widespread use of abstraction (that is, incorporating some but not all of the relevant parameters into scientific descriptions) and idealization (distorting the natures of certain parameters) suggests that even many of our best theories and models are not strictly correct. The common realist contention that theories can be viewed as gradually converging on the truth as scientific inquiry advances suggests that such progress is amenable to assessment or measurement in some way, if only in principle. And even for realists who are not convergentists as such, the importance of cashing out the metaphor of theories being close to the truth is pressing in the face of antirealist assertions to the effect that the metaphor is empty. The challenge to make good on the metaphor and explicate, in precise terms, what approximate truth could be, is one source of skepticism about realism. Two broad strategies have emerged in response to this challenge: attempts to quantify approximate truth by formally defining the concept and the related notion of relative approximate truth; and attempts to explicate the concept informally.

The formal route was inaugurated by Popper (1972: 231–236), who defined relative orderings of “verisimilitude” (literally, “likeness to truth”) between theories in a given domain over time by means of a comparison of their true and false consequences. D. Miller (1974) and Tichý (1974) proved that there is a technical problem with this account, however, yielding the consequence that in order for theory A to have greater verisimilitude than theory B, A must be true simpliciter, which leaves the realist desideratum of explaining how strictly false theories can differ with respect to approximate truth unsatisfied (see also Oddie 1986a). Another formal account is the possible worlds approach (also called the “similarity” approach), according to which the truth conditions of a theory are identified with the set of possible worlds in which it is true, and “truth-likeness” is calculated by means of a function that measures the average or some other mathematical “distance” between the actual world and the worlds in that set, thereby facilitating orderings of theories with respect to truth-likeness (Tichý 1976, 1978; Oddie 1986b; Niiniluoto 1987, 1998; for critiques, see D. Miller 1976 and Aronson 1990). One last attempt to formalize approximate truth is the type hierarchies approach, which analyzes truth-likeness in terms of similarity relationships between nodes in tree-structured graphs of types and subtypes representing scientific concepts on the one hand, and the entities in the world they putatively represent on the other (Aronson 1990; Aronson, Harré, & Way 1994: 15–49; for a critique, see Psillos 1999: 270–273).
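A toy case conveys the flavor of the possible worlds approach (a deliberately simplified rendering of the sort of measure associated with Tichý and Oddie; the actual proposals are considerably more sophisticated). For a propositional language with a finite number of atomic sentences, let each possible world be an assignment of truth values to the atoms, let w* be the actual world, and let d(w, w*) be the proportion of atoms on which w and w* disagree. The truth-likeness of a theory A might then be defined as

```latex
Tr(A) \;=\; 1 \;-\; \frac{1}{\lvert W_A \rvert} \sum_{w \in W_A} d(w, w^{*})
```

where W_A is the set of worlds in which A is true. On such a measure, even strictly false theories can be ordered by truth-likeness, according to how far the worlds they admit lie, on average, from the actual one.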

Less formally and perhaps more typically, realists have attempted to explicate approximate truth in qualitative terms. One common suggestion is that a theory may be considered more approximately true than one that preceded it if the earlier theory can be described as a “limiting case” of the later one. The idea of limiting cases and inter-theory relations more generally is elaborated by Post (1971; see also French & Kamminga 1993), who argues that certain heuristic principles in science yield theories that “conserve” the successful parts of their predecessors. His “General Correspondence Principle” states that later theories commonly account for the successes of their predecessors by “degenerating” into earlier theories in domains in which the earlier ones are well confirmed. Hence, for example, the often cited claim that certain equations in relativistic physics degenerate into the corresponding equations in classical physics in the limit, as velocity tends to zero. The realist may then contend that later theories offer more approximately true descriptions of the relevant subject matter, and that the ways in which they do this can be illuminated in part by studying the ways in which they build on the limiting cases represented by their predecessors. (For further takes on approximate truth, see Leplin 1981; Boyd 1990; Weston 1992; Smith 1998; Chakravartty 2010, and Northcott 2013.)
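The often cited physics example can be made explicit. Expanding the relativistic expression for total energy in powers of v/c (a standard textbook calculation, included here only to illustrate the notion of a limiting case):

```latex
E \;=\; \frac{mc^{2}}{\sqrt{1 - v^{2}/c^{2}}} \;=\; mc^{2} \;+\; \tfrac{1}{2}mv^{2} \;+\; \tfrac{3}{8}\,\frac{mv^{4}}{c^{2}} \;+\; \cdots
```

As v/c tends to zero, the higher-order terms vanish and the velocity-dependent part of the relativistic energy reduces to the classical kinetic energy ½mv²; in this sense the classical expression is a limiting case of the relativistic one.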

4. Antirealism: Foils for Scientific Realism

4.1 Empiricism

The term “antirealism” (or “anti-realism”) encompasses any position that is opposed to realism along one or more of the dimensions canvassed in section 1.2: the metaphysical commitment to the existence of a mind-independent reality; the semantic commitment to interpret theories literally or at face value; and the epistemological commitment to regard theories as furnishing knowledge of both observables and unobservables. As a result, and as one might expect, there are many different ways to be an antirealist, and many different positions qualify as antirealism (cf. Kitcher 2001: 161–163). In the historical development of realism, arguably the most important strains of antirealism have been varieties of empiricism which, given their emphasis on experience as a source and subject matter of knowledge, are naturally set against the idea of knowledge of unobservables. It is possible to be an empiricist more broadly speaking in a way that is consistent with realism—for example, one might endorse the idea that knowledge of the world stems from empirical investigation and contend that on this basis, one can justifiably infer certain things about unobservables. In the first half of the twentieth century, however, empiricism came predominantly in the form of varieties of “instrumentalism”: the view that theories are merely instruments for predicting observable phenomena or systematizing observation reports.

According to the best known, traditional form of instrumentalism, terms for unobservables have no meaning all by themselves; construed literally, statements involving them are not even candidates for truth or falsity (cf. a more recent proposal in Rowbottom 2011). The most influential advocates of this view were the logical empiricists (or logical positivists), including Carnap and Hempel, famously associated with the Vienna Circle group of philosophers and scientists as well as important contributors elsewhere. In order to rationalize the ubiquitous use of terms which might otherwise be taken to refer to unobservables in scientific discourse, they adopted a non-literal semantics according to which these terms acquire meaning by being associated with terms for observables (for example, “electron” might mean “white streak in a cloud chamber”), or with demonstrable laboratory procedures (a view called “operationalism”). Insuperable difficulties with this semantics led ultimately (in large measure) to the demise of logical empiricism and the growth of realism. The contrast here is not merely in semantics and epistemology: a number of logical empiricists also held the neo-Kantian view that ontological questions “external” to the frameworks for knowledge represented by theories are also meaningless (the choice of a framework is made solely on pragmatic grounds), thereby rejecting the metaphysical dimension of realism (as in Carnap 1950). (Duhem [1906] 1954 was influential with respect to instrumentalism; for a critique of logical empiricist semantics, see H. Brown 1977: ch. 3; on logical empiricism more generally, see Giere & Richardson 1997 and Richardson & Uebel 2007; on the neo-Kantian reading, see Richardson 1998 and Friedman 1999.)

Van Fraassen (1980) reinvented empiricism in the scientific context, evading many of the challenges faced by logical empiricism by adopting a realist semantics. His position, “constructive empiricism”, holds that the aim of science is empirical adequacy, where “a theory is empirically adequate exactly if what it says about the observable things and events in the world, is true” (1980: 12; p. 64 gives a more technical definition in terms of the embedding of observable structures in scientific models). Crucially, unlike logical empiricism, constructive empiricism interprets theories in precisely the same manner as realism. The antirealism of the position is due entirely to its epistemology—it recommends belief in our best theories only insofar as they describe observable phenomena, and is satisfied with an agnostic attitude regarding anything unobservable. The constructive empiricist thus recognizes claims about unobservables as true or false, but feels no need to believe or disbelieve them. In focusing on belief in the domain of the observable, the position is similar to traditional instrumentalism, and is for this reason sometimes described as a form of instrumentalism. (For elaborations of the view, see van Fraassen 1985, 2001 and Rosen 1994.) There are also affinities here with the idea of fictionalism, according to which things in the world are and behave as if our best scientific theories are true (Vaihinger [1911] 1923; Fine 1993).

4.2 Historicism

The collapse of the logical empiricist program was in part facilitated by a historical turn in the philosophy of science in the 1960s, associated with authors such as Kuhn, Feyerabend, and Hanson. Kuhn’s highly influential work, The Structure of Scientific Revolutions, played a significant role in establishing a lasting interest in a form of historicism about scientific knowledge, particularly among those interested in the nature of scientific practice. An underlying principle of the historical turn was to take the history of science and its practice seriously by furnishing descriptions of scientific knowledge in situ. Kuhn argued that the fruits of such history illuminate a recurring pattern: periods of so-called normal science, often fairly long in duration (consider, for example, the periods dominated by classical physics, or relativistic physics), punctuated by revolutions which lead scientific communities from one period of normal science into another. The implications for realism on this picture derive from Kuhn’s characterization of knowledge on either side of a revolutionary divide. Two different periods of normal science, he said, are “incommensurable” with one another, in such a way as to render the world importantly different after a revolution (the phenomenon of “world change”). (Among the many detailed studies of these topics, see Horwich 1993; Hoyningen-Huene 1993; Sankey 1994; and Bird 2000.)

The notion of incommensurability applies to (inter alia) a comparison of theories operative during different periods of normal science. Kuhn held that if two theories are incommensurable, they are not comparable in a way that would permit the judgment that one is epistemically superior to the other, because different periods of normal science are characterized by different “paradigms” (commitments to symbolic representations of the phenomena, metaphysical beliefs, values, and problem solving techniques). As a consequence, scientists in different periods of normal science generally employ different methods and standards, experience the world differently via “theory laden” perceptions, and most importantly for Kuhn (1983), differ with respect to the very meanings of their terms. This is a version of meaning holism or contextualism, according to which the meaning of a term or concept is exhausted by its connections to others within a paradigm. A change in any part of this network entails a change in meanings throughout—the term “mass”, for instance, has different meanings in the contexts of classical physics and relativistic physics. Thus, any judgment to the effect that the latter’s characterization of mass is closer to the truth, or even that the relevant theories describe the same property, is importantly confused: it equivocates between two different concepts which can only be understood in an appropriately historicized manner, from the perspectives of the paradigms in which they occur.

The changes in perception, conceptualization, and language that Kuhn associated with changes in paradigm also fuelled his notion of world change, which further extends the contrast of the historicist approach with realism. There is an important sense, Kuhn maintained, in which after a scientific revolution, scientists live in a different world. This is a famously cryptic remark in Structure ([1962] 1970: 111, 121, 150), but he (2000: 264) later gives it a neo-Kantian spin: paradigms function so as to create the reality of scientific phenomena, thereby allowing scientists to engage with this reality. On such a view, it would seem that not only the meanings but also the referents of terms are constrained by paradigmatic boundaries. And thus, reflecting an interesting parallel with neo-Kantian logical empiricism, the idea of a paradigm-transcendent world which is investigated by scientists, and about which one might have knowledge, has no obvious cognitive content. On this picture, empirical reality is structured by scientific paradigms, and this conflicts with the commitment of realism to knowledge of a mind-independent world.

4.3 Social Constructivism

One outcome of the historical turn in the philosophy of science and its emphasis on scientific practice was a focus on the complex social interactions that inevitably surround and infuse the generation of scientific knowledge. Relations between experts, their students, and the public, collaboration and competition between individuals and institutions, and social, economic, and political contexts became the subjects of an approach to studying the sciences known as the sociology of scientific knowledge, or SSK. Though in theory, a commitment to studying the sciences from a sociological perspective is interpretable in such a way as to be neutral with respect to realism (Lewens 2005; cf. Kochan 2010), in practice, most accounts of science inspired by SSK are implicitly or explicitly antirealist. This antirealism in practice stems from the common suggestion that once one appreciates the role that social factors (using this as a generic term for the sorts of interactions and contexts indicated above) play in the production of scientific knowledge, a philosophical commitment to some form of “social constructivism” is inescapable, and this latter commitment is inconsistent with various aspects of realism.

The term “social construction” refers to any knowledge-generating process in which what counts as a fact is substantively determined by social factors, and in which different social factors would likely generate facts that are inconsistent with what is actually produced. The important implication here is thus a counterfactual claim about the dependence of facts on social factors. There are numerous ways in which social determinants of facthood may be consistent with realism. For example, social factors might determine the directions and methodologies of research that are permitted, encouraged, and funded, but this by itself need not undermine a realist attitude with respect to the outputs of scientific work. Often, however, work in SSK takes the form of case studies that aim to demonstrate how particular decisions affecting scientific work were (or are) influenced by social factors which, had they been different, would have facilitated results that are inconsistent with those ultimately accepted as scientific fact. Some, including proponents of the so-called Strong Program in SSK, argue that for more general, principled reasons, such factual contingency is inevitable. (For a sample of influential approaches to social constructivism, see Latour & Woolgar [1979] 1986; Knorr-Cetina 1981; Pickering 1984; Shapin & Schaffer 1985; and Collins & Pinch 1993; on the Strong Program, see Barnes, Bloor, & Henry 1996; for a historical study of the transition from Kuhn to SSK and social constructivism, see Zammito 2004: chs. 5–7.)

By making social factors an inextricable, substantive determinant of what counts as true or false in the realm of the sciences (and elsewhere), social constructivism stands opposed to the realist contention that theories can be understood as furnishing knowledge of a mind-independent world. And as in the historicist approach, notions such as truth, reference, and ontology are here relative to particular contexts; they have no context-transcendent significance. The later work of Kuhn and Wittgenstein in particular was influential in the development of the Strong Program doctrine of “meaning finitism”, according to which the meanings of terms are conceived as social institutions: the various ways in which they are used successfully in communication within a linguistic community. This theory of meaning forms the basis of an argument to the effect that the meanings of scientific (and other) terms are products of social negotiation and need not be fixed or determinate, which further conflicts with a number of realist notions, including the idea of convergence toward true theories, improvements with respect to ontology or approximate truth, and determinate reference to mind-independent entities. The subject of neo-Kantianism thus emerges here again, though its strength in constructivist doctrines varies significantly. (For a robustly finitist view, see Kusch 2002; for a more moderate constructivism, see Putnam’s (1981: ch. 3) “internal realism” and cf. Ellis 1988).

4.4 Feminist Approaches

Feminist engagements with science are linked thematically to SSK and forms of social constructivism by their recognition of the role of social factors as determinants of scientific fact. That said, they extend the analysis in a more specific way, reflecting particular concerns about the marginalization of points of view based on gender, ethnicity, and socio-economic and political status. Not all feminist approaches are antirealist, but nearly all are normative, offering prescriptions for revising both scientific practice and concepts such as objectivity and knowledge that have direct implications for realism. In this regard it is useful to distinguish (as originally proposed in Harding 1986) between three broad approaches. Feminist empiricism focuses on the possibility of warranted belief within scientific communities as a function of the transparency and consideration of biases associated with different points of view which enter into scientific work. Standpoint theory investigates the idea that scientific knowledge is inextricably linked to perspectives arising from differences in such points of view. Feminist postmodernism rejects traditional conceptions of universal or absolute objectivity and truth. (As one might expect, these views are not always neatly distinguishable; for some early, influential approaches, see Keller 1985; Harding 1986; Haraway 1988; Longino 1990, 2002; Alcoff & Potter 1993; and Nelson & Nelson 1996).

The notion of objectivity has a number of traditional connotations—including disinterest (detachment, lack of bias) and universality (independence from any particular perspective or viewpoint)—which are commonly associated with knowledge of a mind-independent world. Feminist critiques are almost unanimous in rejecting scientific objectivity in the sense of disinterest, offering case studies that aim to demonstrate how the presence of (for example) androcentric bias in a scientific community can lead to the acceptance of one theory at the expense of alternatives (Kourany 2010: chs. 1–3; for detailed cases, see Longino 1990: ch. 6 and Lloyd 2006). Arguably, the failure of objectivity in this sense is consistent with realism under certain conditions. For example, if the relevant bias is epistemically neutral (that is, if one’s assessment of scientific evidence is not influenced by it one way or another), then realism may remain at least one viable interpretation of the outputs of scientific work. In the more interesting case where bias is epistemically consequential, the prospects for realism are diminished, but may be enhanced by a scientific infrastructure that functions to bring it under scrutiny (by means of, for example, effective peer review, genuine consideration of minority views, etc.), thus facilitating corrective measures where appropriate. The contention that the sciences do not generally exemplify such an infrastructure is one motivation for the normativity of much feminist empiricism.

The challenge to objectivity in the sense of universality or perspective-independence can be, in some cases, more difficult to square with the possibility of realism. In a Marxist vein, some standpoint theorists argue that certain perspectives are epistemically privileged in the realm of science: viz., subjugated perspectives are epistemically privileged in comparison to dominant ones in light of the deeper insight afforded the former (just as the proletariat has a deeper knowledge of human potential than the superficial knowledge typical of those in power). Others portray epistemic privilege in a more splintered or deflationary manner, suggesting that no one point of view can be established as superior to another by any overarching standard of epistemological assessment. This view is most explicit in feminist postmodernism, which embraces a thoroughgoing relativism with respect to truth (and presumably approximate truth, scientific ontology, and other notions central to various descriptions of realism). As in the case of Strong Program SSK, truth and epistemic standards are here defined only within the context of a perspective, and thus cannot be interpreted in any context-transcendent or mind-independent manner.

4.5 Pragmatism, Quietism, and Dialectical Paralysis

It is not uncommon to hear philosophers remark that the dialogue between the forms of realism and antirealism surveyed in this article shows every symptom of a perennial philosophical dispute. The issues contested range so broadly and elicit so many competing intuitions (about which, arguably, reasonable people may disagree) that some question whether a resolution is even possible. This prognosis of potentially irresolvable dialectical complexity is relevant to a number of further views in the philosophy of science, some of which arise as direct responses to it. For example, Fine ([1986b] 1996: chs. 7–8) argues that ultimately, neither realism nor antirealism is tenable, and recommends what he calls the “natural ontological attitude” (NOA) instead (see Rouse 1988, 1991 for detailed explorations of the view). NOA is intended to comprise a neutral, common core of realist and antirealist attitudes of acceptance of our best theories. The mistake that both parties make, Fine suggests, is to add further epistemological and metaphysical diagnoses to this shared position, such as pronouncements about which aspects of scientific ontology should be viewed as real, which are proper subjects of belief, and so on. Others contend that this sort of approach to scientific knowledge is non- or anti-philosophical, and defend philosophical engagement in debates about realism (Crasnow 2000; McArthur 2006). Musgrave (1989) argues that the view is either empty or collapses into realism.

The idea of putting the conflict between realist and antirealist approaches to science aside is also a recurring theme in some accounts of pragmatism and quietism. Regarding the former, Peirce (in “How to Make Our Ideas Clear”, for instance, originally published in 1878; [1992] 1998) holds that the content of a proposition should be understood in terms of (among other things) its “practical consequences” for human experience, such as implications for observation or problem-solving. For James ([1907] 1979), positive utility measured in these terms is the very marker of truth (where truth is whatever will be agreed in the ideal limit of scientific inquiry). Many of the points disputed by realists and antirealists—differences in epistemic commitment to scientific entities based on observability, for example—are effectively non-issues on this view (Almeder 2007; Misak 2010). It is nevertheless a form of antirealism on traditional readings of Peirce and James, since both suggest that truth in the pragmatist sense exhausts our conception of reality, thus running foul of the metaphysical dimension of realism. The notion of quietism is often associated with Wittgenstein’s response to philosophical problems about which, he maintained, nothing sensible can be said. The suggestion here is not merely that engaging with such a problem is not to one’s taste, but rather that, quite independently of one’s interest or lack thereof, the dispute itself concerns a pseudo-problem. Blackburn (2002) suggests that disputes about realism may have this character.

One last take on the putative irresolvability of debates concerning realism focuses on certain meta-philosophical commitments adopted by the interlocutors. Wylie (1986: 287), for instance, claims that

the most sophisticated positions on either side now incorporate self-justifying conceptions of the aim of philosophy and of the standards of adequacy appropriate for judging philosophical theories of science.

Different assumptions ab initio regarding what sorts of inferences are legitimate, what sorts of evidence reasonably support belief, whether there is a genuine demand for the explanation of observable phenomena in terms of underlying realities, and so on, may render some arguments between realists and antirealists question-begging. This diagnosis is arguably facilitated by van Fraassen’s (1989: 170–176, 1994: 182) intimation that neither realism nor antirealism (in his case, empiricism) is ruled out by plausible canons of rationality; each is sustained by a different conception of how much epistemic risk one should take in forming beliefs on the basis of one’s evidence. An intriguing question then emerges as to whether disputes surrounding realism and antirealism are resolvable in principle, or whether, ultimately, internally consistent and coherent formulations of these positions should be regarded as irreconcilable but nonetheless permissible interpretations of scientific knowledge (Chakravartty 2017; Forbes forthcoming).