First published Mon Feb 27, 2006; substantive revision Tue Feb 4, 2020

Models are of central importance in many scientific contexts. The centrality of models such as inflationary models in cosmology, general-circulation models of the global climate, the double-helix model of DNA, evolutionary models in biology, agent-based models in the social sciences, and general-equilibrium models of markets in their respective domains is a case in point (the Other Internet Resources section at the end of this entry contains links to online resources that discuss these models). Scientists spend significant amounts of time building, testing, comparing, and revising models, and much journal space is dedicated to interpreting and discussing the implications of models.

As a result, models have attracted philosophers’ attention and there are now sizable bodies of literature about various aspects of scientific modeling. A tangible result of philosophical engagement with models is a proliferation of model types recognized in the philosophical literature. Probing models, phenomenological models, computational models, developmental models, explanatory models, impoverished models, testing models, idealized models, theoretical models, scale models, heuristic models, caricature models, exploratory models, didactic models, fantasy models, minimal models, toy models, imaginary models, mathematical models, mechanistic models, substitute models, iconic models, formal models, analogue models, and instrumental models are but some of the notions that are used to categorize models. While at first glance this abundance is overwhelming, it can be brought under control by recognizing that these notions pertain to different problems that arise in connection with models. Models raise questions in semantics (how, if at all, do models represent?), ontology (what kind of things are models?), epistemology (how do we learn and explain with models?), and, of course, in other domains within philosophy of science.

1. Semantics: Models and Representation

Many scientific models are representational models: they represent a selected part or aspect of the world, which is the model’s target system. Standard examples are the billiard ball model of a gas, the Bohr model of the atom, the Lotka–Volterra model of predator–prey interaction, the Mundell–Fleming model of an open economy, and the scale model of a bridge.

This raises the question of what it means for a model to represent a target system. This problem is rather involved and decomposes into various subproblems. For an in-depth discussion of the issue of representation, see the entry on scientific representation. At this point, rather than addressing the issue of what it means for a model to represent, we focus on a number of different kinds of representation that play important roles in the practice of model-based science, namely scale models, analogical models, idealized models, toy models, minimal models, phenomenological models, exploratory models, and models of data. These categories are not mutually exclusive, and a given model can fall into several categories at once.

Scale models. Some models are down-sized or enlarged copies of their target systems (Black 1962). A typical example is a small wooden car that is put into a wind tunnel to explore the actual car’s aerodynamic properties. The intuition is that a scale model is a naturalistic replica or a truthful mirror image of the target; for this reason, scale models are sometimes also referred to as “true models” (Achinstein 1968: Ch. 7). However, there is no such thing as a perfectly faithful scale model; faithfulness is always restricted to some respects. The wooden scale model of the car provides a faithful portrayal of the car’s shape but not of its material. And even in the respects in which a model is a faithful representation, the relation between model-properties and target-properties is usually not straightforward. When engineers use, say, a 1:100 scale model of a ship to investigate the resistance that an actual ship experiences when moving through the water, they cannot simply measure the resistance the model experiences and then multiply it by the scale: the real ship need not face one hundred times the water resistance of its 1:100 model. The two quantities stand in a complicated nonlinear relation to each other, and the exact form of that relation is often highly nontrivial and emerges only from a thoroughgoing study of the situation (Sterrett 2006, forthcoming; Pincock forthcoming).
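The nonlinear relation between model resistance and ship resistance can be sketched in code. The ITTC 1957 correlation line used below is a real empirical formula for the frictional resistance coefficient, but the ship’s dimensions, speed, wetted surface, and residual coefficient are invented for illustration, and real extrapolation procedures are considerably more involved.

```python
import math

def frictional_coeff(reynolds):
    """ITTC 1957 model-ship correlation line (a standard empirical formula)."""
    return 0.075 / (math.log10(reynolds) - 2) ** 2

RHO = 1000.0   # water density, kg/m^3
NU = 1.0e-6    # kinematic viscosity of water, m^2/s

def total_resistance(length, speed, wetted_area, c_residual):
    """Total resistance = frictional part (Reynolds-dependent)
    + residual/wave-making part (Froude-dependent)."""
    reynolds = speed * length / NU
    c_f = frictional_coeff(reynolds)
    return 0.5 * RHO * speed**2 * wetted_area * (c_f + c_residual)

# A ship and its 1:100 model, run at equal Froude number
# (v_model = v_ship * sqrt(scale)), so both make similar wave patterns.
scale = 1 / 100
L_ship, v_ship, S_ship = 100.0, 10.0, 3000.0   # invented dimensions
L_model = L_ship * scale
v_model = v_ship * math.sqrt(scale)
S_model = S_ship * scale**2

c_r = 0.002  # residual coefficient, assumed equal at equal Froude number

R_model = total_resistance(L_model, v_model, S_model, c_r)
R_ship = total_resistance(L_ship, v_ship, S_ship, c_r)

# The ratio is far from the naive factor of 100, and the frictional
# coefficients differ because the two Reynolds numbers cannot be matched.
print(R_model, R_ship, R_ship / R_model)
```

The point of the sketch is only that no single multiplicative factor carries the model’s resistance over to the ship’s: the frictional and wave-making contributions scale differently.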

Analogical models. Standard examples of analogical models include the billiard ball model of a gas, the hydraulic model of an economic system, and the dumb hole model of a black hole. At the most basic level, two things are analogous if there are certain relevant similarities between them. In a classic text, Hesse (1963) distinguishes different types of analogies according to the kinds of similarity relations into which two objects enter. A simple type of analogy is one that is based on shared properties. There is an analogy between the earth and the moon based on the fact that both are large, solid, opaque, spherical bodies that receive heat and light from the sun, revolve around their axes, and gravitate towards other bodies. But sameness of properties is not a necessary condition. An analogy between two objects can also be based on relevant similarities between their properties. In this more liberal sense, we can say that there is an analogy between sound and light because echoes are similar to reflections, loudness to brightness, pitch to color, detectability by the ear to detectability by the eye, and so on.

Analogies can also be based on the sameness or resemblance of relations between parts of two systems rather than on their monadic properties. It is in this sense that the relation of a father to his children is asserted to be analogous to the relation of the state to its citizens. The analogies mentioned so far have been what Hesse calls “material analogies”. We obtain a more formal notion of analogy when we abstract from the concrete features of the systems and only focus on their formal set-up. What the analogue model then shares with its target is not a set of features, but the same pattern of abstract relationships (i.e., the same structure, where structure is understood in a formal sense). This notion of analogy is closely related to what Hesse calls “formal analogy”. Two items are related by formal analogy if they are both interpretations of the same formal calculus. For instance, there is a formal analogy between a swinging pendulum and an oscillating electric circuit because they are both described by the same mathematical equation.
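The pendulum–circuit case of formal analogy can be stated precisely: in the small-angle regime, both systems are interpretations of the same harmonic-oscillator equation.

```latex
% Simple pendulum (small angles): angle \theta, length \ell, gravity g
\ddot{\theta} + \frac{g}{\ell}\,\theta = 0
% Oscillating LC circuit: charge q, inductance L, capacitance C
\ddot{q} + \frac{1}{LC}\,q = 0
% Both instantiate \ddot{x} + \omega^2 x = 0, with \omega^2 = g/\ell in
% the one case and \omega^2 = 1/LC in the other: the shared structure is
% the equation, not any material feature of the two systems.
```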

A further important distinction due to Hesse is the one between positive, negative, and neutral analogies. The positive analogy between two items consists in the properties or relations they share (both gas molecules and billiard balls have mass); the negative analogy consists in the properties they do not share (billiard balls are colored, gas molecules are not); the neutral analogy comprises the properties of which it is not known (yet) whether they belong to the positive or the negative analogy (do billiard balls and molecules have the same cross section in scattering processes?). Neutral analogies play an important role in scientific research because they give rise to questions and suggest new hypotheses. For this reason several authors have emphasized the heuristic role that analogies play in theory and model construction, as well as in creative thought (Bailer-Jones and Bailer-Jones 2002; Bailer-Jones 2009: Ch. 3; Hesse 1974; Holyoak and Thagard 1995; Kroes 1989; Psillos 1995; and the essays collected in Helman 1988). See also the entry on analogy and analogical reasoning.

It has also been discussed whether using analogical models can in some cases be confirmatory in a Bayesian sense. Hesse (1974: 208–219) argues that this is possible if the analogy is a material analogy. Bartha (2010, 2013 [2019]) disagrees and argues that analogical models cannot be confirmatory in a Bayesian sense because the information encapsulated in an analogical model is part of the relevant background knowledge, which has the consequence that the posterior probability of a hypothesis about a target system cannot change as a result of observing the analogy. Analogical models can therefore only establish the plausibility of a conclusion in the sense of justifying a non-negligible prior probability assignment (Bartha 2010: §8.5).

More recently, these questions have been discussed in the context of so-called analogue experiments, which promise to provide knowledge about an experimentally inaccessible target system (e.g., a black hole) by manipulating another system, the source system (e.g., a Bose–Einstein condensate). Dardashti, Thébault, and Winsberg (2017) and Dardashti, Hartmann et al. (2019) have argued that, given certain conditions, an analogue simulation of one system by another system can confirm claims about the target system (e.g., that black holes emit Hawking radiation). See Crowther et al. (forthcoming) for a critical discussion, and also the entry on computer simulations in science.

Idealized models. Idealized models are models that involve a deliberate simplification or distortion of something complicated with the objective of making it more tractable or understandable. Frictionless planes, point masses, completely isolated systems, omniscient and fully rational agents, and markets in perfect equilibrium are well-known examples. Idealizations are a crucial means for science to cope with systems that are too difficult to study in their full complexity (Potochnik 2017).

Philosophical debates over idealization have focused on two general kinds of idealizations: so-called Aristotelian and Galilean idealizations. Aristotelian idealization amounts to “stripping away”, in our imagination, all properties from a concrete object that we believe are not relevant to the problem at hand. There is disagreement on how this is done. Jones (2005) and Godfrey-Smith (2009) offer an analysis of abstraction in terms of truth: while an abstraction remains silent about certain features or aspects of the system, it does not say anything false and still offers a true (albeit restricted) description. This allows scientists to focus on a limited set of properties in isolation. An example is a classical-mechanics model of the planetary system, which describes the position of an object as a function of time and disregards all other properties of planets. Cartwright (1989: Ch. 5), Musgrave (1981), who uses the term “negligibility assumptions”, and Mäki (1994), who speaks of the “method of isolation”, allow abstractions to say something false, for instance by neglecting a causally relevant factor.

Galilean idealizations are ones that involve deliberate distortions: physicists build models consisting of point masses moving on frictionless planes; economists assume that agents are omniscient; biologists study isolated populations; and so on. Using simplifications of this sort whenever a situation is too difficult to tackle was characteristic of Galileo’s approach to science. For this reason it is common to refer to ‘distortive’ idealizations of this kind as “Galilean idealizations” (McMullin 1985). An example of such an idealization is a model of motion on an ice rink that assumes the ice to be frictionless, when, in reality, it has low but non-zero friction.

Galilean idealizations are sometimes characterized as controlled idealizations, i.e., as ones that allow for de-idealization by successive removal of the distorting assumptions (McMullin 1985; Weisberg 2007). Thus construed, Galilean idealizations don’t cover all distortive idealizations. Batterman (2002, 2011) and Rice (2015, 2019) discuss distortive idealizations that are ineliminable in that they cannot be removed from the model without dismantling the model altogether.

What does a model involving distortions tell us about reality? Laymon (1991) formulated a theory which understands idealizations as ideal limits: imagine a series of refinements of the actual situation which approach the postulated limit, and then require that the closer the properties of a system come to the ideal limit, the closer its behavior has to come to the behavior of the system at the limit (monotonicity). If this is the case, then scientists can study the system at the limit and carry over conclusions from that system to systems distant from the limit. But these conditions need not always hold. In fact, it can happen that the behavior of systems approaching the limit does not converge to the behavior of the system at the limit. If this happens, we are faced with a singular limit (Berry 2002). In such cases the system at the limit can exhibit behavior that is different from the behavior of systems distant from the limit. Limits of this kind appear in a number of contexts, most notably in the theory of phase transitions in statistical mechanics. There is, however, no agreement over the correct interpretation of such limits. Batterman (2002, 2011) sees them as indicative of emergent phenomena, while Butterfield (2011a,b) sees them as compatible with reduction (see also the entries on intertheory relations in physics and scientific reduction).
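A standard textbook illustration of a singular limit, not drawn from this entry, is the quadratic equation with a small leading coefficient:

```latex
% For every \varepsilon > 0 the equation \varepsilon x^2 + x - 1 = 0
% has two roots:
x_{\pm} = \frac{-1 \pm \sqrt{1 + 4\varepsilon}}{2\varepsilon},
\qquad x_{+} \to 1, \quad x_{-} \to -\infty
\quad \text{as } \varepsilon \to 0.
% The system "at the limit" (\varepsilon = 0) is the linear equation
% x - 1 = 0, which has a single root: the number of solutions changes
% abruptly at the limit, so the behavior of systems near the limit does
% not smoothly approach the behavior of the system at the limit.
```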

Galilean and Aristotelian idealizations are not mutually exclusive, and many models exhibit both in that they take into account a narrow set of properties and distort them. Consider again the classical-mechanics model of the planetary system: the model only takes a narrow set of properties into account and distorts them, for instance by describing planets as ideal spheres with a rotation-symmetric mass distribution.

A concept that is closely related to idealization is approximation. In a broad sense, A can be called an approximation of B if A is somehow close to B. This, however, is too broad because it makes room for any likeness to qualify as an approximation. Rueger and Sharp (1998) limit approximations to quantitative closeness, and Portides (2007) frames it as an essentially mathematical concept. On this notion, A is an approximation of B iff A is close to B in a specifiable mathematical sense, where the relevant sense of “close” will be given by the context. An example is the approximation of one curve with another one, which can be achieved by expanding a function into a power series and only keeping the first two or three terms. In other situations we approximate an equation with another one by letting a control parameter tend towards zero (Redhead 1980). This raises the question of how approximations are different from idealizations, which can also involve mathematical closeness. Norton (2012) sees the distinction between the two as referential: an approximation is an inexact description of the target while an idealization introduces a secondary system (real or fictitious) which stands for the target system (while being distinct from it). If we say that the period of the pendulum on the wall is roughly two seconds, then this is an approximation; if we reason about the real pendulum by assuming that the pendulum bob is a point mass and that the string is massless (i.e., if we assume that the pendulum is a so-called ideal pendulum), then we use an idealization. Separating idealizations and approximations in this way does not imply that there cannot be interesting relations between the two. For instance, an approximation can be justified by pointing out that it is the mathematical expression of an acceptable idealization (e.g., when we neglect a dissipative term in an equation of motion because we make the idealizing assumption that the system is frictionless).
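The power-series truncation just mentioned can be made concrete with a minimal sketch: approximating sin(x) by keeping only the first few terms of its series, as in the small-angle treatment of the pendulum.

```python
import math

def sin_approx(x, terms=2):
    """Truncated power series for sin(x): keep the first `terms`
    nonzero terms of x - x^3/3! + x^5/5! - ..."""
    return sum((-1)**k * x**(2 * k + 1) / math.factorial(2 * k + 1)
               for k in range(terms))

x = 0.3  # a "small" angle, in radians
exact = math.sin(x)
one_term = sin_approx(x, 1)    # sin(x) ~ x  (the small-angle approximation)
two_terms = sin_approx(x, 2)   # sin(x) ~ x - x^3/6

# The approximation is "close" in a specifiable mathematical sense:
# the error shrinks rapidly as further terms are kept.
print(exact, one_term, two_terms)
```

Here the truncated series is an inexact description of the exact function, in line with Norton’s characterization of approximation; no secondary system is introduced.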

Toy models. Toy models are extremely simplified and strongly distorted renderings of their targets, and often only represent a small number of causal or explanatory factors (Hartmann 1995; Reutlinger et al. 2018; Nguyen forthcoming). Typical examples are the Lotka–Volterra model in population ecology (Weisberg 2013) and the Schelling model of segregation in the social sciences (Sugden 2000). Toy models usually do not perform well in terms of prediction and empirical adequacy, and they seem to serve other epistemic goals (more on these in Section 3). This raises the question whether they should be regarded as representational at all (Luczak 2017).
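The Lotka–Volterra model illustrates how spare a toy model can be: two coupled equations, four parameters, and no environmental, spatial, or age structure at all. A minimal numerical sketch follows; the parameter values are arbitrary illustrative choices, and the simple Euler integration scheme is used only for brevity.

```python
def lotka_volterra(prey0, pred0, alpha, beta, delta, gamma,
                   dt=0.001, steps=20000):
    """Integrate dx/dt = alpha*x - beta*x*y and dy/dt = delta*x*y - gamma*y
    with a basic Euler scheme; returns the prey and predator trajectories."""
    xs, ys = [prey0], [pred0]
    x, y = prey0, pred0
    for _ in range(steps):
        dx = (alpha * x - beta * x * y) * dt   # prey: growth minus predation
        dy = (delta * x * y - gamma * y) * dt  # predators: feeding minus death
        x, y = x + dx, y + dy
        xs.append(x)
        ys.append(y)
    return xs, ys

prey, pred = lotka_volterra(10.0, 5.0,
                            alpha=1.0, beta=0.1, delta=0.05, gamma=0.5)
# The populations oscillate out of phase: prey peaks are followed by
# predator peaks -- the model's characteristic qualitative behavior.
```

No actual predator–prey system obeys these equations exactly; the model’s interest lies in the qualitative oscillations it generates from so few assumptions.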

Some toy models are characterized as “caricatures” (Gibbard and Varian 1978; Batterman and Rice 2014). Caricature models isolate a small number of salient characteristics of a system and distort them into an extreme case. A classic example is Akerlof’s (1970) model of the car market (“the market for lemons”), which explains the difference in price between new and used cars solely in terms of asymmetric information, thereby disregarding all other factors that may influence the prices of cars (see also Sugden 2000). However, it is controversial whether such highly idealized models can still be regarded as informative representations of their target systems. For a discussion of caricature models, in particular in economics, see Reiss (2006).
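The unraveling mechanism in Akerlof’s model can be conveyed with a deliberately crude numerical sketch. The uniform quality distribution, the 1.5 valuation factor, and the fixed-point iteration below are illustrative assumptions, not Akerlof’s own presentation; they serve only to show how asymmetric information alone can collapse the market.

```python
# Qualities of used cars are uniform on [0, 1]. Sellers know their car's
# quality and only offer cars worth less than the going price; buyers
# value a car at 1.5 times its quality but observe only the average
# quality of the cars actually on offer.

def offered_average_quality(price):
    """With uniform qualities, cars on offer have quality below the price,
    so the average quality on offer is price / 2 (capped at quality 1)."""
    return min(price, 1.0) / 2

def buyers_willingness_to_pay(price):
    return 1.5 * offered_average_quality(price)

# Iterate: buyers' willingness to pay becomes the next market price.
price = 1.0
for _ in range(50):
    price = buyers_willingness_to_pay(price)

# The price shrinks by a factor of 0.75 each round, so the market
# unravels: good cars withdraw, and trade collapses toward zero.
print(price)
```

This is the caricature at work: every determinant of used-car prices except asymmetric information has been stripped away and the remaining factor pushed to an extreme.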

Minimal models. Minimal models are closely related to toy models in that they are also highly simplified. They are so simplified that some argue that they are non-representational: they lack any similarity, isomorphism, or resemblance relation to the world (Batterman and Rice 2014). It has been argued that many economic models are of this kind (Grüne-Yanoff 2009). Minimal economic models are also unconstrained by natural laws, and do not isolate any real factors (ibid.). And yet, minimal models help us to learn something about the world in the sense that they function as surrogates for a real system: scientists can study the model to learn something about the target. It is, however, controversial whether minimal models can assist scientists in learning something about the world if they do not represent anything (Fumagalli 2016). Minimal models that purportedly lack any similarity or representation are also used in different parts of physics to explain the macro-scale behavior of various systems whose micro-scale behavior is extremely diverse (Batterman and Rice 2014; Rice 2018, 2019; Shech 2018). Typical examples are the features of phase transitions and the flow of fluids. Proponents of minimal models argue that what provides an explanation of the macro-scale behavior of a system in these cases is not a feature that system and model have in common, but the fact that the system and the model belong to the same universality class (a class of models that exhibit the same limiting behavior even though they show very different behavior at finite scales). It is, however, controversial whether explanations of this kind are possible without reference to at least some common features (Lange 2015; Reutlinger 2017).

Phenomenological models. Phenomenological models have been defined in different, although related, ways. A common definition takes them to be models that only represent observable properties of their targets and refrain from postulating hidden mechanisms and the like (Bokulich 2011). Another approach, due to McMullin (1968), defines phenomenological models as models that are independent of theories. This, however, seems to be too strong. Many phenomenological models, while failing to be derivable from a theory, incorporate principles and laws associated with theories. The liquid-drop model of the atomic nucleus, for instance, portrays the nucleus as a liquid drop and describes it as having several properties (surface tension and charge, among others) originating in different theories (hydrodynamics and electrodynamics, respectively). Certain aspects of these theories—although usually not the full theories—are then used to determine both the static and dynamical properties of the nucleus. Finally, it is tempting to identify phenomenological models with models of a phenomenon. Here, “phenomenon” is an umbrella term covering all relatively stable and general features of the world that are interesting from a scientific point of view. The weakening of sound as a function of the distance to the source, the decay of alpha particles, the chemical reactions that take place when a piece of limestone dissolves in an acid, the growth of a population of rabbits, and the dependence of house prices on the base rate of the Federal Reserve are phenomena in this sense. For further discussion, see Bailer-Jones (2009: Ch. 7), Bogen and Woodward (1988), and the entry on theory and observation in science.

Exploratory models. Exploratory models are models which are not proposed in the first place to learn something about a specific target system or a particular experimentally established phenomenon. Exploratory models function as the starting point of further explorations in which the model is modified and refined. Gelfert (2016) points out that exploratory models can provide proofs-of-principle and suggest how-possibly explanations (2016: Ch. 4). As an example, Gelfert mentions early models in theoretical ecology, such as the Lotka–Volterra model of predator–prey interaction, which mimic the qualitative behavior of speed-up and slow-down in population growth in an environment with limited resources (2016: 80). Such models do not give an accurate account of the behavior of any actual population, but they provide the starting point for the development of more realistic models. Massimi (2019) notes that exploratory models provide modal knowledge. Fisher (2006) sees these models as tools for the examination of the features of a given theory.

Models of data. A model of data (sometimes also “data model”) is a corrected, rectified, regimented, and in many instances idealized version of the data we gain from immediate observation, the so-called raw data (Suppes 1962). Characteristically, one first eliminates errors (e.g., removes points from the record that are due to faulty observation) and then presents the data in a “neat” way, for instance by drawing a smooth curve through a set of points. These two steps are commonly referred to as “data reduction” and “curve fitting”. When we investigate, for instance, the trajectory of a certain planet, we first eliminate erroneous points from the observation records and then fit a smooth curve to the remaining ones. Models of data play a crucial role in confirming theories because it is the model of data, and not the often messy and complex raw data, that theories are tested against.

The construction of a model of data can be extremely complicated. It requires sophisticated statistical techniques and raises serious methodological as well as philosophical questions. How do we decide which points on the record need to be removed? And given a clean set of data, what curve do we fit to it? The first question has been dealt with mainly within the context of the philosophy of experiment (see, for instance, Galison 1997 and Staley 2004). At the heart of the latter question lies the so-called curve-fitting problem, which is that the data themselves dictate neither the form of the fitted curve nor what statistical techniques scientists should use to construct a curve. The choice and rationalization of statistical techniques is the subject matter of the philosophy of statistics, and we refer the reader to the entry on the philosophy of statistics and to Bandyopadhyay and Forster (2011) for a discussion of these issues. Further discussions of models of data can be found in Bailer-Jones (2009: Ch. 7), Brewer and Chinn (1994), Harris (2003), Hartmann (1995), Laymon (1982), Mayo (1996, 2018), and Suppes (2007).
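The two steps of data reduction and curve fitting can be sketched in miniature. The data below are invented, the outlier rule is ad hoc, and real data processing involves far more sophisticated statistical techniques; the sketch only shows the shape of the procedure.

```python
# Noisy "raw" measurements with one obvious recording error at x = 4.
raw = [(0, 0.1), (1, 1.9), (2, 4.2), (3, 5.8), (4, 50.0), (5, 10.1)]

def reduce_data(points, threshold=10.0):
    """Data reduction: drop any point whose y-value deviates wildly
    from the median of the other points' y-values (an ad hoc rule)."""
    clean = []
    for i, (x, y) in enumerate(points):
        others = [q for j, (_, q) in enumerate(points) if j != i]
        median = sorted(others)[len(others) // 2]
        if abs(y - median) < threshold:
            clean.append((x, y))
    return clean

def fit_line(points):
    """Curve fitting: ordinary least-squares straight line."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

clean = reduce_data(raw)               # the erroneous point is discarded
slope, intercept = fit_line(clean)     # the fitted line is the model of data
# It is this line, not the messy raw record, that a theory is tested against.
```

Note that both steps embody choices the data themselves do not dictate: the outlier threshold and the decision to fit a straight line rather than some other curve.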

The gathering, processing, dissemination, analysis, interpretation, and storage of data raise many important questions beyond the relatively narrow issues pertaining to models of data. Leonelli (2016, 2019) investigates the status of data in science, argues that data should be defined not by their provenance but by their evidential function, and studies how data travel between different contexts.

2. Ontology: What Are Models?

What are models? That is, what kind of object are scientists dealing with when they work with a model? A number of authors have voiced skepticism that this question has a meaningful answer, because models do not belong to a distinctive ontological category and anything can be a model (Callender and Cohen 2006; Giere 2010; Suárez 2004; Swoyer 1991; Teller 2001). Contessa (2010) replies that this is a non sequitur. Even if, from an ontological point of view, anything can be a model and the class of things that are referred to as models contains a heterogeneous collection of different things, it does not follow that it is either impossible or pointless to develop an ontology of models. This is because even if not all models are of a particular ontological kind, one can nevertheless ask to what ontological kinds the things that are de facto used as models belong. There may be several such kinds and each kind can be analyzed in its own right. What sort of objects scientists use as models has important repercussions for how models perform relevant functions such as representation and explanation, and hence this issue cannot be dismissed as “just sociology”.

The objects that commonly serve as models indeed belong to different ontological kinds: physical objects, fictional objects, abstract objects, set-theoretic structures, descriptions, equations, or combinations of some of these, are frequently referred to as models, and some models may fall into yet other classes of things. Following Contessa’s advice, the aim then is to develop an ontology for each of these. Those with an interest in ontology may see this as a goal in its own right. It is worth noting, however, that the question has reverberations beyond ontology and bears on how one understands the semantics and the epistemology of models.

2.1 Physical objects

Some models are physical objects. Such models are commonly referred to as “material models”. Standard examples of models of this kind are scale models of objects like bridges and ships (see Section 1), Watson and Crick’s metal model of DNA (Schaffner 1969), Phillips and Newlyn’s hydraulic model of an economy (Morgan and Boumans 2004), the US Army Corps of Engineers’ model of the San Francisco Bay (Weisberg 2013), Kendrew’s plasticine model of myoglobin (Frigg and Nguyen 2016), and model organisms in the life sciences (Leonelli and Ankeny 2012; Leonelli 2010; Levy and Currie 2015). All these are material objects that serve as models. Material models do not give rise to ontological difficulties over and above the well-known problems in connection with objects that metaphysicians deal with, for instance concerning the nature of properties, the identity of objects, parts and wholes, and so on.

However, many models are not material models. The Bohr model of the atom, a frictionless pendulum, or an isolated population, for instance, are in the scientist’s mind rather than in the laboratory and they do not have to be physically realized and experimented upon to serve as models. These “non-physical” models raise serious ontological questions, and how they are best analyzed is a matter of controversy. In the remainder of this section we review some of the suggestions that have attracted attention in the recent literature on models.

2.2 Fictional objects and abstract objects

What has become known as the fiction view of models sees models as akin to the imagined objects of literary fiction—that is, as akin to fictional characters like Sherlock Holmes or fictional places like Middle Earth (Godfrey-Smith 2007). So when Bohr introduced his model of the atom he introduced a fictional object of the same kind as the object Conan Doyle introduced when he invented Sherlock Holmes. This view squares well with scientific practice, where scientists often talk about models as if they were objects and often take themselves to be describing imaginary atoms, populations, or economies. It also squares well with philosophical views that see the construction and manipulation of models as essential aspects of scientific investigation (Morgan 1999), even if models are not material objects, because these practices seem to be directed toward some kind of object.

What philosophical questions does this move solve? Fictional discourse and fictional entities face well-known philosophical questions, and one may well argue that simply likening models to fictions amounts to explaining obscurum per obscurius (for a discussion of these questions, see the entry on fictional entities). One way to counter this objection and to motivate the fiction view of models is to point to the view’s heuristic power. In this vein Frigg (2010b) identifies five specific issues that an ontology of models has to address and then notes that these issues arise in very similar ways in the discussion about fiction (the issues are the identity conditions, property attribution, the semantics of comparative statements, truth conditions, and the epistemology of imagined objects). Likening models to fiction then has heuristic value because there is a rich literature on fiction that offers a number of solutions to these issues.

Only a small portion of the options available in the extensive literature on fictions have actually been explored in the context of scientific models. Contessa (2010) formulates what he calls the “dualist account”, according to which a model is an abstract object that stands for a possible concrete object. The Rutherford model of the atom, for instance, is an abstract object that acts as a stand-in for one of the possible systems that contain an electron orbiting around a nucleus in a well-defined orbit. Barberousse and Ludwig (2009) and Frigg (2010b) take a different route and develop an account of models as fictions based on Walton’s (1990) pretense theory of fiction. According to this view the sentences of a passage of text introducing a model should be seen as a prop in a game of make-believe, and the model is the product of an act of pretense. This is an antirealist position in that it takes talk of model “objects” to be figures of speech because ultimately there are no model objects—models only live in scientists’ imaginations. Salis (forthcoming) reformulates this view to become what she calls the “new fiction view of models”. The core difference lies in the fact that what is considered as the model are the model descriptions and their content rather than the imaginings that they prescribe. This is a realist view of models, because descriptions exist.

The fiction view is not without critics. Giere (2009), Magnani (2012), Pincock (2012), Portides (2014), and Teller (2009) reject the fiction approach and argue, in different ways, that models should not be regarded as fictions. Weisberg (2013) argues for a middle position which sees fictions as playing a heuristic role but denies that they should be regarded as forming part of a scientific model. The common core of these criticisms is that the fiction view misconstrues the epistemic standing of models. To call something a fiction, so the charge goes, is tantamount to saying that it is false, and it is unjustified to call an entire model a fiction—and thereby claim that it fails to capture how the world is—just because the model involves certain false assumptions or fictional elements. In other words, a representation isn’t automatically counted as fiction just because it has some inaccuracies. Proponents of the fiction view agree with this point but deny that the notion of fiction should be analyzed in terms of falsity. What makes a work a fiction is not its falsity (or some ratio of false to true claims): neither is everything that is said in a novel untrue (Tolstoy’s War and Peace contains many true statements about Napoleon’s Franco-Russian War), nor does every text containing false claims qualify as fiction (false news reports are just that, they are not fictions). The defining feature of a fiction is that readers are supposed to imagine the events and characters described, not that they are false (Frigg 2010a; Salis forthcoming).

Giere (1988) advocated the view that “non-physical” models are abstract entities. However, there is little agreement on the nature of abstract objects, and Hale (1988: 86–87) lists no fewer than twelve different possible characterizations (for a review of the available options, see the entry on abstract objects). In recent publications, Thomasson (2020) and Thomson-Jones (2020) develop what they call an “artifactualist view” of models, which is based on Thomasson’s (1999) theory of abstract artifacts. This view agrees with the pretense theory that the content of a text that introduces a fictional character or a model should be understood as occurring in pretense, but at the same time insists that in producing such descriptions authors create abstract cultural artifacts that then exist independently of either the author or the readers. Artifactualism agrees with Platonism that abstract objects exist, but insists, contra Platonism, that abstract objects are brought into existence through a creative act and are not eternal. This allows the artifactualist to preserve the advantages of pretense theory while at the same time holding the realist view that fictional characters and models actually exist.

2.3 Set-theoretic structures

An influential point of view takes models to be set-theoretic structures. This position can be traced back to Suppes (1960) and is now, with slight variants, held by most proponents of the so-called semantic view of theories (for a discussion of this view, see the entry on the structure of scientific theories). There are differences between the versions of the semantic view, but with the exception of Giere (1988) all versions agree that models are structures of one sort or another (Da Costa and French 2000).

This view of models has been criticized on various grounds. One pervasive criticism is that many types of models that play an important role in science are not structures and cannot be accommodated within the structuralist view of models, which can neither account for how these models are constructed nor for how they work in the context of investigation (Cartwright 1999; Downes 1992; Morrison 1999). Examples of such models are interpretative models and mediating models, discussed later in Section 4.2. Another charge leveled against the set-theoretic approach is that set-theoretic structures by themselves cannot be representational models—at least if that requires them to share some structure with the target—because the ascription of a structure to a target system which forms part of the physical world relies on a substantive (non-structural) description of the target, which goes beyond what the structuralist approach can afford (Nguyen and Frigg forthcoming).

2.4 Descriptions and equations

A time-honored position has it that a model is a stylized description of a target system. It has been argued that this is what scientists display in papers and textbooks when they present a model (Achinstein 1968; Black 1962). This view has not been subject to explicit criticism. However, some of the criticisms that have been marshaled against the so-called syntactic view of theories equally threaten a linguistic understanding of models (for a discussion of this view, see the entry on the structure of scientific theories). First, a standard criticism of the syntactic view is that by associating a theory with a particular formulation, the view misconstrues theory identity because any change in the formulation results in a new theory (Suppe 2000). A view that associates models with descriptions would seem to be open to the same criticism. Second, models and descriptions have different properties: the Newtonian model of the solar system consists of orbiting spheres, but it makes no sense to say this about its description. Conversely, descriptions have properties that models do not have: a description can be written in English and consist of 517 words, but the same cannot be said of a model. One way around these difficulties is to associate the model with the content of a description rather than with the description itself. For a discussion of a position on models that builds on the content of a description, see Salis (forthcoming).

A contemporary version of descriptivism is Levy’s (2012, 2015) and Toon’s (2012) so-called direct-representation view. This view shares with the fiction view of models (Section 2.2) the reliance on Walton’s pretense theory, but uses it in a different way. The main difference is that the views discussed earlier see modeling as introducing a vehicle of representation, the model, that is distinct from the target, and they see the problem as elucidating what kind of thing the model is. On the direct-representation view there are no models distinct from the target; there are only model-descriptions and targets, with no models in-between them. Modeling, on this view, consists in providing an imaginative description of real things. A model-description prescribes imaginings about the real system; the description of the ideal pendulum, for instance, prescribes that model-users imagine the real pendulum’s string as massless and its bob as a point mass. This approach avoids the above problems because the identity conditions for models are given by the conditions for games of make-believe (and not by the syntax of a description) and property ascriptions take place in pretense. There are, however, questions about how this account deals with models that have no target (like models of the ether or four-sex populations), and about how models thus understood deal with idealizations. For a discussion of these points, see Frigg and Nguyen (2016), Poznic (2016), and Salis (forthcoming).

A closely related approach sees models as equations. This is a version of the view that models are descriptions, because equations are syntactic items that describe a mathematical structure. The issues that this view faces are similar to the ones we have already encountered: First, one can describe the same situation using different kinds of coordinates and as a result obtain different equations but without thereby also obtaining a different model. Second, the model and the equation have different properties. A pendulum contains a massless string, but the equation describing its motion does not; and an equation may be inhomogeneous, but the system it describes is not. It is an open question whether these issues can be avoided by appeal to a pretense account.
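The first point can be illustrated with the ideal pendulum: choosing the angle or the arc length as the variable yields syntactically different equations (the standard textbook forms are shown below, given only as an illustration), yet the change of variable does not produce a new model.

```latex
\ddot{\theta} + \frac{g}{\ell}\sin\theta = 0
\qquad\text{and, with arc length } s = \ell\theta,\qquad
\ddot{s} + g\sin\!\left(\frac{s}{\ell}\right) = 0
```

Identifying the model with either equation would force the unwelcome conclusion that rewriting the description in a new variable creates a different model.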

3. Epistemology: The Cognitive Functions of Models

One of the main reasons why models play such an important role in science is that they perform a number of cognitive functions. For example, models are vehicles for learning about the world. Significant parts of scientific investigation are carried out on models rather than on reality itself because by studying a model we can discover features of, and ascertain facts about, the system the model stands for: models allow for “surrogative reasoning” (Swoyer 1991). For instance, we study the nature of the hydrogen atom, the dynamics of a population, or the behavior of a polymer by studying their respective models. This cognitive function of models has been widely acknowledged in the literature, and some even suggest that models give rise to a new style of reasoning, “model-based reasoning”, according to which “inferences are made by means of creating models and manipulating, adapting, and evaluating them” (Nersessian 2010: 12; see also Magnani, Nersessian, and Thagard 1999; Magnani and Nersessian 2002; and Magnani and Casadio 2016).

3.1 Learning about models

Learning about a model happens in two places: in the construction of the model and in its manipulation (Morgan 1999). There are no fixed rules or recipes for model building and so the very activity of figuring out what fits together, and how, affords an opportunity to learn about the model. Once the model is built, we do not learn about its properties by looking at it; we have to use and manipulate the model in order to elicit its secrets.

Depending on what kind of model we are dealing with, building and manipulating a model amount to different activities demanding different methodologies. Material models seem to be straightforward because they are used in common experimental contexts (e.g., we put the model of a car in the wind tunnel and measure its air resistance). Hence, as far as learning about the model is concerned, material models do not give rise to questions that go beyond questions concerning experimentation more generally.

Not so with fictional and abstract models. What constraints are there to the construction of fictional and abstract models, and how do we manipulate them? A natural response seems to be that we do this by performing a thought experiment. Different authors (e.g., Brown 1991; Gendler 2000; Norton 1991; Reiss 2003; Sorensen 1992) have explored this line of argument, but they have reached very different and often conflicting conclusions about how thought experiments are performed and what the status of their outcomes is (for details, see the entry on thought experiments).

An important class of models is computational in nature. For some mathematical models it is possible to derive results or solve the equations analytically. But quite often this is not the case. It is at this point that computers have a great impact, because they allow us to solve problems that are otherwise intractable. Hence, computational methods provide us with knowledge about (the consequences of) a model where analytical methods remain silent. Many parts of current research in both the natural and social sciences rely on computer simulations, which help scientists to explore the consequences of models that cannot be investigated otherwise. The formation and development of stars and galaxies, the dynamics of high-energy heavy-ion reactions, the evolution of life, outbreaks of wars, the progression of an economy, moral behavior, and the consequences of decision procedures in an organization are explored with computer simulations, to mention only a few examples.

Computer simulations are also heuristically important. They can suggest new theories, models, and hypotheses, for example, based on a systematic exploration of a model’s parameter space (Hartmann 1996). But computer simulations also bear methodological perils. For example, they may provide misleading results because, due to the discrete nature of the calculations carried out on a digital computer, they only allow for the exploration of a part of the full parameter space, and this subspace need not reflect every important feature of the model. The severity of this problem is somewhat mitigated by the increasing power of modern computers. But the availability of more computational power can also have adverse effects: it may encourage scientists to swiftly come up with increasingly complex but conceptually premature models, involving poorly understood assumptions or mechanisms and too many additional adjustable parameters (for a discussion of a related problem in the social sciences, see Braun and Saam 2015: Ch. 3). This can lead to an increase in empirical adequacy—which may be welcome for certain forecasting tasks—but not necessarily to a better understanding of the underlying mechanisms. As a result, the use of computer simulations can change the weight we assign to the various goals of science. Finally, the availability of computer power may seduce scientists into making calculations that do not have the degree of trustworthiness one would expect them to have. This happens, for instance, when computers are used to propagate probability distributions forward in time, which can turn out to be misleading (see Frigg et al. 2014). So it is important not to be carried away by the means that new powerful computers offer and lose sight of the actual goals of research. For a discussion of further issues in connection with computer simulations, we refer the reader to the entry on computer simulations in science.
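The kind of parameter-space exploration just described can be sketched with a deliberately simple stand-in model, the logistic map. The sweep below (the tolerance and parameter grid are illustrative choices made for this example, not any particular study's method) classifies parameter values by whether the long-run dynamics settle to a fixed point:

```python
# Sweep the growth parameter r of the logistic map x -> r*x*(1-x)
# and record whether the long-run behavior settles to a fixed point.
def logistic_orbit(r, x0=0.2, burn_in=500, n=50):
    x = x0
    for _ in range(burn_in):      # discard transient behavior
        x = r * x * (1 - x)
    orbit = []
    for _ in range(n):            # record the post-transient orbit
        x = r * x * (1 - x)
        orbit.append(x)
    return orbit

def settles_to_fixed_point(r, tol=1e-6):
    orbit = logistic_orbit(r)
    return max(orbit) - min(orbit) < tol

# Systematic sweep over part of the parameter space:
results = {r / 10: settles_to_fixed_point(r / 10) for r in range(25, 40)}
```

Even this toy sweep shows the methodological point made above: the grid samples only a subspace of the full parameter space, and a coarser grid could miss qualitative transitions (such as the period-doubling region) entirely.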

3.2 Learning about target systems

Once we have knowledge about the model, this knowledge has to be “translated” into knowledge about the target system. It is at this point that the representational function of models becomes important again: if a model represents, then it can instruct us about reality because (at least some of) the model’s parts or aspects have corresponding parts or aspects in the world. But if learning is connected to representation and if there are different kinds of representations (analogies, idealizations, etc.), then there are also different kinds of learning. If, for instance, we have a model we take to be a realistic depiction, the transfer of knowledge from the model to the target is accomplished in a different manner than when we deal with an analogue, or a model that involves idealizing assumptions. For a discussion of the different ways in which the representational function of models can be exploited to learn about the target, we refer the reader to the entry Scientific Representation.

3.3 Explaining with models

Some models explain. But how can they fulfill this function given that they typically involve idealizations? Do these models explain despite or because of the idealizations they involve? Does an explanatory use of models presuppose that they represent, or can non-representational models also explain? And what kind of explanation do models provide?

There is a long tradition of requiring that the explanans of a scientific explanation be true. We find this requirement in the deductive-nomological model (Hempel 1965) as well as in the more recent literature. For instance, Strevens (2008: 297) claims that “no causal account of explanation … allows nonveridical models to explain”. For further discussions, see also Colombo et al. (2015).

Authors working in this tradition deny that idealizations make a positive contribution to explanation and explore how models can explain despite being idealized. McMullin (1968, 1985) argues that a causal explanation based on an idealized model leaves out only features which are irrelevant for the respective explanatory task (see also Salmon 1984 and Piccinini and Craver 2011 for a discussion of mechanism sketches). Friedman (1974) argues that a more realistic (and hence less idealized) model explains better on the unification account. The idea is that idealizations can (at least in principle) be de-idealized (for a critical discussion of this claim in the context of the debate about scientific explanations, see Batterman 2002; Bokulich 2011; Morrison 2005, 2009; Jebeile and Kennedy 2015; and Rice 2015). Strevens (2008) argues that an explanatory causal model has to provide an accurate representation of the relevant causal relationships or processes which the model shares with the target system. The idealized assumptions of a model do not make a difference for the phenomenon under consideration and are therefore explanatorily irrelevant. In contrast, both Potochnik (2017) and Rice (2015) argue that models that explain can directly distort many difference-making causes.

According to Woodward’s (2003) theory, models are tools to find out about the causal relations that hold between certain facts or processes, and it is these relations that do the explanatory work. More specifically, explanations provide information about patterns of counterfactual dependence between the explanans and the explanandum which

enable us to see what sort of difference it would have made for the explanandum if the factors cited in the explanans had been different in various possible ways. (Woodward 2003: 11)

Accounts of causal explanation have also led to various claims about how idealized models can provide explanations, exploring to what extent idealization allows for the misrepresentation of irrelevant causal factors by the explanatory model (Elgin and Sober 2002; Strevens 2004, 2008; Potochnik 2007; Weisberg 2007, 2013). However, having the causally relevant features in common with real systems continues to play the essential role in showing how idealized models can be explanatory.

But is it really the truth of the explanans that makes the model explanatory? Other authors pursue a more radical line and argue that false models explain not only despite their falsity, but in fact because of their falsity. Cartwright (1983: 44) maintains that “the truth doesn’t explain much”. In her so-called “simulacrum account of explanation”, she suggests that we explain a phenomenon by constructing a model that fits the phenomenon into the basic framework of a grand theory (1983: Ch. 8). On this account, the model itself is the explanation we seek. This squares well with basic scientific intuitions, but it leaves us with the question of what notion of explanation is at work (see also Elgin and Sober 2002) and of what explanatory function idealizations play in model explanations (Rice 2018, 2019). Wimsatt (2007: Ch. 6) stresses the role of false models as means to arrive at true theories. Batterman and Rice (2014) argue that models explain because the details that characterize specific systems do not matter for the explanation. Bokulich (2008, 2009, 2011, 2012) pursues a similar line of reasoning and sees the explanatory power of models as being closely related to their fictional nature. Bokulich (2009) and Kennedy (2012) present non-representational accounts of model explanation (see also Jebeile and Kennedy 2015). Reiss (2012) and Woody (2004) provide general discussions of the relationship between representation and explanation.

3.4 Understanding with models

Many authors have pointed out that understanding is one of the central goals of science (see, for instance, de Regt 2017; Elgin 2017; Khalifa 2017; Potochnik 2017). In some cases, we want to understand a certain phenomenon (e.g., why the sky is blue); in other cases, we want to understand a specific scientific theory (e.g., quantum mechanics) that accounts for a phenomenon in question. Sometimes we gain understanding of a phenomenon by understanding the corresponding theory or model. For instance, Maxwell’s theory of electromagnetism helps us understand why the sky is blue. It is, however, controversial whether understanding a phenomenon always presupposes an understanding of the corresponding theory (de Regt 2009: 26).

Although there are many different ways of gaining understanding, models and the activity of scientific modeling are of particular importance here (de Regt et al. 2009; Morrison 2009; Potochnik 2017; Rice 2016). This insight can be traced back at least to Lord Kelvin who, in his famous 1884 Baltimore Lectures on Molecular Dynamics and the Wave Theory of Light, maintained that “the test of ‘Do we or do we not understand a particular subject in physics?’ is ‘Can we make a mechanical model of it?’” (Kelvin 1884 [1987: 111]; see also Bailer-Jones 2009: Ch. 2; and de Regt 2017: Ch. 6).

But why do models play such a crucial role in the understanding of a subject matter? Elgin (2017) argues that this is not despite, but because of, models being literally false. She views false models as “felicitous falsehoods” that occupy center stage in the epistemology of science, and mentions the ideal-gas model in statistical mechanics and the Hardy–Weinberg model in genetics as examples of literally false models that are central to their respective disciplines. Understanding is holistic and it concerns a topic, a discipline, or a subject matter, rather than isolated claims or facts. Gaining understanding of a context means to have

an epistemic commitment to a comprehensive, systematically linked body of information that is grounded in fact, is duly responsive to reasons or evidence, and enables nontrivial inference, argument, and perhaps action regarding the topic the information pertains to (Elgin 2017: 44)

and models can play a crucial role in the pursuit of these epistemic commitments. For a discussion of Elgin’s account of models and understanding, see Baumberger and Brun (2017) and Frigg and Nguyen (forthcoming).
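To see how compact such a “felicitous falsehood” can be, consider the Hardy–Weinberg model mentioned above. Under its literally false idealizations (an infinite, randomly mating population with no selection, mutation, or migration), allele frequencies p and q = 1 − p yield genotype frequencies p², 2pq, and q², and the allele frequencies remain constant across generations. A minimal sketch:

```python
def hardy_weinberg(p):
    """Genotype frequencies (AA, Aa, aa) for allele frequency p, under
    the model's idealizations: infinite population, random mating, no
    selection, mutation, or migration."""
    q = 1 - p
    return p**2, 2 * p * q, q**2

def next_allele_freq(p):
    """Allele frequency of A in the next generation; the model predicts
    it is unchanged (Hardy-Weinberg equilibrium)."""
    aa, het, _ = hardy_weinberg(p)
    return aa + het / 2   # each heterozygote carries one A allele
```

No real population satisfies the model's assumptions, which is precisely Elgin's point: the falsehoods are what make the equilibrium result simple enough to organize understanding of real populations' deviations from it.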

Elgin (2017), Lipton (2009), and Rice (2016) all argue that models can be used to understand independently of their ability to provide an explanation. Other authors, among them Strevens (2008, 2013), argue that understanding presupposes a scientific explanation and that

an individual has scientific understanding of a phenomenon just in case they grasp a correct scientific explanation of that phenomenon. (Strevens 2013: 510; see, however, Sullivan and Khalifa 2019)

On this account, understanding consists in a particular form of epistemic access an individual scientist has to an explanation. For Strevens this aspect is “grasping”, while for de Regt (2017) it is “intelligibility”. It is important to note that both Strevens and de Regt hold that such “subjective” aspects are a worthy topic for investigations in the philosophy of science. This contrasts with the traditional view (see, e.g., Hempel 1965) that delegates them to the realm of psychology. See Friedman (1974), Trout (2002), and Reutlinger et al. (2018) for further discussions of understanding.

3.5 Other cognitive functions

Besides the functions already mentioned, it has been emphasized variously that models perform a number of other cognitive functions. Knuuttila (2005, 2011) argues that the epistemic value of models is not limited to their representational function, and develops an account that views models as epistemic artifacts which allow us to gather knowledge in diverse ways. Nersessian (1999, 2010) stresses the role of analogue models in concept-formation and other cognitive processes. Hartmann (1995) and Leplin (1980) discuss models as tools for theory construction and emphasize their heuristic and pedagogical value. Epstein (2008) lists a number of specific functions of models in the social sciences. Peschard (2011) investigates the way in which models may be used to construct other models and generate new target systems. And Isaac (2013) discusses non-explanatory uses of models which do not rely on their representational capacities.

4. Models and Theory

An important question concerns the relation between models and theories. There is a full spectrum of positions ranging from models being subordinate to theories to models being independent of theories.

4.1 Models as subsidiaries to theory

To discuss the relation between models and theories in science it is helpful to briefly recapitulate the notions of a model and of a theory in logic. A theory is taken to be a (usually deductively closed) set of sentences in a formal language. A model is a structure (in the sense introduced in Section 2.3) that makes all sentences of a theory true when the theory’s symbols are interpreted as referring to objects, relations, or functions of the structure. The structure is a model of the theory in the sense that it is correctly described by the theory (see Bell and Machover 1977 or Hodges 1997 for details). Logical models are sometimes also referred to as “models of theory” to indicate that they are interpretations of an abstract formal system.
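The logician’s notion can be made concrete with a toy example (written in Python purely as an illustration; the two-axiom “theory” of strict partial orders is chosen for brevity). A finite structure, a domain together with an interpretation of the theory’s relation symbol, is a model of the theory just in case it makes both axioms true:

```python
from itertools import product

def is_model(domain, R):
    """Check whether the structure (domain, R) satisfies the two axioms
    of the toy theory of strict partial orders. R is the set of pairs
    interpreting the theory's binary relation symbol."""
    transitive = all(
        (x, z) in R
        for x, y, z in product(domain, repeat=3)
        if (x, y) in R and (y, z) in R
    )
    irreflexive = all((x, x) not in R for x in domain)
    return transitive and irreflexive

# The structure ({1,2,3}, <) makes both axioms true: it is a model.
ordered = is_model({1, 2, 3}, {(1, 2), (1, 3), (2, 3)})
# A structure with a reflexive loop falsifies irreflexivity: not a model.
looped = is_model({1, 2}, {(1, 1), (1, 2)})
```

The same theory has many non-isomorphic models, which parallels the point made below that which model of a theory we choose depends on our aims and background knowledge.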

Models in science sometimes carry over from logic the idea of being the interpretation of an abstract calculus (Hesse 1967). This is salient in physics, where general laws—such as Newton’s equation of motion—lie at the heart of a theory. These laws are applied to a particular system—e.g., a pendulum—by choosing a special force function, making assumptions about the mass distribution of the pendulum, and so on. The resulting model then is an interpretation (or realization) of the general law.

It is important to keep the notions of a logical and a representational model separate (Thomson-Jones 2006): these are distinct concepts. Something can be a logical model without being a representational model, and vice versa. This, however, does not mean that something cannot be a model in both senses at once. In fact, as Hesse (1967) points out, many models in science are both logical and representational models. Newton’s model of planetary motion is a case in point: the model, consisting of two homogeneous perfect spheres located in otherwise empty space that attract each other gravitationally, is simultaneously a logical model (because it makes the axioms of Newtonian mechanics true when they are interpreted as referring to the model) and a representational model (because it represents the real sun and earth).

There are two main conceptions of scientific theories, the so-called syntactic view of theories and the so-called semantic view of theories (see the entry on the structure of scientific theories). On both conceptions models play a subsidiary role to theories, albeit in very different ways. The syntactic view of theories (see entry section on the syntactic view) retains the logical notions of a model and a theory. It construes a theory as a set of sentences in an axiomatized logical system, and a model as an alternative interpretation of a certain calculus (Braithwaite 1953; Campbell 1920 [1957]; Nagel 1961; Spector 1965). If, for instance, we take the mathematics used in the kinetic theory of gases and reinterpret the terms of this calculus in a way that makes them refer to billiard balls, the billiard balls are a model of the kinetic theory of gases in the sense that all sentences of the theory come out true. The model is meant to be something that we are familiar with, and it serves the purpose of making an abstract formal calculus more palpable. A given theory can have different models, and which model we choose depends both on our aims and our background knowledge. Proponents of the syntactic view disagree about the importance of models. Carnap and Hempel thought that models only serve a pedagogic or aesthetic purpose and are ultimately dispensable because all relevant information is contained in the theory (Carnap 1938; Hempel 1965; see also Bailer-Jones 1999). Nagel (1961) and Braithwaite (1953), on the other hand, emphasize the heuristic role of models, and Schaffner (1969) submits that theoretical terms get at least part of their meaning from models.

The semantic view of theories (see entry section on the semantic view) dispenses with sentences in an axiomatized logical system and construes a theory as a family of models. On this view, a theory literally is a class, cluster, or family of models—models are the building blocks of which scientific theories are made up. Different versions of the semantic view work with different notions of a model, but, as noted in Section 2.3, in the semantic view models are mostly construed as set-theoretic structures. For a discussion of the different options, we refer the reader to the relevant entry in this encyclopedia (linked at the beginning of this paragraph).

4.2 Models as independent from theories

In both the syntactic and the semantic view of theories models are seen as subordinate to theory and as playing no role outside the context of a theory. This vision of models has been challenged in a number of ways, with authors pointing out that models enjoy various degrees of freedom from theory and function autonomously in many contexts. Independence can take many forms, and large parts of the literature on models are concerned with investigating various forms of independence.

Models as completely independent of theory. The most radical departure from a theory-centered analysis of models is the realization that there are models that are completely independent from any theory. An example of such a model is the Lotka–Volterra model. The model describes the interaction of two populations: a population of predators and one of prey animals (Weisberg 2013). The model was constructed using only relatively commonsensical assumptions about predators and prey and the mathematics of differential equations. There was no appeal to a theory of predator–prey interactions or a theory of population growth, and the model is independent of theories about its subject matter. If a model is constructed in a domain where no theory is available, then the model is sometimes referred to as a “substitute model” (Groenewold 1961), because the model substitutes for a theory.
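The Lotka–Volterra model itself is just a pair of coupled differential equations: dx/dt = αx − βxy for the prey and dy/dt = δxy − γy for the predators. A rough forward-Euler simulation (the parameter values and step size below are arbitrary illustrations, not values from Lotka's or Volterra's work) exhibits the characteristic predator–prey oscillation:

```python
def lotka_volterra(prey, pred, alpha=1.0, beta=0.1, delta=0.075,
                   gamma=1.5, dt=0.001, steps=20000):
    """Integrate dx/dt = alpha*x - beta*x*y, dy/dt = delta*x*y - gamma*y
    with a crude forward-Euler scheme; returns the trajectory as a list
    of (prey, predator) pairs."""
    traj = [(prey, pred)]
    for _ in range(steps):
        dx = (alpha * prey - beta * prey * pred) * dt
        dy = (delta * prey * pred - gamma * pred) * dt
        prey, pred = prey + dx, pred + dy
        traj.append((prey, pred))
    return traj

traj = lotka_volterra(prey=10.0, pred=5.0)
# Both populations stay positive and cycle rather than settling down.
```

Note how the construction matches the text's description: each term encodes a commonsensical assumption (prey grow, predation removes prey and feeds predators, predators die off), with no background theory of populations in sight.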

Models as a means to explore theory. Models can also be used to explore theories (Morgan and Morrison 1999). An obvious way in which this can happen is when a model is a logical model of a theory (see Section 4.1). A logical model is a set of objects and properties that make a formal sentence true, and so one can see in the model how the axioms of the theory play out in a particular setting and what kinds of behavior they dictate. But not all models that are used to explore theories are logical models, and models can represent features of theories in other ways. As an example, consider chaos theory. The equations of non-linear systems, such as those describing the three-body problem, have solutions that are too complex to study with paper-and-pencil methods, and even computer simulations are limited in various ways. Abstract considerations about the qualitative behavior of solutions show that there is a mechanism that has been dubbed “stretching and folding” (see the entry Chaos). To obtain an idea of the complexity of the dynamics exhibiting stretching and folding, Smale proposed to study a simple model of the flow—now known as the “horseshoe map” (Tabor 1989)—which provides important insights into the nature of stretching and folding. Other examples of models of that kind are the Kac ring model that is used to study equilibrium properties of systems in statistical mechanics (Lavis 2008) and Norton’s dome in Newtonian mechanics (Norton 2003).

Models as complements of theories. A theory may be incompletely specified in the sense that it only imposes certain general constraints but remains silent about the details of concrete situations, which are provided by a model (Redhead 1980). A special case of this situation is when a qualitative theory is known and the model introduces quantitative measures (Apostel 1961). Redhead’s example of a theory that is underdetermined in this way is axiomatic quantum field theory, which only imposes certain general constraints on quantum fields but does not provide an account of particular fields. Harré (2004) notes that models can complement theories by providing mechanisms for processes that are left unspecified in the theory even though they are responsible for bringing about the observed phenomena.

Theories may be too complicated to handle. In such cases a model can complement a theory by providing a simplified version of the theoretical scenario that allows for a solution. Quantum chromodynamics, for instance, cannot easily be used to investigate the physics of an atomic nucleus even though it is the relevant fundamental theory. To get around this difficulty, physicists construct tractable phenomenological models (such as the MIT bag model) which effectively describe the relevant degrees of freedom of the system under consideration (Hartmann 1999, 2001). The advantage of these models is that they yield results where theories remain silent. Their drawback is that it is often not clear how to understand the relationship between the model and the theory, as the two are, strictly speaking, contradictory.

Models as preliminary theories. The notion of a model as a substitute for a theory is closely related to the notion of a developmental model. This term was coined by Leplin (1980), who pointed out how useful models were in the development of early quantum theory, and it is now used as an umbrella notion covering cases in which models are some sort of a preliminary exercise to theory.

Also closely related is the notion of a probing model (or “study model”). Models of this kind do not perform a representational function and are not expected to instruct us about anything beyond the model itself. The purpose of these models is to test new theoretical tools that are used later on to build representational models. In field theory, for instance, the so-called φ4-model was studied extensively, not because it was believed to represent anything real, but because it served several heuristic functions: the simplicity of the φ4-model allowed physicists to “get a feeling” for what quantum field theories are like and to extract some general features that this simple model shared with more complicated ones. Physicists could study complicated techniques such as renormalization in a simple setting, and it was possible to get acquainted with important mechanisms—in this case symmetry-breaking—that could later be used in different contexts (Hartmann 1995). This is true not only for physics. As Wimsatt (1987, 2007) points out, a false model in genetics can perform many useful functions, among them the following: the false model can help answer questions about more realistic models, provide an arena for answering questions about properties of more complex models, “factor out” phenomena that would not otherwise be seen, serve as a limiting case of a more general model (or two false models may define the extremes of a continuum of cases on which the real case is supposed to lie), or lead to the identification of relevant variables and the estimation of their values.

Interpretative models. Cartwright (1983, 1999) argues that models do not only aid the application of theories that are somehow incomplete; she claims that models are also involved whenever a theory with an overarching mathematical structure is applied. The main theories in physics—classical mechanics, electrodynamics, quantum mechanics, and so on—fall into this category. Theories of that kind are formulated in terms of abstract concepts that need to be concretized for the theory to provide a description of the target system, and in concretizing the relevant concepts, idealized objects and processes are introduced. For instance, when applying classical mechanics, the abstract concept of force has to be replaced with a concrete force such as gravity. To obtain tractable equations, this procedure has to be applied to a simplified scenario, for instance that of two perfectly spherical and homogeneous planets in otherwise empty space, rather than to reality in its full complexity. The result is an interpretative model, which grounds the application of mathematical theories to real-world targets. Such models are independent from theory in that the theory does not determine their form, and yet they are necessary for the application of the theory to a concrete problem.
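To make the concretization step concrete, here is a minimal sketch (our own illustration, not Cartwright’s) in which the abstract force concept of classical mechanics is filled in with a specific force law, Newtonian gravity, for the idealized scenario of two homogeneous spheres in otherwise empty space. The masses, separation, and function name are hypothetical and chosen purely for illustration.

```python
# Illustrative sketch (not from the entry): concretizing the abstract
# force concept of classical mechanics as Newtonian gravity between two
# perfectly spherical, homogeneous planets in otherwise empty space.
# The idealization lets each sphere be treated as a point mass at its center.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def gravitational_force(m1, m2, r):
    """Magnitude (in newtons) of the mutual attraction F = G*m1*m2/r**2."""
    return G * m1 * m2 / r**2

# Hypothetical masses and separation, chosen only for illustration.
force = gravitational_force(m1=6.0e24, m2=7.3e22, r=3.8e8)
```

The idealization does real work here: only because the planets are assumed perfectly spherical and homogeneous may each be treated as a point mass, which is what makes the simple inverse-square expression applicable at all.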

Models as mediators. The relation between models and theories can be complicated and disorderly. The contributors to a programmatic collection of essays edited by Morgan and Morrison (1999) rally around the idea that models are instruments that mediate between theories and the world. Models are “autonomous agents” in that they are independent from both theories and their target systems, and it is this independence that allows them to mediate between the two. Theories do not provide us with algorithms for the construction of a model; they are not “vending machines” into which one can insert a problem and a model pops out (Cartwright 1999). The construction of a model often requires detailed knowledge about materials, approximation schemes, and the setup, and these are not provided by the corresponding theory. Furthermore, the inner workings of a model are often driven by a number of different theories working cooperatively. In contemporary climate modeling, for instance, elements of different theories—among them fluid dynamics, thermodynamics, electromagnetism—are put to work cooperatively. What delivers the results is not the stringent application of one theory, but the voices of different theories when put to use in chorus with each other in one model.

In complex cases like the study of a laser system or the global climate, models and theories can get so entangled that it becomes unclear where a line between the two should be drawn: where does the model end and the theory begin? This is not only a problem for philosophical analysis; it also arises in scientific practice. Bailer-Jones (2002) interviewed a group of physicists about their understanding of models and their relation to theories, and reports widely diverging views: (i) there is no substantive difference between model and theory; (ii) models become theories when their degree of confirmation increases; (iii) models contain simplifications and omissions, while theories are accurate and complete; (iv) theories are more general than models, and modeling is about applying general theories to specific cases. The first suggestion seems too radical to do justice to many aspects of practice, where a distinction between models and theories is clearly made. The second view is in line with common parlance, where the terms “model” and “theory” are sometimes used to express someone’s attitude towards a particular hypothesis. The phrase “it’s just a model” indicates that the hypothesis at stake is asserted only tentatively or is even known to be false, while something is awarded the label “theory” once it has acquired some degree of general acceptance. However, this use of “model” is different from the uses we have seen in Sections 1 to 3 and is therefore of no help if we aim to understand the relation between scientific models and theories (and, incidentally, one can equally dismiss speculative claims as being “just a theory”). The third proposal is correct in associating models with idealizations and simplifications, but it overshoots by restricting these to models; in fact, theories too can contain idealizations and simplifications. The fourth view seems closely aligned with interpretative models and the idea that models are mediators, but generality is a matter of degree and hence does not provide a clear-cut criterion for distinguishing between theories and models.

5. Models and Other Debates in the Philosophy of Science

The debate over scientific models has important repercussions for other issues in the philosophy of science (for a historical account of the philosophical discussion about models, see Bailer-Jones 1999). Traditionally, the debates over, say, scientific realism, reductionism, and laws of nature were couched in terms of theories, because theories were seen as the main carriers of scientific knowledge. Once models are acknowledged as occupying an important place in the edifice of science, these issues have to be reconsidered with a focus on models. The question is whether, and if so how, discussions of these issues change when we shift focus from theories to models. Up to now, no comprehensive model-based account of any of these issues has emerged, but models have left important traces in the discussions of these topics.

5.1 Models, realism, and laws of nature

As we have seen in Section 1, models typically provide a distorted representation of their targets. If one sees science as primarily model-based, this could be taken to suggest an antirealist interpretation of science. Realists, however, deny that the presence of idealizations in models renders a realist approach to science impossible and point out that a good model, while not literally true, is usually at least approximately true, and/or that it can be improved by de-idealization (Laymon 1985; McMullin 1985; Nowak 1979; Brzezinski and Nowak 1992).

Apart from the usual worries about the elusiveness of the notion of approximate truth (for a discussion, see the entry on truthlikeness), antirealists have taken issue with this reply for two (related) reasons. First, as Cartwright (1989) points out, there is no reason to assume that one can always improve a model by adding de-idealizing corrections. Second, it seems that de-idealization is not in accordance with scientific practice because it is unusual that scientists invest work in repeatedly de-idealizing an existing model. Rather, they shift to a different modeling framework once the adjustments to be made get too involved (Hartmann 1998). The various models of the atomic nucleus are a case in point: once it was realized that shell effects are important to understand various subatomic phenomena, the (collective) liquid-drop model was put aside and the (single-particle) shell model was developed to account for the corresponding findings. A further difficulty with de-idealization is that most idealizations are not “controlled”. For example, it is not clear in what way one could de-idealize the MIT bag model to eventually arrive at quantum chromodynamics, the supposedly correct underlying theory.

A further antirealist argument, the “incompatible-models argument”, takes as its starting point the observation that scientists often successfully use several incompatible models of one and the same target system for predictive purposes (Morrison 2000). These models seemingly contradict each other, as they ascribe different properties to the same target system. In nuclear physics, for instance, the liquid-drop model explores the analogy of the atomic nucleus with a (charged) fluid drop, while the shell model describes nuclear properties in terms of the properties of protons and neutrons, the constituents of an atomic nucleus. This practice appears to cause a problem for scientific realism: Realists typically hold that there is a close connection between the predictive success of a theory and its being at least approximately true. But if several models of the same system are predictively successful and if these models are mutually inconsistent, then it is difficult to maintain that they are all approximately true.

Realists can react to this argument in various ways. First, they can challenge the claim that the models in question are indeed predictively successful. If the models are not good predictors, then the argument is blocked. Second, they can defend a version of “perspectival realism” (Giere 2006; Massimi 2017; Rueger 2005). Proponents of this position (which is sometimes also called “perspectivism”) situate it somewhere between “standard” scientific realism and antirealism, and where exactly the right middle position lies is the subject matter of active debate (Massimi 2018a,b; Saatsi 2016; Teller 2018; and the contributions to Massimi and McCoy 2019). Third, realists can deny that there is a problem in the first place, because scientific models, which are always idealized and therefore strictly speaking false, are just the wrong vehicle to make a point about realism (which should be discussed in terms of theories).

A particular focal point of the realism debate is laws of nature, where the questions arise of what laws are and whether they are truthfully reflected in our scientific representations. According to the two currently dominant accounts, the best-systems approach and the necessitarian approach, laws of nature are understood to be universal in scope, meaning that they apply to everything that there is in the world (for discussion of laws, see the entry on laws of nature). This take on laws does not seem to sit well with a view that places models at the center of scientific research. What role do general laws play in science if it is models that represent what is happening in the world? And how are models and laws related?

One possible response to these questions is to argue that laws of nature govern entities and processes in a model rather than in the world. Fundamental laws, on this approach, do not state facts about the world but hold true of entities and processes in the model. This view has been advocated in different variants: Cartwright (1983) argues that all laws are ceteris paribus laws. Cartwright (1999) makes use of “capacities” (which she considers to be prior to laws) and introduces the notion of a “nomological machine”. This is

a fixed (enough) arrangement of components, or factors, with stable (enough) capacities that in the right sort of stable (enough) environment will, with repeated operation, give rise to the kind of regular behavior that we represent in our scientific laws. (1999: 50; see also the entry on ceteris paribus laws)

Giere (1999) argues that the laws of a theory are better thought of, not as encoding general truths about the world, but rather as open-ended statements that can be filled in various ways in the process of building more specific scientific models. Similar positions have also been defended by Teller (2001) and van Fraassen (1989).

5.2 Models and reductionism

The multiple-models problem mentioned in Section 5.1 also raises the question of how different models are related. Evidently, multiple models for the same target system do not generally stand in a deductive relationship, as they often contradict each other. Some (Cartwright 1999; Hacking 1983) have suggested a picture of science according to which there are no systematic relations that hold between different models. Some models are tied together because they represent the same target system, but this does not imply that they enter into any further relationships (deductive or otherwise). We are confronted with a patchwork of models, all of which hold ceteris paribus in their specific domains of applicability.

Some argue that this picture is at least partially incorrect because there are various interesting relations that hold between different models or theories. These relations range from thoroughgoing reductive relations (Scheibe 1997, 1999, 2001: esp. Chs. V.23 and V.24) and controlled approximations, through singular limit relations (Batterman 2001 [2016]), to structural relations (Gähde 1997) and rather loose relations called “stories” (Hartmann 1999; see also Bokulich 2003; Teller 2002; and the essays collected in Part III of Hartmann et al. 2008). These suggestions have been made on the basis of case studies, and it remains to be seen whether a more general account of these relations can be given and whether a deeper justification for them can be provided, for instance, within a Bayesian framework (first steps towards a Bayesian understanding of reductive relations can be found in Dizadji-Bahmani et al. 2011; Liefke and Hartmann 2018; and Tešić 2019).

Models also figure in the debate about reduction and emergence in physics. Here, some authors argue that the modern approach to renormalization challenges Nagel’s (1961) model of reduction or the broader doctrine of reductionism (for a critical discussion, see, for instance, Batterman 2002, 2010, 2011; Morrison 2012; and Saatsi and Reutlinger 2018). Dizadji-Bahmani et al. (2010) provide a defense of the Nagel–Schaffner model of reduction, and Butterfield (2011a,b, 2014) argues that renormalization is consistent with Nagelian reduction. Palacios (2019) shows that phase transitions are compatible with reductionism, and Hartmann (2001) argues that the effective-field-theories research program is consistent with reductionism (see also Bain 2013 and Franklin forthcoming). Rosaler (2015) argues for a “local” form of reduction on which the fundamental reductive relation holds between models rather than theories, a view that is nevertheless compatible with the Nagel–Schaffner model of reduction. See also the entries on intertheory relations in physics and scientific reduction.

In the social sciences, agent-based models (ABMs) are increasingly used (Klein et al. 2018). These models show how surprisingly complex behavioral patterns at the macro-scale can emerge from a small number of simple behavioral rules for the individual agents and their interactions. This raises questions similar to those about reduction and emergence in physics mentioned above, but so far one only finds scattered remarks about reduction in the literature. See Weisberg and Muldoon (2009) and Zollman (2007) for the application of ABMs to the epistemology and the social structure of science, and Colyvan (2013) for a discussion of methodological questions raised by normative models in general.
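As an illustration of how simple individual rules can generate macro-scale patterns, the following sketch implements a bare-bones, one-dimensional Schelling-style segregation model (a toy construction of our own, not drawn from the literature cited above). Each agent follows a single rule: move to an empty site when fewer than half of its neighbors share its type. Repeated application of this rule tends to sort the population into homogeneous clusters.

```python
import random

random.seed(0)  # deterministic run for reproducibility

def neighbors(grid, i, radius=2):
    """Types of the occupied sites within `radius` of position i."""
    lo, hi = max(0, i - radius), min(len(grid), i + radius + 1)
    return [grid[j] for j in range(lo, hi) if j != i and grid[j] is not None]

def unhappy(grid, i):
    """An agent is unhappy if under half of its neighbors share its type."""
    if grid[i] is None:
        return False
    ns = neighbors(grid, i)
    return bool(ns) and sum(n == grid[i] for n in ns) / len(ns) < 0.5

def step(grid):
    """Move every currently unhappy agent to a randomly chosen empty site."""
    movers = [i for i in range(len(grid)) if unhappy(grid, i)]
    random.shuffle(movers)
    for i in movers:
        empties = [j for j in range(len(grid)) if grid[j] is None]
        if empties:
            j = random.choice(empties)
            grid[j], grid[i] = grid[i], None

# A population of two agent types ("A", "B") and empty sites (None).
grid = [random.choice(["A", "B", None]) for _ in range(60)]
for _ in range(50):
    step(grid)
```

No agent aims at segregation, and no rule mentions clusters; the macro-pattern emerges from purely local decisions. This is the feature of ABMs that invites the comparison with emergence in physics drawn above.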