All four conditions require detailed discussion, which is outside the scope of this article. In this section, I provide preliminary characterizations and guidelines for a more thorough account. Any such account should meet three desiderata: It should acknowledge that the conditions admit of degrees (graduality), allow for idealizations (idealization), and apply to all kinds of theories and theory-like representations (diversity). Moreover, I defend the claim that rightness, grasping and justification are evaluative dimensions and that commitment is a necessary condition for understanding. My strategy will be to show that even those who reject rightness, grasping and justification as necessary conditions can and should accept them as evaluative dimensions.

Rightness

The mentioned desiderata cannot be met by an account of rightness that construes the theory-world relation in terms of the model-theoretic relation of isomorphism, as suggested by proponents of the semantic view of theories (e.g., van Fraassen 1980). Isomorphism accounts fail to meet the graduality and the idealization desiderata since models are either isomorphic to their targets, or they are not. Partial isomorphism accounts (e.g., da Costa and French 2003) do better. They divide a model structure into different substructures, and claim that a model is partially isomorphic to its target when a substructure of the model is isomorphic to a substructure of the target. Partial isomorphism comes in degrees, depending on the relative size of the isomorphic and non-isomorphic substructures, and accounts at least for some idealizations. However, neither isomorphism nor partial isomorphism accounts meet the diversity desideratum. They compare the mathematical structure of a model to the mathematical structure of a representation of the target and thus only apply to mathematical models and quantitative features (Weisberg 2013, 137–142). An account in terms of truth (e.g., Kvanvig 2003) may fare better since all kinds of theories can be evaluated with respect to their truth. Moreover, a truth-based approach may account for how well a theory answers to the facts by considering how many propositions of the theory are true, how central they are, and how close the false propositions come to truths. But even if such an account can be developed (which requires, e.g., a theory of truth approximation), it will not apply to non-propositional representations if their content is not fully explicable in terms of propositions and thus fail to meet the diversity desideratum.

More promising is an account of answering-the-facts in terms of similarity. Since anything is similar to anything else in some respect, similarity needs to be restricted. The rough idea is that a theory answers to the facts constituting the target system to the degree that the system as depicted by the theory is in relevant respects similar to the target system. Which respects are relevant depends on the context and the purpose that the theory is intended to serve (Giere 2010). In many contexts, the degree of relevant similarity is a matter of how many features (processes, factors) that make a difference to the behaviour of the target system are represented, and how well (detailed, precise, comprehensive) they are represented. Relevant similarity meets the desiderata because it comes in degrees, can be used to compare idealized models, and can relate all kinds of theories and theory-like representations to their targets (Weisberg 2013, 143). To develop these ideas, one can draw on similarity accounts within the literature on scientific representation.Footnote 9 While most approaches say little about what similarity amounts to, how it depends on context and purpose, and how it can be assessed, such questions are addressed by Weisberg’s (2013, Ch. 8) similarity account of the model-world relationship, which is based on Tversky’s (1977) influential contrast account of similarity. The account starts with the idea that a “model is similar to its target […] when it shares certain highly valued features, doesn’t have many highly valued features missing, and when the target doesn’t have many significant features that the model lacks” (Weisberg 2013, 144–145). This idea is transformed into an account of the model-world relation by considering in detail where the feature sets and the weighting function come from.
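The structure of Tversky's contrast account can be made vivid with a small sketch. The following Python toy is an illustration of the general idea only, not Weisberg's actual formalism: the feature names, weights and parameter values (theta, alpha, beta) are hypothetical placeholders, whereas in Weisberg's account the feature sets and the weighting function are supplied by the scientific context and purpose.

```python
def contrast_similarity(model_feats, target_feats, weight,
                        theta=1.0, alpha=0.5, beta=0.5):
    """Tversky-style contrast score: shared features raise the score,
    features unique to either side lower it.  `weight` maps each feature
    to a salience value -- this is where context and purpose enter."""
    f = lambda feats: sum(weight.get(x, 0.0) for x in feats)
    return (theta * f(model_feats & target_feats)    # shared features
            - alpha * f(model_feats - target_feats)  # features only in the model
            - beta * f(target_feats - model_feats))  # features only in the target

# Hypothetical example: a toy climate model vs. its target system.
target = {"radiative_transfer", "ocean_circulation", "carbon_cycle"}
model = {"radiative_transfer", "ocean_circulation"}
weights = {"radiative_transfer": 1.0, "ocean_circulation": 0.8, "carbon_cycle": 0.6}

print(contrast_similarity(model, target, weights))  # 1.8 - 0.0 - 0.3 = 1.5
```

Adding the missing carbon-cycle feature to the model would raise the score to 2.4, which mirrors the intuition behind de-idealization discussed below: similarity increases as difference-making features are added.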

Most authors accept some kind of rightness condition for understanding, but even those who reject rightness as a necessary condition could acknowledge it as an evaluative dimension. Even if radically false idealized models and superseded theories can provide some understanding (de Regt 2015), the quality of the understanding can, ceteris paribus, be a function of how well the theory answers to the facts. Rightness is indeed an evaluative dimension of understanding since there are cases such that A₂'s understanding of target system S by means of theory T₂ is better than A₁'s understanding of S by means of T₁ but the only relevant difference is that T₂ answers better to the facts than T₁, where A₂ can be A₁ at a later stage and T₂ a successor of T₁.Footnote 10 De-idealizations are a case in point. In climate science, two types are particularly important. The first concerns the scope of climate processes represented in the model and can be illustrated with the transition from General Circulation Models (GCMs) to Earth System Models (ESMs). While GCMs already represent a wide range of processes of the physical climate system, ESMs additionally represent biogeochemical processes, such as the global carbon and sulphur cycles. This allows ESMs to explicitly simulate feedbacks between the changing physical climate and the biogeochemical cycles that determine greenhouse gas concentrations, which are in GCMs, for the most part, simply prescribed (Flato 2011). The second type of de-idealization concerns the level of detail and comprehensiveness with which processes are represented in a model. An example is the replacement of empirical parameterizations by sub-models that explicitly resolve the involved processes. Other things being equal, both types of de-idealization improve our understanding of climate change, but neither of them need lead to better predictions and retrodictions, at least not in the near term.
A more faithful representation of cloud processes, for example, that performs better when tested individually may even lead to a poorer performance of the model as a whole if the model is biased with respect to aerosol concentration or humidity (Baumberger et al. 2017b). Assuming the performance is still reasonably good, the understanding is improved. The example suggests that this improvement is due to the increased similarity between model and target rather than to other factors, such as an improved ability to retrodict past and predict future climate.

The claim that the goodness of someone's understanding is a function of how well her theory answers to the facts is a ceteris paribus claim. Often, the ceteris paribus clause is not fulfilled since an increase in similarity between theory and target goes along with a decreased ability to use the theory or a decreased justification. Climate models that represent an ever-increasing range of processes and explicitly resolve processes that predecessor models included via parameterization may become too complex to understand and too computationally intensive to be run on available computers. And models that represent processes (e.g., certain biogeochemical ones) that are difficult to observe or have not been systematically observed over a long period of time or over large spatial scales may be less justified than simpler models (Flato 2011, 783).

Grasping

A good starting point for an account of the grasping condition is to look at how scientists determine whether someone understands a theory. Exams play an important role here. We take exams to show whether a student has merely memorized a theory and maybe even memorized certain applications of the theory, or whether she is able to apply the theory to new examples. This suggests that knowing a theory by testimony and knowing in this way that a theory applies to certain cases is not the same as truly understanding the theory, which requires being able to make use of it. This, in turn, suggests that an agent grasps a theory to the degree to which she is able to apply it to actual and counterfactual situations (de Regt 2009; Stuart 2016; Newman 2017).

What the ability to apply a theory involves depends on the kind of theory at issue. In typical cases, it involves the ability to provide predictions and explanations in terms of the theory, given certain information about the target. However, a merely classificatory theory may give some understanding of the facts about a domain, even if it does not enable us to explain these facts. Gijsbers (2013) argues that classifying animals on the basis of their anatomical features allowed biologists of the eighteenth century to make correct predictions, which is a cognitive achievement that deserves to be called understanding, but it did not allow them to explain the features they were able to predict. Khalifa (2012) objects that for every case of understanding without explanation, there exists a correct explanation that would provide greater understanding. But even if Khalifa were right,Footnote 11 this would not show that a general account of grasping should require explanatory abilities. Even if the ability to explain turns out to be necessary for good understanding, there may be lower degrees of understanding that can do without explanation. An account of grasping that meets the diversity desideratum should not even require predictive abilities, at least not in the literal sense of predictions in which they concern future events. The historical natural sciences do not make predictions, the formal sciences and many theories in the humanities not even retrodictions, but theories from these disciplines certainly provide understanding of their subject matters. However, the ability to use a theory always implies the ability to draw consequences of the theory about aspects of the subject matter, which is often not a matter of straightforward deduction, but involves plausible reasoning.

The ability to apply mathematical models involves the ability to accurately calculate results for different initial conditions, boundary conditions and parameter values, and to interpret the results in order to solve quantitative problems about the target. If the model equations can be solved analytically, the calculations may be performed with the help of pencil and paper or even mentally. In the case of models consisting of analytically intractable equations, determining numerical values requires the ability to run the model on a computer to estimate solutions numerically. However, it is often possible to cheat one's way to a mathematical solution without really understanding it or the underlying model. A good grasp of a model therefore also requires the ability to solve qualitative problems by drawing consequences of the model without performing exact calculations (de Regt and Dieks 2005, 151). This requires a good comprehension of how the model behaviour emerges from the interaction of model components and how it would change if some components were different in various ways. This is why most novices who are almost as competent as their expert teachers at solving quantitative problems fail when it comes to qualitative problems (Newman 2017, 579). In the case of non-mathematical (qualitative) theories, the ability to solve quantitative problems by calculating and interpreting results and the ability to solve qualitative problems by estimating consequences may correspond to the ability to answer questions about the target by explicitly going through the argumentation suggested by the theory, and the ability to estimate answers by drawing characteristic consequences of the theory without complete logical argumentation (cf. de Regt and Dieks 2005, 167, fn. 7). Moreover, a good grasp of a theory may involve the ability to assess the conditions and limits of its application, and to judge what types of results it allows about the target. This is particularly important in the case of idealized models, the grasping of which requires some awareness of how they diverge from reality and of the conditions under which the divergences are negligible so that the models can be applied to the target.

The ability to use a theory comes in degrees, but at least in certain contexts, one can have some understanding even if one is unable to independently apply the theory to its target. I may have some understanding of global warming by means of a simple Energy-Balance Model, without being able to construct explanations and projections of temperature trends in terms of the model. It suffices that I can follow such explanations and projections when given by someone else. This requires more than the ability to repeat them, and more than a shallow semantic understanding of the model and its applications. Some grasp is needed of how the elements of the model are related and how it is that the model explains the explananda and entails the projections. This grasp involves the ability to provide a qualitative summary of the model (e.g., with the help of a diagram) and reformulate its applications in one’s own words. The suggestion then is that a good grasp of a theory requires being able to apply it to its target (problem-solving abilities), and a minimal grasp being able to follow and reformulate such applications when given by someone else (comprehension abilities).Footnote 12 An account along these lines obviously meets the desiderata of graduality, idealization and diversity.

Most authors accept some kind of grasping condition for understanding, and many have tried to spell it out in terms of abilities, but even those who reject grasping-as-abilities as a necessary condition should accept it as an evaluative dimension. Even if some understanding is possible when the agent is unable to apply the theory, and perhaps even unable to reformulate such applications in her own words, the quality of her understanding is, ceteris paribus, a function of how well she grasps the theory. Suppose two climate scientists understand climate change in terms of the same climate model, their commitment to the model is equally justified, and both are able to run the model on a computer and to interpret the results in order to answer a variety of questions about climate change. However, while the second scientist can, due to her experience in experimenting with models, qualitatively assess how the model results would change if certain model components were different, the first scientist is hardly able to anticipate any such counterfactual consequences without running the model on a computer. It seems clear that the understanding of the second scientist is better than that of the first, and that this is due to her better grasp of the model. But as in the case of rightness, the ceteris paribus clause will often not be fulfilled. How well one grasps a theory may vary with the degree of one's justification, and a better grasp may depend on the fact that the theory is more idealized and thus less similar to the target.

Justification

Standard epistemological accounts conceive of epistemic justification as exclusively truth-conducive: justification speaks in favour of a belief being true or not being false. Since these accounts address knowledge and belief, they are not directly applicable to understanding and theories. They cannot even be adapted for an account of the justification condition for objectual understanding since they fail to meet the idealization desideratum. Idealized models and our commitment to them can be justified even though we know that the models are not true,Footnote 13 which means that their justification cannot be exclusively a matter of truth. This is in line with the claim, well known from philosophy of science, that the epistemic evaluation of theories needs to appeal to a plurality of epistemic goals—such as Kuhn's (1977) accuracy, consistency, broad scope, simplicity and fruitfulness—that are not exclusively truth-conducive and admit of trade-offs.Footnote 14 This claim should be accepted if epistemic evaluation is understood as evaluation with respect to a theory's contribution to understanding (Baumberger and Brun 2017, 169–171): increasing the explanatory power or simplicity of a theory can enhance our understanding of its target even if the resulting theory is less empirically accurate. A climate model that explicitly resolves cloud processes that predecessor models included via parameterization enhances our understanding of climate change due to its increased explanatory power (it can, e.g., explain feedbacks involving clouds), even if the model is empirically less accurate because of biases with respect to aerosol concentrations.

Theory-choice approaches meet the idealization desideratum, but they cannot simply be used as an account of the justification condition since they fail to meet the diversity desideratum. They are tailored to scientific theories and do not directly apply to non-empirical and normative theories. But they provide a good starting point for such an account. If accuracy is understood in a more general way and an internalist and a reliabilist requirement are added, they allow us to distinguish five dimensions of justification that are relevant in the context of objectual understanding. The degree to which an agent's commitment to a theory is justified depends on (a) whether the theory is internally consistent and the degree of its external coherence with background theories and assumptions; (b) the degree to which the theory accommodates the available evidence, which includes observational or observation-based data in the case of empirical theories, and intuitions in the case of non-empirical theories (cf. Bealer 1996); (c) the degree to which the theory does justice to further epistemic goals, including generally relevant virtues (e.g., precision, simplicity, fruitfulness, broad scope, explanatory power, and completeness with respect to the subject matter) and virtues that are specifically relevant to certain kinds of theories (e.g., visualizability and causality); (d) the degree to which the agent is able to assess how well (a)–(c) are met; and (e) the reliability of the theory evaluation.

Dimensions (a) and (b) assess how well the agent’s theory answers to the facts. Some epistemic goals in (c) may contribute to such an assessment too; for example, broadening the scope of a theory may be a means of minimizing error since a theory with more (and more diverse) areas of application admits of additional tests (Douglas 2013). However, the epistemic goals in (c) primarily have other functions. They are used to assess the systematicity of a theory (Baumberger and Brun 2017), its intelligibility for the agent, that is, the ease with which the agent can make use of the theory (de Regt and Dieks 2005), and its relevance for specific problems (Hirsch Hadorn and Baumberger 2019). Which goals are relevant, how much weight they should be given and which trade-offs are acceptable depends on the subject matter and on the purpose the theory is intended to serve. If a climate model is developed for the purpose of understanding the basic mechanisms of global climate change, the model should be as simple as possible, but we may not insist on it being useful to effectively calculate exact figures for the key climate characteristics it involves. If, on the other hand, we want to understand regional climate change in a way that makes reliable projections available, we will require that the model be as detailed as necessary and of high enough resolution for effectively computing the relevant climate characteristics with sufficient precision even if this means that the model gets incredibly complicated. Dimension (d) assesses the agent’s metacognitive perspective on her epistemic situation, and (e) how unlikely it is that the evaluation process leads the agent to commit herself to a theory that does not sufficiently answer to the facts or is not relevant for the problem at issue. This last dimension accommodates the intuition that epistemic luck diminishes the degree of understanding (cf. Khalifa 2013b).
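The purpose-relativity of the weighting in (c) can be illustrated with a deliberately simplified toy model. Everything below is hypothetical: the goal names, scores and weights are placeholders, and nothing in the account implies that epistemic goals actually trade off via a linear weighted sum. The point is only that the same pair of theories can be ranked differently under different purposes.

```python
def weighted_score(virtues, weights):
    """Toy aggregate: each epistemic goal gets a score in [0, 1],
    weighted by its purpose-dependent importance (weights sum to 1)."""
    return sum(w * virtues.get(goal, 0.0) for goal, w in weights.items())

# Hypothetical virtue scores for two climate models.
simple_model   = {"simplicity": 0.9, "precision": 0.4, "scope": 0.5}
detailed_model = {"simplicity": 0.3, "precision": 0.9, "scope": 0.8}

# Purpose 1: understanding basic mechanisms -> simplicity dominates.
basic_weights    = {"simplicity": 0.6, "precision": 0.2, "scope": 0.2}
# Purpose 2: reliable regional projections -> precision and scope dominate.
regional_weights = {"simplicity": 0.1, "precision": 0.5, "scope": 0.4}

# The ranking of the two models flips with the purpose.
print(weighted_score(simple_model, basic_weights),
      weighted_score(detailed_model, basic_weights))
print(weighted_score(simple_model, regional_weights),
      weighted_score(detailed_model, regional_weights))
```

Under the first weighting the simple model comes out ahead; under the second, the detailed model does, mirroring the contrast between understanding basic mechanisms and producing reliable regional projections.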

An account of justification that involves the dimensions (a)–(e) meets all three desiderata: it acknowledges that justification comes in degrees, accounts for idealizations since justification is not only related to truth, and applies to all kinds of theories.Footnote 15

Most authors accept some kind of justification condition for understanding, but even those who reject justification as a necessary condition should acknowledge it as an evaluative dimension. Even if an agent can have some understanding by means of a theory that is not justified for her because her evidence is insufficient (Dellsén 2016a) or even defeated (Dellsén 2016b; Wilkenfeld 2017), the quality of the understanding is, ceteris paribus, a function of how well her commitment to the theory is justified. Suppose two climate scientists understand climate change by means of the same climate model, and both grasp the model equally well (i.e., are equally able to use it to address quantitative and qualitative problems about climate change). However, while the first scientist largely has to rely on the testimony of her peers for knowing how well the model meets some of the justification conditions, the second can assess by herself how empirically accurate and robust the model results are, how well the model coheres with her background theories and how well it performs with respect to epistemic goals such as simplicity, explanatory power and completeness. It seems clear that the understanding of the second scientist is better than that of the first, and that this is due to her better performance with respect to the internalist requirement. Or suppose both scientists are equally able to assess how well the model meets the justification conditions (a)–(c) but only the second scientist is able to rule out some incorrect rival model that could easily have met the rightness condition. Again, the understanding of the second scientist seems better than that of the first, and this is because her model evaluation is more reliable.

The result becomes even clearer if we assume that the model coheres better with the second scientist's background theories than with those of the first, for example because the model is incompatible with a background theory of the first scientist, or because the second scientist has a much broader range of background theories that support the model. Incompatibility with a background theory seriously reduces or even destroys one's understanding, and while it may be possible to gain some understanding by means of an "isolated" theory which is consistent with but mostly logically independent of relevant background theories, further support by background theories boosts understanding by integrating it into a wider picture (Baumberger and Brun 2017, 176). If we modify the example so that the two scientists grasp different climate models that answer equally well to the facts, their justification can differ (also) because one of the models performs better with respect to epistemic goals such as simplicity or explanatory power. Other things being equal, simplifying a model or improving its explanatory power will often advance the understanding of its target.

As in the case of rightness and grasping, the ceteris paribus clause will often not be fulfilled. How well an agent's commitment to a theory is justified varies with how well the theory answers to the facts, and often also with how well the agent grasps the theory. Moreover, some dimensions of justification may not constitute evaluative sub-dimensions of understanding. In the above example, the justification of the models could also differ in that one of them is better supported by data, for example because the other represents processes for which only very limited data are available. But it is less clear whether such a difference influences the degree of understanding.

Commitment

Commitment as it figures in the explication of objectual understanding is an attitude towards some content. This attitude should not be identified with belief as taking a proposition to be true. An account of commitment in terms of belief meets the graduality desideratum but fails to meet the idealization and the diversity desiderata. Belief comes in degrees, but we do not believe idealized models we know to be false, and non-propositional representations are not suitable objects of belief since they are not truth-apt. Moreover, it is often impossible to explicate the content of such representations in terms of beliefs attributable to the agent.

Commitment should rather be explicated in terms of a broader notion of epistemic acceptance. An influential distinction between belief and acceptance is due to L.J. Cohen.Footnote 16 According to him, to believe that p is to be disposed normally to feel it true that p (and false that non-p) when one is attending to issues raised by p or items referred to by p. But to accept that p is "to treat it as given that p", that is, "to adopt a policy of […] including [p] among one's premises for deciding what to do or think in a particular context, whether or not one feels it to be true that p" (Cohen 1992, 4). The notion of acceptance needs to be construed in a specific way to provide a sound basis for an account of commitment in the context of objectual understanding, which leads to further differences between belief and acceptance. First, belief is a propositional attitude, but the objects of acceptance include non-propositional contents and all kinds of theories and theory-like representations. Second, belief exclusively aims at truth, while acceptance relates to a plurality of epistemic goals. A climate scientist can accept a highly idealized climate model that deviates from truth if the deviations are negligible or compensated by the performance of the model with respect to epistemic goals such as simplicity and fruitfulness. Whether the deviations are negligible or compensated depends on the purpose the model is intended to serve. Thus, third, in contrast to belief, epistemic acceptance is relative to epistemic purposes, which makes acceptance context-dependent in a way that belief is not. A scientist can accept a climate model for projecting the global mean surface temperature increase by 2100, but reject using the model to project changes in precipitation patterns in the Mediterranean area between 2050 and 2100.

Thus, an agent epistemically accepts a theory to the extent that she takes the theory to be useful for specific epistemic purposes, such as prediction, retrodiction and explanation, ideally based on an evaluation of the theory’s performance with respect to various epistemic goals. Accepting a theory in this sense can be controlled voluntarily, while one may not be able to believe at will. An account of commitment in terms of acceptance meets all three desiderata: commitment comes in degrees,Footnote 17 an agent can commit herself to an idealized model she does not take to be true, and to non-propositional and even non-verbal representations, such as diagrams and graphs.

Commitment is a necessary condition rather than an evaluative dimension. The examples in Sect. 3.2 provide some reason to assume that objectual understanding implies commitment. However, two examples do not establish a general claim, and they concern good understanding while it might be that lower degrees of understanding can do without commitment. More convincing would be examples in which we are inclined to withhold any understanding because the agent does not sufficiently commit herself to the theory in question. But in such examples, the agent typically also lacks justification and/or the rightness condition is not met.

Since it is difficult to provide a case that establishes commitment as a necessary condition, let us see whether there are any arguments against a commitment condition. I am not aware of such an argument, but Wilkenfeld (2017) and Dellsén (2016b) argue that understanding does not require belief. Do their arguments apply to commitment? Certainly not in the case of Dellsén, who aims to show that understanding may be accompanied by mere acceptance rather than by belief. Wilkenfeld (2017, 321–322), however, provides an example that can be interpreted as suggesting that understanding does not even require acceptanceFootnote 18: Richard, an established scientist, develops a detailed model of the explosion of the Challenger space shuttle, based on the idea that O-ring failure caused the explosion. His model is entirely correct, but before he goes public, Richard is subjected to a deliberate cover-up, with the result that his subjective credence in explanations in terms of his model is about 30% and his credence in their negations about 70%. Richard does not believe his model, but nor does he even seem to accept it, since he refrains from using it in explanations of the Challenger explosion. This case provides a counter-example to the necessity of a commitment condition only if it is plausible that Richard understands the Challenger explosion in terms of his model. However, it seems more natural to describe the case as one in which the explosion is understandable in terms of Richard's model, not by Richard but by those who have not been subjected to the same cover-up and are thus disposed to use the model in explanations of the explosion. We should distinguish between "subject matter S is understandable in terms of theory T" and "an epistemic agent A understands S in terms of T" as two different explicanda. While an explication of the first does not require an acceptance or commitment condition, the second does.