First published Fri Jul 20, 2018; substantive revision Fri Aug 3, 2018

An infinite regress is a series of appropriately related elements with a first member but no last member, where each element leads to or generates the next in some sense.[1] An infinite regress argument is an argument that makes appeal to an infinite regress. Usually such arguments take the form of objections to a theory, with the fact that the theory implies an infinite regress being taken to be objectionable.

Many, many infinite regress arguments have been given throughout the history of philosophy, and we will not attempt here to survey even the most important of them. Rather, the aim will be to shed light on the kinds of regress argument that may be encountered, and the different considerations that arise in different cases. As we proceed, however, we will see some particularly famous regress arguments as examples. (Rescher 2010 and Wieland 2014 survey some historical regresses.)[2]

There are two ways in which a theory’s resulting in an infinite regress can form an objection to that theory. The regress might reveal a bad feature of the theory—a feature that is not the regress itself, but that we have independent reason to think is a reason to reject the theory. Or the fact that the theory results in the infinite regress might itself be taken to be a reason to reject the theory. The former cases are the easier ones, since in those cases we do not have to make a judgment as to whether the regress itself is objectionable; we only need to ask about the feature of the theory that the regress reveals. We will look at cases like this first, before turning to cases where the regress itself might be seen as a reason to reject a theory.

1. Regress and Theoretical Vices

Sometimes it is uncontroversial that a theory that generates an infinite regress is objectionable, because the regress reveals that the theory suffers from some kind of theoretical vice that is a reason to reject the theory independently of it yielding an infinite regress. In these cases, an infinite regress argument can show us that we have reason to reject a theory, but it is not because the theory yields a regress per se, but rather because it has this other bad feature, and the regress has revealed that.

1.1 Regress and Contradiction

One such kind of case is when the very same principles of a theory that generate the regress also lead to a contradiction. If this is so then it does not matter what we think about infinite regress in general, we will of course have reason to reject the theory, because it is contradictory. Two such examples are discussed by Daniel Nolan (2001), and we will recount one of them here (also cf. Clark 1988). The theory in question is Plato’s theory of Forms, and the regress objection is Parmenides’s Third Man objection, as reconstructed by Vlastos (1954).

Suppose some things, the \(X\)s, are alike in a certain way: they share some feature, \(F\). The theory of Forms says that if this is so then there is some Form, \(F\)-ness, in which the \(X\)s each participate and in virtue of which they have that shared feature. The theory of Forms also says that Forms are self-predicated: the Form of the good is good, the Form of largeness is large, etc. So the Form of \(F\)-ness is \(F\). In that respect, then, it is like each of the \(X\)s. So not only are the \(X\)s all alike in a certain way, the \(X\)s and \(F\)-ness are all alike in a certain way. And so there must be some Form in which each of the \(X\)s and \(F\)-ness participate, in virtue of which they have this shared feature. However, this Form cannot be \(F\)-ness, because a Form must be distinct from the things that participate in it, so the \(X\)s and \(F\)-ness must participate in a new Form, \(F_1\)-ness. But \(F_1\)-ness will, just like the \(X\)s and \(F\)-ness, be \(F\), given that Forms self-predicate, and so the \(X\)s, \(F\)-ness and \(F_1\)-ness are all alike in a certain way … and so on. We are off on an infinite regress.
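The mechanics of the regress can be made vivid with a toy sketch (in Python; the function and the string labels are ours, not Plato’s or Vlastos’s): at each round a new Form is posited for the current list of alike things, self-predication adds that Form to the list, and the ban on a Form participating in itself forces a fresh Form on the next round.

```python
def third_man_regress(xs, steps):
    """Toy model of the Third Man regress.

    Each round: the items in `alike` share feature F, so we posit a Form
    in which they all participate; self-predication makes that Form F
    too, so it joins the list; and since a Form must be distinct from
    its participants, the next round demands a brand-new Form.
    """
    alike = list(xs)   # things that are alike in being F
    forms = []
    for n in range(steps):
        new_form = f"F{n}-ness" if n else "F-ness"
        assert new_form not in alike   # the Form is distinct from its participants
        forms.append(new_form)
        alike.append(new_form)         # self-predication: the Form is itself F
    return forms

print(third_man_regress(["a", "b", "c"], 4))
# → ['F-ness', 'F1-ness', 'F2-ness', 'F3-ness']
```

However many rounds we run, a new Form appears each time; nothing in the rules allows the process to stop.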

Whatever one thinks about regresses in general, the principles that generate this regress must be denied, for they lead to contradiction. The theory of Forms, as presented here at least, tells us (i) that when some things are a certain way, they participate in a single Form, in virtue of which they are that way; (ii) that Forms self-predicate—Forms themselves are the way that things that participate in them are; and (iii) that the Form is distinct from the things that participate in it. Each of these three claims is essential to generating the regress: without (i) we don’t get the existence of a Form in the first place, without (ii) we don’t get the second list of things with a shared feature, so would stop simply with the \(X\)s participating in some Form, and without (iii) we could conclude that just as the \(X\)s participate in \(F\)-ness, so do the \(X\)s and \(F\)-ness itself—there would be no need for a new Form. So (i)–(iii) generate a regress, and they are each needed to do so. But (i)–(iii) are inconsistent, and no regress argument is needed to show that. (i) and (ii) together entail that Forms participate in themselves. (ii) tells us that \(F\)-ness is \(F\), and (i) tells us that if this is so it is in virtue of \(F\)-ness participating in \(F\)-ness, since that is how in general things get to be \(F\). So \(F\)-ness participates in itself, and so by (iii) \(F\)-ness must be distinct from itself, since Forms are distinct from that which participates in them. But nothing is distinct from itself: contradiction.

We don’t need an argument against infinite regresses to show that this version of the theory of Forms is no good: that it is contradictory is the best reason we could have to reject it. Now, that the theory is contradictory and that it leads to an intuitively worrying infinite regress are not unrelated. Beyond the mere ontological profligacy involved in being committed to infinitely many Forms any time we notice that some things are some way (we will come back to ontological profligacy and regress in section 4), what seems intuitively problematic about the regress of Forms is that we shouldn’t get a new Form each time. The \(X\)s are \(F\), so we have the form of \(F\)-ness in which they participate. We then get a new Form in which the \(X\)s and \(F\)-ness all participate. But why do we have a new Form? We don’t have a new shared feature, we have the very same shared feature we started with: just as the \(X\)s are all \(F\), so is \(F\)-ness itself. We already have the Form that makes something \(F\): \(F\)-ness. The regress is troubling because we shouldn’t be invoking a new Form, but we have to because of the ban on Forms participating in themselves. But this diagnosis of why the regress is troubling is really just another way of stating the contradiction at the heart of the theory: the Form \(F\)-ness is supposed to be in general the thing in virtue of which things get to be \(F\), but it also cannot be because it itself must be \(F\) and it cannot participate in itself. So the regress and the contradiction are intimately related. As Nolan (2001, 528) puts it: “infinite regresses of this sort and the statement of formal contradiction are different ways of bringing out [the same] unacceptable feature.”

This is an easy case, because we don’t have to adjudicate on whether the fact that the theory leads to an infinite regress is itself objectionable. The principles that lead to regress also lead to contradiction, and we know that a theory’s being contradictory is a good reason to reject it, whether it leads to regress or not.

1.2 Local Theoretical Vices

More generally, if the features of a theory that result in an infinite regress also result in some theoretical vice that we know to be objectionable independently of whether or not there is a regress, then we have a reason to reject that theory that doesn’t depend on the theory leading to regress. Sometimes, the theoretical vice in question will be a global one: a feature that is a reason to reject any theory that has it. Yielding a contradiction is, relatively uncontroversially[3], such a vice. Other times, the feature in question might be a local vice: a feature that might be an unobjectionable feature of certain theories, but a reason to reject a particular theory \(T\) because of the particular theoretical ambitions of \(T\) or as a result of other things we know about \(T\)’s subject matter.

For example, a theory might result in an infinite regress of entities and, as a result, entail that there are infinitely many things. This in itself is, arguably, not objectionable. But it might be a local vice to a theory if we have independent reason to think that we are dealing with a finite domain. (See Nolan 2001, 531–532.) (Some philosophers object to the very idea of reality containing infinities. Aristotle, e.g., famously allowed that there could be potential infinite series, but not completed infinite series; see Mendel 2017. But we will ignore such general anti-infinitism in this entry, for it is infinity itself that such theorists take to be objectionable, not infinite regresses per se.)

Peano’s axioms for arithmetic, e.g., yield an infinite regress. We are told that zero is a natural number, that every natural number has a natural number as a successor, that zero is not the successor of any natural number, and that if \(x\) and \(y\) are natural numbers with the same successor, then \(x = y\). This yields an infinite regress. Zero has a successor. It cannot be zero, since zero is not any natural number’s successor, so it must be a new natural number: one. One must have a successor. It cannot be zero, as before, nor can it be one itself, since then zero and one would have the same successor and hence be identical, and we have already said they must be distinct. So there must be a new natural number that is the successor of one: two. Two must have a successor: three. And so on … And this infinite regress entails that there are infinitely many things of a certain kind: natural numbers. But few have found this worrying. After all, there is no independent reason to think that the domain of natural numbers is finite—quite the opposite.
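The way the axioms force a new number at every step can be sketched in code (a toy illustration only; the representation of numbers as strings is ours):

```python
def peano_regress(steps):
    """Toy sketch: each natural number's successor must be a new number.

    Successors are recorded in a dict; the reasoning in the text rules
    out the successor being zero (zero is no number's successor) or any
    number already serving as a successor (successor is injective).
    """
    numbers = ["zero"]
    successor = {}          # number -> its successor
    for n in range(1, steps + 1):
        prev = numbers[-1]
        candidate = f"number_{n}"   # so it must be a fresh number
        # not zero, and not any number already used as a successor
        assert candidate != "zero" and candidate not in successor.values()
        successor[prev] = candidate
        numbers.append(candidate)
    return numbers

print(peano_regress(3))
# → ['zero', 'number_1', 'number_2', 'number_3']
```

The loop never runs out of work: each new number demands a successor of its own, which the axioms force to be new again.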

By contrast, consider the following two principles: (i) Every event is preceded by another event that is its cause; (ii) The relation \(x\) precedes \(y\) is irreflexive (nothing precedes itself), asymmetric (if \(a\) precedes \(b\) then \(b\) does not precede \(a\)) and transitive (if \(a\) precedes \(b\) and \(b\) precedes \(c\) then \(a\) precedes \(c\)).

This yields an infinite regress, at least from the assumption that there is at least one event. If there is an event, \(E_1\), then it is preceded by its cause. That cause cannot be \(E_1\), as nothing precedes itself and causes precede what they cause. So the cause of \(E_1\) must be a new event, \(E_2\). This event is preceded by its cause. This cannot be \(E_2\) for the same reasons as before, and it cannot be \(E_1\) because then each of \(E_1\) and \(E_2\) would precede the other in violation of asymmetry. So the cause of \(E_2\) must be a new event, \(E_3\). \(E_3\) is preceded by its cause. It cannot be \(E_3\) or \(E_2\) for reasons similar to before. And it cannot be \(E_1\), for then \(E_1\) would precede \(E_3\), but since \(E_3\) precedes \(E_2\) which precedes \(E_1\), transitivity entails that \(E_3\) precedes \(E_1\), and so \(E_1\) cannot precede \(E_3\) due to asymmetry. So the cause of \(E_3\) must be a new event, \(E_4\). And so on …

This regress of events is very similar to the regress of natural numbers. In each case we start from the claim that there is a thing of a certain kind (a number or an event), and we have a principle that tells us that for each thing of that kind, there is another thing of that very kind that bears a certain relation to the previous one (it is its successor, or it is its preceding cause). We then have supplementary principles that rule out the other thing of that kind being any of the things on our list so far, thus forcing us to introduce a new thing of that kind, thus inviting the application of the principles to this new thing, and so on ad infinitum. But while the regress and resulting infinity of natural numbers is arguably unobjectionable, the regress of events seems problematic, because we have good empirical reasons to deny that there are infinitely many events, each preceded by another. For either that infinite sequence of events takes place in a finite amount of time or an infinite one. We have good empirical reason to rule out the latter option, since we have good empirical reason to think that there has only been a finite amount of past time: that time started a finite time ago with the Big Bang. And we have good empirical reason to rule out the former option, since the only way of fitting an infinite series of events, each preceded by another, into a finite stretch of time is by having the time between them become arbitrarily small. So for example, \(A_2\) might be a minute before \(A_1\), and \(A_3\) half a minute before \(A_2\), and \(A_4\) a quarter of a minute before \(A_3\), etc. If the time between events \(A_n\) and \(A_{n + 1}\) is always half the time between events \(A_{n - 1}\) and \(A_n\), we can fit infinitely many events into a two minute time-period. 
But this requires the time between events to become arbitrarily small, and there is some reason to think that time is quantized, such that there is a minimum length of time during which a change can occur, thus providing a lower limit on the amount of time that can separate two temporally distant events. (See Nolan 2008b for relevant discussion and further references.)

So while the numbers regress and the events regress are structurally analogous, we might find the principles that yield the regress objectionable in one case but not the other, because while each regress entails that there are infinitely many things of kind \(K\), whether that is a vice may depend on what kind \(K\) is, and whether we have independent reason to think that the domain of \(K\)s is a finite one. Yielding infinitely many things of kind \(K\) might be a local vice when the kind in question is events separated in time, but not when the kind in question is natural numbers structured by the successor relation.

1.3 Regress and Failure of Analysis

In the previous section we saw two theories generating similar regresses, but where one is found unobjectionable whereas the other is found objectionable due to the different things we think we know, independently of encountering these regress arguments, about the subject matter of the theories. We could also have cases where a single theory yields a regress that is objectionable by the lights of one theorist and not another, as a result of their differing theoretical commitments leading one but not the other to think that a feature revealed by the regress is a vice.

Consider Bradley’s regress. (Bradley 1893 [1968], 21–29. Note, however, that Bradley is very hard to interpret, and there is much debate concerning how to reconstruct his argument. See the entry on Bradley’s regress for discussion.) We start with the demand to give an account of predication: what is it for \(A\) to be \(F\)? One answer is that it is for the particular \(A\) to be bound to the property of \(F\)-ness. But this answer yields a new predication: \(A\) is bound to \(F\)-ness. If the original monadic predication of \(A\) demands an account, this relational predication of \(A\) and \(F\)-ness also demands an account. Since before we posited a property corresponding to the monadic predicate and said that the property was bound to the object that was the subject of predication, we should follow suit here and posit a relation corresponding to the dyadic predicate and say that it is bound to the two things that are the subjects of that predication. So we should posit a relation—let’s call it instantiation—that binds together \(A\) and \(F\)-ness. But this yields another new predication: Instantiation binds \(A\) to \(F\)-ness. Now this triadic predication needs to be accounted for, and so we need a relation corresponding to the triadic predicate—let’s call it the instantiation\(_2\) relation—that binds together the instantiation relation, \(A\), and \(F\)-ness. But this yields yet another predication, this time a tetradic one: Instantiation\(_2\) binds Instantiation to \(A\) and \(F\)-ness… And so on. Each predicational fact tells us that some particulars, properties, and relations are bound together, which forces us to posit a relation corresponding to that binding, which generates the next predicational fact, and so on ad infinitum.
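The bookkeeping of the regress can be sketched mechanically (a toy illustration in Python; the naming scheme Instantiation, Instantiation\(_2\), … follows the text, but the function itself is ours):

```python
def bradley_regress(steps):
    """Toy sketch of Bradley's regress: accounting for each predication
    posits a binding relation, which yields a new predication with one
    more subject (one higher arity) than the last."""
    subjects = ["A", "F-ness"]
    predications = ["A is F", "A is bound to F-ness"]
    for n in range(1, steps + 1):
        relation = "Instantiation" if n == 1 else f"Instantiation_{n}"
        predications.append(f"{relation} binds " + ", ".join(subjects))
        # the new relation is itself a relatum of the next predication
        subjects = [relation] + subjects
    return predications

for p in bradley_regress(3):
    print(p)
```

Each successive line has one more subject than the last; the list of predications grows without end, which is the regress.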

Is this regress objectionable? Arguably it depends on what we want from an account of predication. If what you want is an analysis of predication, then arguably this regress is objectionable. If you start off not understanding predication—if you find it just utterly mysterious what ‘\(A\) is \(F\)’ is meant to mean, given that this ‘is’ is not identity—then this regress means that this account is not going to help you, for each answer simply invokes another predication, which is exactly what you don’t understand. If you don’t understand ‘\(A\) is \(F\)’, you’re not going to understand ‘\(A\) is bound to \(F\)-ness’, or ‘Instantiation is bound to \(A\) and \(F\)-ness’, or etc., since the ‘is’ in those claims is also not the ‘is’ of identity but the ‘is’ of predication, which you find a mystery. However, if you understand predication perfectly fine and want simply an ontological account of it—an account of what in the world makes true the true predications—then arguably the regress is not objectionable, because while there are infinitely many true predications, these can all be made true by the one underlying state of the world: the state of affairs of the particular \(A\) being bound to the property \(F\)-ness. Just as this state of affairs makes it true that \(A\) is \(F\), so does it make it true that \(A\) is bound to \(F\)-ness, and that this binding holds between \(A\) and \(F\)-ness, and etc. We would have one ontological underpinning for the infinitely many true predications. As Nolan (2008a, 182) puts it, we have “Ontology for each truth, and no infinite regress of [states of affairs], but only of descriptions of [states of affairs]: and it should never be thought that an infinite regress of descriptions is something to worry about per se”. (Cf. Armstrong 1974 and 1997 (157–8).) So whether or not we will find this regress objectionable depends on what we demand of an account of predication. 
We can all agree that the regress shows that the account does not yield an analysis of predication. Is this a theoretical vice? It depends on whether or not predication requires an analysis, and that depends on our theoretical goals.

Another regress that arguably fits this pattern is McTaggart’s (1908) regress argument against the reality of the A-series of time. The A-series of time is the sequence of times one of which is present, others past, and others future. (As opposed to the B-series, which merely orders times as earlier or later than one another, without singling out any as being past, present, or future.) But, says McTaggart, the times that are past were present; the time that is present was future and will be past; the times that are future will be present and will be past. McTaggart concludes that we end up attributing each A-property (is present, is past, and is future) to every time (and therefore to every event in time). But this is absurd, because the A-properties are incompatible: to have one is to have neither of the others. So we end up in contradiction: each time has only one such property, yet also has all of them. McTaggart concludes that the A-series cannot be real.

An obvious response to McTaggart’s argument is this: it’s not that some future time is present and past as well, it’s merely that it will be present and will be (later still) past. That is: no time and event has more than one of the A-properties, it is merely the case that they have one A-property and did and will have another. But McTaggart thinks this response does not solve the problem, because it leads to regress. He says (ibid., 469):

If we avoid the incompatibility of the three characteristics by asserting that M is present, has been future, and will be past, we are constructing a second A series, within which the first falls, in the same way in which events fall within the first… the second A series will suffer from the same difficulty as the first, which can only be removed by placing it inside a third A series. The same principle will place the third inside a fourth, and so on without end. You can never get rid of the contradiction, for, by the act of removing it from what is to be explained, you produce it over again in the explanation. And so the explanation is invalid.

McTaggart’s argument is difficult and philosophers disagree on how to interpret it, but here is one interpretation. (See Dummett 1960 and Mellor 1998 (72–74) for two among many presentations of the argument along similar lines. See Cameron 2015 (Ch. 2) and Skow 2015 (Ch. 6) for recent discussion.)

The way things were and the way things will be seem to be part of reality in a way that, for example, Bilbo’s finding the One Ring is not: that is merely part of a fiction. History is not a fiction, it’s part of our world, so historical truths and future truths should, seemingly, be part of the overall account of how the world is. But that is puzzling, given that things change and, hence, the way things were is incompatible with the way things are now. How can they both contribute to the way reality is if they are incompatible? McTaggart’s argument focuses on a particular instance of this concerning the A-properties. Caesar’s crossing the Rubicon is past. But it was present, and so its presentness is a feature of our world’s history. But our world’s history, as we just said, is part of the complete account of how our world is, and so the complete account of how our world is includes both Caesar’s crossing the Rubicon being past and it being present, and yet those are incompatible features. The defender of the A-series replies by insisting that in giving the complete account of how reality is we have to take seriously the fact that reality changes and that it is, therefore, different ways successively, and there is no inconsistency in things being one way and then another, incompatible, way. So it’s not that reality is such that Caesar’s crossing the Rubicon is both past and present, it’s that reality is such that Caesar’s crossing the Rubicon was future, and was present, and is now past.

McTaggart responds by restating this response in terms of second-order A-properties. To say that Caesar’s crossing the Rubicon was future, and was present, and is now past is to say that it has the properties of being past future (i.e. having been future), past present (i.e. having been present) and present past (i.e. now being past). And similar reasoning to the above suggests that every time and event has each of the nine possible second-order A-properties; and while it’s not the case that any two of them are incompatible, certain pairs of them are. Nothing can be both past past and future future, for example: if \(E\) has been a mere past event, it can’t be true that \(E\) will be something that is yet to happen. And yet the complete account of reality seems to include both Caesar’s crossing the Rubicon being past past and its being future future. After all, in 2000 BCE Caesar’s crossing the Rubicon was future future, since 1000 BCE was the future and Caesar’s crossing the Rubicon would still be the future then. And Caesar’s crossing the Rubicon is past past just now, since 1000 CE is past, and Caesar’s crossing the Rubicon was past then. But both 2000 BCE and the present are part of the overall history of the world, so the goings on at each time are part of what the world as a whole is like, and so Caesar’s crossing the Rubicon is both past past and future future, and yet those are incompatible. And again, the defender of the A-series will respond that it’s not that Caesar’s crossing the Rubicon is both past past and future future, it’s that it is now past past and was future future. And McTaggart will respond that this is to invoke third-order A-properties—being present past past, being past future future, etc. And the same problem will arise, and invite the same response, which will lead to the same problem concerning fourth-order A-properties, which will invite the same response again … and so on, ad infinitum.
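For concreteness, the nine second-order A-properties can be enumerated (a toy illustration in Python; the labels and the example of an incompatible pair are drawn from the text):

```python
from itertools import product

A_PROPERTIES = ["past", "present", "future"]

# 'past future' reads 'was future'; 'present past' reads 'is now past'; etc.
second_order = [f"{outer} {inner}"
                for outer, inner in product(A_PROPERTIES, A_PROPERTIES)]

print(second_order)      # nine combinations in all
assert len(second_order) == 9

# Not every pair is incompatible, but some are: nothing can be both
# 'past past' (a mere past event) and 'future future' (yet to happen).
incompatible_pair = ("past past", "future future")
assert set(incompatible_pair) <= set(second_order)
```

Iterating the construction gives twenty-seven third-order properties, eighty-one fourth-order ones, and so on, matching the regress of orders in McTaggart’s reply.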

Whether McTaggart’s regress is vicious has proven a subject of much debate. Some philosophers see the regress as demonstrating that any attempt to describe the world in A-theoretic terms is ultimately inconsistent, and see the A-theorist as merely invoking another inconsistent account of reality every time they attempt to explain away this inconsistency. Mellor, for example, says (1998, 75) “[McTaggart’s] critics react by denying the viciousness of the regress. At every stage, they say, we can remove the apparent contradiction by distinguishing the times at which the events have incompatible [A-properties]. They ignore the fact that the way they distinguish these times … only generates more contradictions.” Others see McTaggart as simply making a confused challenge at each stage—as mistakenly concluding from the fact that things were one way and are now another incompatible way that they are both ways at once—and continuing to make the same mistake in response to each of the A-theorist’s correct explanations that in fact the incompatible properties are only ever had one after another, never at the same time. Skow (2015, 87), e.g., says “At each stage [McTaggart] accuses objective becoming [i.e. things changing their A-properties] of being inconsistent, and [the A-theorist] shows that allegation to be false. And even an infinite sequence of false allegations does not add up to a good argument.”

This is arguably another case where whether or not the regress is vicious by a philosopher’s lights will depend on their background theoretical commitments. If one starts out happy with the notion of succession—i.e. of one thing being the case and then ceasing to be the case as something else comes to be the case—then one may be inclined to see McTaggart’s regress as entirely benign. We start out with a set of incompatible properties that are never had by anything simultaneously but are held by things successively; in stating that those properties are had successively we make salient a new set of incompatible properties, but these are also never had by anything simultaneously, only successively; this makes salient yet another set of incompatible properties, and so on ad infinitum. There is never, at any stage, a contradiction, if the notion of succession is indeed in good standing, for we are never forced to say that a thing has incompatible properties, only that a thing successively has properties that cannot be had simultaneously. If, by contrast, one is suspicious of the very notion of succession—if one sees in it simply an attempt to paper over what is ultimately a contradiction inherent in the commitment to reality being each of two incompatible ways, the way it supposedly is now and the way it supposedly was or will be—then one will see in McTaggart’s regress an infinite sequence of contradictory accounts of how reality is, with each attempt at explaining away the contradiction simply resulting in another such contradictory account. The regress, then, looks vicious or benign depending on whether one is content to grant the legitimacy of the notion of temporal succession.

2. Foundations, Coherence, and Regress

In section 1 we looked at cases where an infinite regress is taken to reveal some feature of a theory, independent of its leading to regress, that might (possibly depending on your other theoretical commitments) be taken to be a reason to reject it. But sometimes the regress itself is taken to be an objectionable feature of the theory that yields it.

Suppose that there is an \(X\) that is \(F\), and that to account for why \(X\) is \(F\) we need to appeal to another \(X\) that is also \(F\). Now there is the question as to why this \(X\) is \(F\), and so we need to appeal to another \(X\) which is also \(F\) …. If this proceeds ad infinitum, with a new \(X\) invoked at each new stage in the process, there is a concern that we end up without having accounted for the \(F\)-ness of any of the \(X\)s. If this infinite regress argument is successful then our choices are either:

Foundationalism: To halt the regress by taking there to be a foundation—a set of \(X\)s whose \(F\)-ness is taken to be basic, and by which we can account for the \(F\)-ness of all other \(X\)s.

or:

Coherentism: To resist an infinite regress by allowing a circular or holistic explanation of the \(F\)-ness of at least some \(X\)s.

Infinite regress arguments used to motivate Foundationalism or Coherentism appear in many different areas of philosophy. Here are some highlights:

In metaphysics:

Metaphysicians have wanted to account for the very existence, or nature, of some things by appealing to things on which they ontologically depend: for example, a complex object exists and is the way it is because its parts exist and are the way they are; a set exists because its members exist; etc. (See Fine 1995 and Koslicki 2013 for discussion.) But of course the things the dependent beings depend on must themselves exist as well. Some have been suspicious of the idea that this can go on ad infinitum, with every thing being ontologically dependent on some new thing(s), and thus have argued for Metaphysical Foundationalism: the view that there is a collection of absolutely fundamental[4] entities upon which all else ultimately ontologically depends. Aquinas, e.g., holds that events are ontologically dependent on their causes, and that an infinite regress of causes and effects would be an infinite series of things each of which is ontologically dependent on the next, and this is impossible.[5] Thus he concludes that there must be a first cause of all else that is itself uncaused—namely, God. We shall see more examples of Metaphysical Foundationalists below. See also the supplementary document on Metaphysical Foundationalism and the Well-Foundedness of Ontological Dependence.

(Metaphysical Coherentism—the view that ontological dependence could be a holistic phenomenon—has few defenders, but see Barnes 2018, Nolan 2018, and Priest 2014 (Chs. 11 & 12) for some discussion.)

In epistemology:

Epistemologists want to account for the justification of our beliefs. We do not want to believe at random, we want our beliefs to be justified—that is, we want there to be a reason to believe the propositions we believe. But those reasons will be further propositions, and if our initial belief is to be justified, so surely must the reasons for that belief be, and so we must appeal to yet more propositions, and so on. Many—going back to Sextus Empiricus (Outlines of Pyrrhonism PH I, 164–9)—have thought that this cannot proceed ad infinitum, and that the only serious options are Epistemic Foundationalism—the view that there is a class of propositions whose justification does not come via some other justified propositions, and that can provide a reason for everything else we believe—or Epistemic Coherentism—the view that a collection of propositions can collectively be justified in virtue of the web of epistemic relations they stand in to one another. Thus Sosa (1980, 3) says “epistemology must choose between the solid security of the ancient foundationalist pyramid and the risky adventure of the new coherentist raft.” (See the entries on foundationalist theories of epistemic justification and coherentist theories of epistemic justification for surveys of Epistemic Foundationalism and Epistemic Coherentism, respectively).

In ethics:

As well as asking about the source of justification for our moral beliefs (see e.g., Sinnott-Armstrong 1996), moral philosophers have been concerned with distinctively moral regresses, which arise not when we attempt to account for the justification of a moral claim, but rather when we attempt to account for the moral status of something by appealing to something else of the same moral status, and so on. Aristotle (Nicomachean Ethics, 1094a) thought that some things were good because we desire them for the sake of something else that is good. But if everything that is good is good simply because it aims at something else that is good, this would lead to a regress and “all desire would be futile and vain”. And so Aristotle argues that there must be a Highest Good—something that is desired for its own sake—so that other things can be good in virtue of aiming towards it. Aristotle is a Moral Foundationalist: there is something whose goodness does not get explained by reference to anything else, by means of which the goodness of other things is accounted for.

(Explicit statements of anything other than Foundationalism in the moral case are hard to come by.[6] But see Roberts 2017 for relevant discussion.)

However, it is also always possible simply to embrace the regress and accept:

Infinitism: The \(F\)-ness of each \(X\) is accounted for by facts involving a new \(X\) that is \(F\), and this proceeds ad infinitum.

Whether in metaphysics, epistemology, or ethics, Foundationalism has often been seen as the default, orthodox, view, with Coherentism being seen as the radical alternative. Infinitism is often simply dismissed, or not even considered as a live option. Foundationalism and Coherentism are (often[7]) motivated by the thought that if the \(F\)-ness of each \(X\) is accounted for by appeal to a new \(X\) that is \(F\) then each of those infinitely many explanations fails. The thought is that each explanation of the \(F\)-ness of an \(X\) would be dependent on the success of the next, a promissory note that is never paid if that process does not end. Let’s examine this anti-Infinitist thought.

To focus our inquiry, consider the case of a complex object and its proper parts. Some metaphysicians have considered the possibility that there are gunky objects: objects such that every part of them itself has proper parts. If \(A\) is gunky then it is composed of some things, the \(X\)s, such that there is more than one of the \(X\)s, and \(A\) is not amongst the \(X\)s. Pick one of those \(X\)s, \(X_1\). \(X_1\) is composed of some things, the \(Y\)s, such that there is more than one of the \(Y\)s, and neither \(A\) nor \(X_1\) is amongst the \(Y\)s.[8] Pick one of those \(Y\)s, \(Y_1\). \(Y_1\) is composed of some things, the \(Z\)s, such that there is more than one of the \(Z\)s, and neither \(A\), nor \(X_1\), nor \(Y_1\) is amongst the \(Z\)s. And so on. This process will never end: each item in the series is a collection of entities (a collection containing just one thing in the first case, and more than one thing in every subsequent case), and given the transitivity of parthood each thing in each collection will be a part of \(A\) and hence—since \(A\) is ex hypothesi gunky—will itself be composed of a collection of proper parts, and so we can pick any member from any collection to generate the next item on the list. (Of course, a thing need not—and if it is gunky, will not—have a unique decomposition: there can be two collections of things, the \(X\)s and the \(Y\)s, each of which compose \(A\), where none of the \(X\)s is amongst the \(Y\)s and vice versa. But all we need is that there are some such collections, from which we can pick arbitrarily to get the next collection in the sequence.) Now take the infinite series that consists of \(A\) as its first element, \(X_1\) as its second element, \(Y_1\) as its third, etc. That is, we form this infinite sequence by taking, from each collection in the original series, the item that was used to generate the collection that came next in that series.
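The decomposition procedure just described can be sketched in code. What follows is a toy model only: closed intervals of rationals stand in for gunky parts (an assumption for illustration—real gunk need not be interval-like), and at each step we decompose the current part into its two halves and arbitrarily pick one, mirroring the choice of \(X_1\) from the \(X\)s:

```python
from fractions import Fraction

def decompose(part):
    """Split an interval-part into two proper parts (its halves).
    In this toy model every part has proper parts, mimicking gunk."""
    lo, hi = part
    mid = (lo + hi) / 2
    return [(lo, mid), (mid, hi)]

def descending_chain(whole, depth):
    """Build the sequence A, X1, Y1, ...: at each step, decompose the
    current element and pick one member of the decomposition."""
    chain = [whole]
    for _ in range(depth):
        parts = decompose(chain[-1])
        chain.append(parts[0])  # an arbitrary pick, as in the text
    return chain

chain = descending_chain((Fraction(0), Fraction(1)), 5)
# Each element is a proper part of (strictly contained in) the previous:
for bigger, smaller in zip(chain, chain[1:]):
    assert bigger[0] <= smaller[0] and smaller[1] <= bigger[1] and bigger != smaller
```

The `depth` parameter only truncates the display: nothing in the model ever halts the decomposition, which is the point of the regress.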

Assuming that complex objects are ontologically dependent on their proper parts, we now have an infinite regress of entities, each of which is ontologically dependent on the next. Such an infinite regress has been thought by some metaphysicians to be objectionable, leading them to reject the possibility of gunk. Leibniz, for example, argues that there cannot be only “beings by aggregation” (i.e., composite objects), because this would lead to an infinite regress, with each being by aggregation being made up of further beings by aggregation, and so on ad infinitum.

Leibniz’s idea seems to be that if each thing depends on some other, there could not be anything at all in the first place. The thought is that ontologically dependent entities inherit their existence, or being, from that on which they depend; so if this chain of dependence does not terminate, the whole process couldn’t get off the ground, and there would be nothing at all. Leibniz says (1686–87, 85):

Where there are only beings by aggregation [composite objects], there are no real beings. For every being by aggregation presupposes beings endowed with real unity [simples], because every being derives its reality only from the reality of those beings of which it is composed, so that it will not have any reality at all if each being of which it is composed is itself a being by aggregation, a being for which we must still seek further grounds for its reality, grounds which can never be found in this way, if we must always continue to seek for them.

By contrast, if the dependence runs in the other direction—if we start off with a fundamental entity whose being can then ground the being of each subsequent entity—there is arguably no problem if that process continues ad infinitum: thus the infinite sequence where you start with a thing, form its singleton set, then form that set’s singleton, and so on ad infinitum, has not been thought to be objectionable by those who reject gunky objects, for it is the set that is ontologically dependent on its members, not vice versa. (See Fine 1994 for discussion of the direction of ontological dependence. See Cameron 2008 and Maurin 2007 for discussion of the difference between infinite regresses where ontological dependence runs upwards and ones where it runs downwards. Cf. Clark 1988, and also Johansson 2009 and the discussion in Maurin 2013.)

A contemporary sympathizer with Leibniz’s thought is Jonathan Schaffer (2010). Unlike Leibniz, Schaffer grants the possibility of gunky objects, but thinks that this possibility is precisely a reason to deny that complex objects are (always) ontologically dependent on their parts.[9] Instead, Schaffer takes the possibility of there being no simple things as a reason to hold that the dependence flows in the other direction: that parts are dependent on the wholes of which they are parts, and that every thing is thereby ontologically dependent on the biggest thing that there is—the cosmos—that has everything else as a proper part.[10] Since classical mereology guarantees that there is a biggest thing—the thing that has all else as proper parts—but it does not guarantee that there are any smallest things—things that have no proper parts—it guarantees that if parts are dependent on the wholes of which they are parts then there will be a first, ontologically fundamental, element, whereas if wholes are dependent on their parts then there is the possibility of an infinite regress in which each thing is dependent on some further thing(s), with nothing being fundamental: a possibility in which, Schaffer (2010, 62) says (agreeing with Leibniz), “Being would be infinitely deferred, never achieved”. Leibniz and Schaffer advocate Metaphysical Foundationalism: the view that there have to be some things that are absolutely fundamental—dependent on nothing—on which all else ultimately depends.

But is it true that if the \(F\)-ness of each \(X\) is dependent on the next \(X\) in the sequence being \(F\), and if this goes on without end, that we cannot explain why any \(X\) is \(F\)? Schaffer claims that in the case of an infinite regress of ontological dependence, with each entity depending on the next in the chain, and no independent entities, being would be “infinitely deferred, never achieved”. Why think this? The idea seems to be that a dependent entity only has the being it has on condition of something else having being. If \(A\) is ontologically dependent on \(B\) then the existence of \(A\) is a promissory note, only paid if \(B\) itself exists. But if \(B\) is ontologically dependent on \(C\) then the existence of \(B\) is a promissory note, only paid if \(C\) exists … and so on, so that if this process never stops, the promissory note is never paid, in which case, allegedly, the existence of all these things could never get off the ground in the first place.

An analogy may help. Suppose Anne has no sugar, and needs some. She can borrow a bag of sugar from Breanna. Now Anne has a bag of sugar. Where did it come from? Easy—it came from Breanna, who is now a bag of sugar down. But suppose Breanna borrowed a bag of sugar from Craig in order to then pass it on to Anne. Where did Anne’s bag of sugar come from then? Ultimately, from Craig, who ends up a bag of sugar down. But suppose Craig borrowed a bag of sugar from Devi … and so on, ad infinitum. Then where did the bag of sugar come from? At the end of the infinite sequence, Anne is one bag of sugar up, and nobody is a bag of sugar down, for everyone after Anne simply borrowed a bag of sugar, and then passed it on to the next person in the chain. There’s an extra bag of sugar in the system that seems to have appeared as if by magic. If there is a finite sequence of borrowers, however long, then the last person in the chain ends up a bag of sugar down, so that’s where the bag of sugar that ends up with Anne ultimately comes from. But if the chain never ends, Anne ends up with a bag of sugar that doesn’t seem to have come from anybody, as nobody has lost any bags of sugar—they all just borrowed it to pass it on. The infinite regress seems to create sugar from nowhere: pleasant, perhaps, but metaphysically suspicious all the same.
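The bookkeeping behind the analogy can be made explicit. Below is a minimal sketch (the chain length and the numbering of the borrowers are invented for illustration) in which person i borrows one bag from person i+1 and hands one to person i-1, with person 0 playing Anne’s role:

```python
def net_balances(chain_length):
    """Person i borrows one bag from person i+1 and hands one to
    person i-1; person 0 (Anne's role) keeps what she receives."""
    balance = [0] * (chain_length + 1)
    for i in range(chain_length, 0, -1):
        balance[i] -= 1      # person i hands a bag down the chain
        balance[i - 1] += 1  # person i-1 receives it
    return balance

# Finite chain: Anne ends up +1 and the final lender -1, so the bag's
# ultimate origin is accounted for.
print(net_balances(4))   # [1, 0, 0, 0, -1]

# Lengthening the chain only relocates the -1; in the infinite limit
# every fixed borrower nets zero while Anne is still up one bag --
# the "sugar from nowhere" worry.
longer = net_balances(10_000)
assert longer[0] == 1 and longer[-1] == -1 and not any(longer[1:-1])
```

For any finite chain the balances sum to zero and the deficit sits with the last lender; the infinite case has no last lender for the deficit to sit with, which is exactly the oddity the text describes.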

As with sugar, likewise with being—or justification, or goodness, or whatever feature we aim to account for. If \(A\) depends on \(B\) and \(B\) is fundamental, where did \(A\)’s being come from? From \(B\). If \(A\) depends on \(B\) and \(B\) depends on \(C\), where did \(A\)’s being come from? Ultimately, from \(C\). And for any finite chain, no matter how long, we can say where the being of any dependent entity ultimately comes from: from the fundamental thing(s) at the bottom of the chain. But if the chain is infinite, the being of any thing is, arguably, as mysterious as Anne’s new bag of sugar. The explanation of where it came from is always postponed, and its presence in the system as a whole unexplained. So, at least, goes the regress objection.

3. Regress and Global and Local Explanation

Distinguish between a local explanation of the \(F\)-ness of some particular \(X\) and a global explanation of why there are any things that are \(F\) at all. Some philosophers have argued that when we have an infinite regress, with the \(F\)-ness of each \(X\) being accounted for by appeal to another \(X\) that is \(F\), then we do indeed lack a global explanation of why there are things that are \(F\), but we nevertheless have a local explanation for each of the infinitely many \(X\)s as to why it is \(F\). (This seems to be the position of Hume’s Cleanthes in Part IX of Hume 1779.)

Ricki Bliss, e.g., speaking of the infinite regress of ontologically dependent entities, says (2013, 408): “In a reality that admitted of no foundations … although everything has its reality accounted for in terms of that upon which it depends, we have failed to explain how the whole lot of them—everything—has any reality at all.” Compare this to the infinite borrowers case: for any given instance of someone having received a bag of sugar, we can explain where that bag of sugar came from: it came from the next person in the chain. But what we can’t explain is a global fact about the series as a whole: why is this bag of sugar in the series in the first place?

But Bliss argues that it is not necessarily a mark against infinitely descending chains of ontological dependence that they leave this global fact—why does anything have being in the first place?—unexplained. She says (ibid.):

[T]he regress is not designed to answer this question. All the regress can tell us is how each individual member has the property under consideration, namely, in dependence upon something else. The appearance of an infinite regress should not lead us to conclude that nothing within the regress has the property under consideration—nor has its possession of that property unexplained—but rather that not everything about the possession of the property that needs to be explained has been.

Contra Leibniz and Schaffer, then, Bliss rejects the idea that in an infinitely descending chain of ontological dependence, being would never be achieved. Having a property dependent on some condition is nevertheless to have that property, so there is no pressure, she argues, to conclude that nothing in the infinite series would exist. Rather, they all exist, and the existence of each is perfectly well accounted for: it exists because the next thing in the sequence does. Everything has its being merely on some condition, but the condition is always met. Why there is an infinite chain of existing entities at all is not accounted for, but Bliss says it is a mistake to think that the regress was ever supposed to account for that.

Bliss concludes that whether or not an ontological infinite regress is vicious or benign depends on what we set out to give an account of. If all we want is an account of why each thing exists, then it is benign; but if we want an account of why there are things at all, it is vicious.[11] She says (ibid., 414):

If \(x\) is grounded in \(y\) and \(y\) in \(z\), [and so on ad infinitum] and all that we are seeking for is an explanation of how or why \(x\) exists (as the thing that it is), an explanation of how or why \(y\) exists, and so on ad infinitum, the regress is benign. Why? Because where \(z\) explains \(y\) and \(y\) explains \(x\) our explanans and explanandum are not of the same form. In order to explain facts about my existence, we can make recourse to the existence of—or facts of the existence of—my parents, my vital organs, etc. In order to explain these facts, we make recourse to further facts, and so on. At each stage, we have a satisfactory explanation of that for which we are seeking one . . . The regress is not benign, however, if what we are seeking an explanation for is how anything exists, or has being, at all. For even at infinity, what the regress shows is that we have not explained where existence comes from. Even at infinity, we are still invoking things that exist in order to explain how anything exists at all… We encounter, at each level, the explanatory failure characteristic of a vicious infinite regress: existents whose existence we seek an explanation for are explained in terms of existents. The existence of \(y\) may explain the existence of \(x\) but the existence of \(x\), \(y\), \(z\), and so on ad infinitum cannot help us explain how anything exists at all—where being comes from. Whether or not a regress of grounds is vicious, therefore, will depend upon the question for which we are seeking an answer.

Similar remarks are made by Graham Priest (2014, 186), who asks us to imagine an infinite sequence of objects, \(a_0\), \(a_{-1}\), \(a_{-2}\), … etc. Each of these can be in one of two states: active or passive. For each \(n \le 0\), if \(a_{n-1}\) is passive it does nothing, but if it is active it instantaneously makes \(a_n\) active as well; and the only way for \(a_n\) to become active is that \(a_{n-1}\) makes it so. Given this set-up there are only two possible options: each object in the chain is active, or each is passive. But both are logically possible options: the fact that each object is only active if the next object makes it active (and this sequence continues without end) gives us no reason, says Priest, to reject the possibility of them being all active. The active status of each object would be accounted for, by the active status of the previous one. First we explain the active status of \(a_0\): it is explained by the active status of \(a_{-1}\). Then we have a completely different thing to explain: the active status of \(a_{-1}\): it is explained by the active status of \(a_{-2}\) … and so on. All such facts get explained and since, Priest argues, it is a different fact being accounted for each time, this regress is not vicious. But echoing Bliss, Priest admits that something is not explained. He says (ibid., 187) “[W]hat an infinite regress will not explain is why the whole regress is as it is … the state of each \(a_n\) is determined by the state of \(a_{n-1}\), but there is nothing in this story to explain why the whole system is in the all active state, as opposed to the all passive state. If there is such an explanation, it must come from elsewhere.”
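Priest’s set-up can be mimicked on a truncated chain. A minimal sketch (the window size and the boolean encoding of active/passive are assumptions for illustration, since we can only inspect finitely many links):

```python
def consistent(states):
    """Check Priest's rule on a finite window of the chain ..., a_-1, a_0:
    each object is active iff its predecessor a_{n-1} is active."""
    return all(curr == prev for prev, curr in zip(states, states[1:]))

ACTIVE, PASSIVE = True, False

all_active  = [ACTIVE] * 100    # truncated stand-in for the infinite chain
all_passive = [PASSIVE] * 100
mixed       = [PASSIVE] * 50 + [ACTIVE] * 50

# Both uniform assignments satisfy the local rule at every link...
assert consistent(all_active) and consistent(all_passive)
# ...while a chain that switches somewhere violates it: the rule fixes
# each state locally but does not select between the two global options.
assert not consistent(mixed)
```

That both uniform assignments pass the check is the formal analogue of Priest’s point: each local fact is explained by its predecessor, yet nothing in the system determines which of the two global states obtains.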

So if \(a\) can only exist if \(b\) exists for \(a\) to be ontologically dependent on, and \(b\) can only exist if \(c\) exists for \(b\) to be ontologically dependent on … and so on ad infinitum, either the whole infinite sequence of things exists, or none of them do. And if the whole infinite sequence exists, there is no explanation (from within the sequence at least) as to why anything exists at all. However, there is an explanation for each particular thing as to why it exists: it exists because the next thing in the sequence does. If Bliss and Priest are correct, then whether an ontological infinite regress is vicious or benign depends on our explanatory ambitions: are we attempting to explain an (infinite) collection of particular existence facts—that this thing exists, that that thing exists, etc.—or are we attempting to explain the global fact that things exist? Exactly the same infinitely regressing ontology can be vicious or benign depending on one’s theoretical lights. Notice the similarity to the discussion in section 1.3 of Bradley’s and McTaggart’s regresses: once again, whether or not there is something inherently objectionable to an infinite regress may depend on our theoretical ambitions.

4. Regress and Theoretical Virtues

Bliss and Priest, as we have seen, argue that while an ontological infinite regress might leave some questions unanswered, there is nothing inherently objectionable, incoherent, or inconsistent in an infinite regress of things each of which is ontologically dependent on the next. However, even if such ontological infinite regresses are possible, some metaphysicians argue that we may have good reason to think that the actual world is not like that.

Nolan (2001) and Cameron (2008) argue that considerations of theoretical parsimony can lead us to reject ontological infinite regresses even if such regresses are not metaphysically impossible. A theory that yields an ontological infinite regress of course thereby yields an infinite ontology. Even if we are not in the situation (discussed above in section 1.2) where we have independent knowledge that we are dealing with a finite domain, this could still be a mark against the theory, simply on the grounds that it is an unparsimonious ontology. So for example, we might object to the claim that material objects are gunky—with each part of them being divisible into further proper parts—not because there’s any inconsistency in the hypothesis, or because it leads to an infinite chain of ontological dependence and thereby leaves the existence of all things unexplained, but simply because it means that whenever there is an object somewhere, there are in fact infinitely many objects there. A gunky world is an ontologically extravagant world, and so we have the same kind of reason to reject the hypothesis that things are gunky as we have to reject needlessly complex hypotheses about how things behave (e.g., the Ptolemaic theory of planetary motion with its epicycle upon epicycle): other things being equal, we should prefer simpler theories and more economical ontologies over complex theories and more expansive ontologies.

To illustrate, Nolan considers the famous example of someone suggesting that the Earth is held up by resting on the back of a giant turtle, which is in turn held up by resting on the back of another turtle, which is in turn … and so on, turtles all the way down. There does not appear to be an inconsistency hiding in this regress, nor does it thwart an attempt at analysis. For all we know, space is infinite, so there’s no problem fitting all these turtles in, and we’re not in a case where we know independently that we’re dealing with a finite domain. This is not a regress that involves ontological dependence, so there are no concerns about the existence of things going ungrounded. Of course, we have pretty good empirical evidence that the infinite turtles hypothesis is false, since we have gone into space and can’t see any world turtles. But even putting that aside—let’s suppose we’re considering the hypothesis prior to our going into space—there is something intuitively weird about the turtles hypothesis. Nolan suggests it is the ontological extravagance of the view. He says (2011, 534–5):

[T]he two turtle theory [the world rests on a turtle, which rests on another turtle, which is unsupported] is stranger and more absurd than the one turtle theory, the three turtle theory worse than the two, a twenty-eight turtle theory worse even than the three, an exactly seven million turtle theory loonier still, and so on. The infinite turtle theory, while perhaps more motivated than the finite turtle theories, seems to be in some respects the limit of an increasing sequence of absurdity.

Not everyone will agree that each additional turtle theory is more objectionable than the last, since the extra things being postulated are of the same kind (world turtles) as the previous theory already countenanced. David Lewis (1973, 87), e.g., held that while we should prefer theories that are more qualitatively parsimonious—they postulate fewer kinds of thing than their rivals—there is no reason at all to prefer theories that are merely more quantitatively parsimonious—they postulate fewer things of the same kind as their rivals. This is controversial, however, and Nolan (1997) argues that quantitative parsimony is a genuine reason to prefer a theory. If Nolan is correct, the four turtle theory is indeed worse than the three turtle theory, the ten turtle theory worse still, and the infinite turtle theory worse (other things being equal) than any finite turtle theory.

Ross Cameron applies considerations of theoretical parsimony to the case of infinite chains of ontological dependence. While allowing that there is no impossibility in an infinite regress of things, each ontologically dependent on the next, Cameron argues that we can still have reason to reject such a theory on parsimony considerations. Cameron (2008, 12) says that what needs to be explained is the existence of each dependent entity; and while he allows that in an infinitely descending chain of ontologically dependent entities, there is an explanation for why each dependent entity exists, there is no single explanation for why all the dependent entities exist. Whereas if there is a collection of fundamental entities on which all the dependent entities ultimately depend, these fundamental entities provide a single unified explanation for why every dependent entity exists. Either way, everything that needs to be explained gets explained, but Cameron says we have reason to prefer the unified explanation over the infinitely many disparate explanations, since it is in general a theoretical virtue to provide a unified explanation. For example, a physical theory that postulates one unified force to explain all phenomena would, other things being equal, be preferable to one that postulates four fundamental forces—gravity, electromagnetism, strong nuclear, and weak nuclear—even if the two theories explain exactly the same phenomena. (Orilia (2009) argues, contra Cameron, that there is no unified explanation provided by Metaphysical Foundationalism over theories with infinite ontological descent.)

If Nolan and Cameron are right, this at most gives us a pro tanto reason to reject a theory that leads to an ontological infinite regress. Just as we can justifiably accept the more complex hypothesis over the simpler one because the more complex hypothesis is, e.g., more powerful, so we might justifiably accept an ontological infinite regress because there is some virtue afforded by the theory that makes the cost worthwhile. Relatedly, Cameron (2008, 13–14) argues that this would not give us any reason to think that ontological infinite regresses are metaphysically impossible; at most it gives us a reason to think they are not actual, which limits the positions that such regress arguments can be used to argue for.

5. Transmissive and Non-Transmissive Explanations

In section 3 we considered the suggestion that if the explanation of the \(F\)-ness of each \(X\) appeals to another \(X\) that is \(F\), and so on ad infinitum, then while the \(F\)-ness of each individual \(X\) can be accounted for, something is left unexplained: why there are things that are \(F\) at all. But arguably, not every infinite regress leaves even this global fact unexplained.

To say merely that the \(F\)-ness of each \(X\) is explained by appeal to another \(X\) that is \(F\) leaves open a crucial question: does the \(F\)-ness of the new \(X\) play a role in the explanation of the \(F\)-ness of the initial \(X\)? Following Bob Hale (2002), let’s distinguish transmissive explanations of \(F\)-ness, in which the \(F\)-ness of \(X_2\) plays a crucial role in explaining the fact that \(X_1\) is \(F\), the \(F\)-ness of \(X_3\) plays a crucial role in explaining the fact that \(X_2\) is \(F\), and so on, from non-transmissive explanations of \(F\)-ness, in which the \(F\)-ness of \(X_1\) is explained by facts concerning \(X_2\), which is in fact \(F\), but where the \(F\)-ness of \(X_2\) is not crucial to explaining the \(F\)-ness of \(X_1\), and so on. Arguably an infinite regress of transmissive explanations of \(F\)-ness leaves unexplained why anything is \(F\) in the first place, but an infinite regress of non-transmissive explanations need not.

Simon Blackburn (1986) argued that any realist attempt to explain why there is necessity in the world would fail because it faced a dilemma. Suppose we say that \(A\) is necessary because \(B\). \(B\) at least has to be true, but is \(B\) itself necessary? Blackburn thought we could not answer no, because to explain the necessity of a necessary truth \(A\) by appeal to a contingent truth would undermine the necessity of \(A\). But to answer yes is to invite regress, for now we need to explain the necessity of \(B\) by appeal to a necessary truth \(C\), and so on ad infinitum. Now, whether the contingency horn is indeed vicious is debatable (see Hale 2002 and Cameron 2010 for discussion), but focus on the regress involved in the necessity horn. Hale (2002) argues that the realist about necessity can resist getting into a vicious regress by distinguishing between transmissive and non-transmissive explanations of the necessity of any given proposition. Grant that the necessity of \(A\) can only be explained by a proposition, \(B\), that is itself necessary: that explanation will be transmissive if the necessity of \(B\) plays a role in explaining the necessity of \(A\), otherwise it will be non-transmissive. Hale (ibid., 308–309) offers as an example of a transmissive explanation of the necessity of \(A\) a proof of \(A\) with the necessary truth \(B\) as its sole premise. If \(A\) follows logically from a necessary truth, then \(A\) itself must be necessary. And in this case, the necessity of \(B\) plays a crucial role in the explanation, for if I do not know whether \(B\) is necessary, knowing that it entails \(A\) does not tell me whether \(A\) is necessary, for a contingent proposition can follow from another contingent proposition. In that case, \(A\)’s necessity seems to be hostage to \(B\)’s necessity, and so the ultimate explanation of \(A\)’s necessity will seemingly involve whatever explains \(B\)’s necessity, which is where Blackburn senses regress. 
But Hale thinks there can also be non-transmissive explanations of necessity. He suggests: ‘Necessarily the conjunction of two propositions \(A\) and \(B\) is true only if \(A\) is true and \(B\) is true’ because ‘conjunction just is that binary function of propositions which is true iff both its arguments are true’ (ibid., 312). While he grants that the explanans in this case is necessary, Hale thinks that its necessity is no part of the explanation. All that is needed to explain why it is necessary that \(A \amp B\) is true only if \(A\) and \(B\) are both true is the fact that conjunction just is that function which is true iff both its arguments are true. It may be necessary that conjunction is that function, but that is not part of the explanation of the original necessity, and thus the necessity of the explanans is not appealed to in a way that makes the success of the explanation dependent now on explaining this further necessity. Perhaps there is a new question to be asked concerning why this further claim is necessary, but our original explanation stands independently of whether we can successfully answer this new question.

Another example. Epistemic Infinitists embrace the infinite regress of reasons and argue that it is not vicious (see, e.g., Aikin 2005, 2011, Klein 1998, 2003, Peijnenburg 2007 and Atkinson & Peijnenburg 2017). One response for the Infinitist to make to the regress argument is the response from section 3: to hold that each belief is justified in virtue of the next one being justified, but to claim that this is not a problem: that while the regress means we do not have an explanation for why anything is justified in the first place, this is not what the sequence was meant to explain—all that we need is an explanation for each belief concerning why it is justified, and this we have. (See e.g., Aikin 2005, 197 and Klein 2003, 727–729.)

However, the Infinitist may also simply deny that anything remains unexplained in such a regress. Peter Klein (1998, 2003) holds not only that there can be an infinite regress of justifications, but indeed that it is necessary for \(S\) to be justified in believing that \(p\) that \(S\) have available to them an infinite number of propositions such that the first of these, \(r_1\), is a reason for \(p\), the second, \(r_2\), is a reason for \(r_1\), the third, \(r_3\), is a reason for \(r_2\), and so on ad infinitum. The regress objection seems to presuppose that \(r_1\) is a reason for \(p\) in virtue of (at least in part) \(r_2\) being a reason for \(r_1\), etc. (See e.g., Gillett 2003, 713.) In which case, so the objection goes, the justificatory chain could not get off the ground, and nothing would be justified. But Klein (2003, 720–723) denies that \(r_1\) is a reason for \(p\) in virtue of \(r_2\) being a reason for \(r_1\). There must be a reason, \(r_2\), for \(r_1\), and there must be a distinct reason for \(r_2\), \(r_3\), and so on. Each of these infinitely many propositions is justified, but each one’s being justified, says Klein, does not hold in virtue of any other being justified. The Infinitist demands that there is an infinite justificatory sequence, but that in itself is silent as to what justification consists in. The Infinitist can simply hold that there is some feature \(F\) such that for \(x\) to be a reason for \(y\), \(\langle x,y \rangle\) has to have \(F\), and that \(\langle r_1,p \rangle\) has \(F\), and \(\langle r_2,r_1 \rangle\) has \(F\), and so on. That feature could be the first element increasing the objective probability of the second, or something else entirely (ibid., 722). The point is, it needn’t involve the second element itself being justified, and thus the Infinitist need not accept that the justification of \(p\) from \(r_1\) is inherited from the justification of \(r_1\) from \(r_2\), and so on.
In Hale’s terminology, the explanation of the justification of a proposition by appeal to another justified proposition is a non-transmissive one: while one must appeal in the explanation to a proposition that is in fact justified, the fact that it is justified plays no role in that explanation. Thus there is, arguably, no reason to think that the regress of epistemic justification is vicious even if you demand an explanation for why any of our beliefs are justified in the first place.
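Klein’s position can be put schematically. (The biconditional and the probability threshold \(t\) below are illustrative renderings, not Klein’s own formulation.) Writing \(r_0\) for \(p\), the demand is that for every \(i \ge 0\) there is an \(r_{i+1}\) such that \(\langle r_{i+1}, r_i \rangle\) has the reason-making feature \(F\), where \(F\) might, for instance, be cashed out probabilistically:

\[
F(\langle x, y \rangle) \text{ iff } \Pr(y \mid x) > t.
\]

Crucially, the right-hand side makes no mention of \(x\) itself being justified, so whether \(r_{i+1}\) is a reason for \(r_i\) is settled independently of the rest of the chain.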

It’s worth reflecting on the difference between the epistemic regress and the ontological regress of dependent entities that makes Klein’s response here possible. Suppose that \(A\) is ontologically dependent on \(B\) and \(B\) ontologically dependent on \(C\). It is very plausible that in this case, \(C\)’s existence and/or nature is part of the explanation of \(A\)’s existence and/or nature. In saying that \(A\) is ontologically dependent on \(B\) we are saying that \(A\) exists, or is the way it is, at least partly in virtue of \(B\)’s existence and/or nature. So \(B\) has to exist, or be the way it is, in order for \(A\) to exist, or be the way it is. In explaining \(A\)’s existence/nature, we are appealing to \(B\)’s existence/nature, in which case it seems that anything that we need to explain \(B\)’s existence and/or nature—in this case \(C\)’s existence/nature—must ultimately be part of the explanation of \(A\)’s existence and/or nature. That’s why when we have a chain of ontological dependence, the existence and/or nature of the first entity seems to be ultimately dependent on not just the existence/nature of the second entity in the chain, but on that of every subsequent entity in the chain: explanations of being appear to be transmissive. Which is why, if the chain is endless, we seem to lack an explanation as to why anything exists at all.

But while it is overwhelmingly plausible that \(B\) can only serve as the ontological ground of \(A\) because \(B\) itself exists, or is the way it is, it is not forced on us to hold that \(r_2\) can only be a reason for \(r_1\) because \(r_2\) is itself justified, and this is why Klein’s response to the epistemic regress is available. It need be no part of the explanation for why \(r_2\) is a reason for \(r_1\) that \(r_2\) itself be justified. As Klein says, the entirety of the explanation for why \(r_2\) is a reason for \(r_1\) might be simply that the objective probability of \(r_1\) given \(r_2\) is sufficiently high. That fact does not involve \(r_2\) being justified. Klein thinks that \(r_2\) must be justified if \(r_1\) is, but that need not be any part of why \(r_2\) is a reason for \(r_1\), and thus there is no pressure to hold that the justification of \(r_1\) by \(r_2\) is dependent on, or inherited from, the justification of \(r_2\) by \(r_3\), and so on. So while there is indeed an infinite sequence of propositions, each of which is a reason for the previous one on the list, at no stage is the fact that one proposition is a reason for another hostage to the fact that any other proposition is a reason for another. And so arguably, nothing remains unexplained: there can be a good explanation not only for why each particular proposition is justified (\(r_5\) is justified by \(r_6\), etc.), but also for why there are any justified propositions in the first place (there are propositions that raise the objective probability of others, e.g.).[12]

6. Coherence, Circularity, and Holism

The Coherentist resists regress by allowing a circular or holistic explanation of the \(F\)-ness of at least some \(X\)s. This could be to simply allow straightforwardly circular explanations, such as that \(X_1\) is \(F\) in virtue of \(X_2\) being \(F\) and \(X_2\) is \(F\) in virtue of \(X_1\) being \(F\). But that is not the only option for the Coherentist. Consider again the regress argument concerning justification of belief: our belief \(p_1\) is justified by appeal to \(p_2\), which is in turn justified by appeal to \(p_3\), and so on. There is an assumption behind this regress: that what is to be explained are the facts concerning the individual beliefs—why is this one justified, then why is that one justified, etc. An epistemic Coherentist such as BonJour (1985) rejects this assumption. It is not, primarily, individual beliefs that are justified, it is systems of belief. A particular belief is justified only in a derivative sense, by belonging to a justified system. A system of belief is justified because of the properties of the system as a whole, namely that the beliefs in it form a coherent system. Thus, justification is a holistic phenomenon: a collection of beliefs is justified because of what they, collectively, are like, not because of what each individual member of the system is like. This holistic explanation of where justification comes from is very different from a circular explanation: a circular explanation tells us that one individual’s being \(F\) explains another’s being \(F\) and also vice versa, but a holistic explanation tells us to abandon the idea of explaining an individual’s being \(F\) by appeal to another individual being \(F\), and instead hold that the explanation for some things being \(F\) can be the facts concerning what that collection of things as a whole is like.

Sometimes a circular explanation might be warranted because we are not trying to explain in virtue of what the \(X\)s are \(F\), but rather we are merely attempting to illuminate the \(X\)s being \(F\) by showing how the \(X\)s relate. Consider the regress argument against the thesis that time passes given by J.J.C. Smart (1949, 484):

If time is a flowing river we must think of events taking time to float down this stream, and if we say ‘time has flowed faster today than yesterday’ we are saying that the stream flowed a greater distance today than it did in the same time yesterday. That is, we are postulating a second timescale with respect to which the flow of events along the first time dimension is measured … Furthermore, just as we thought of the first time dimension as a stream, so will we want to think of the second time dimension as a stream also; now the speed of flow of the second stream is a rate of change with respect to a third time dimension, and so we can go on indefinitely postulating fresh streams without being any better satisfied.

Here we start with our ordinary temporal dimension—what we may have supposed to be the only temporal dimension—the one philosophers have in mind when they say that time passes. Smart supposes that if this dimension of time indeed passes then there must be a rate at which it passes.[13] Whether that rate can slow down or speed up, or whether time always flows at the same rate, is not important; there must be some rate at which it passes, thinks Smart. Now, just as we would measure the speed of a car, say, by measuring how much distance it covers in a given amount of time, so, thinks Smart, we would have to measure the speed at which time itself passes by measuring how much time passes in a given amount of time of some second temporal dimension. While the car covers forty miles of road in the space of an hour, an hour of time passes in the space of, say, two hours of this second temporal dimension. But then, how fast does this second dimension of time pass? We need a third temporal dimension to measure how long it takes for an hour of the second temporal dimension to pass. But how fast does the third temporal dimension pass? And so on, ad infinitum. Smart concludes that time does not pass.

A defender of the view that time passes could attempt to cut off Smart’s regress at the second stage, claiming that there is a principled difference between the first temporal dimension and the second that results in the first temporal dimension passing at some rate, but not the second. Smart himself concludes that time does not pass, so he can hardly object to the postulation of temporal dimensions that do not pass. However, Smart’s regress can be resisted without abandoning the principle that all temporal dimensions pass at some rate.

Ned Markosian (1993) points out that to give a rate is to compare two different types of change. When we say that the car travels at forty mph, we are comparing one type of change—the car started off in one place and ended up forty miles distant—with another—it was one time and it is now an hour later. In that case, we can always get the rate of the second type of change by comparing back to the first. As Markosian says (ibid., 842): “If … I tell you that Montana’s passing totals increased at the rate of 21 passes per game, then I have also told you that the games progressed at the rate of one game per 21 completions by Montana.” So suppose we say that the first temporal dimension passes at a rate of one hour for every two hours of the second temporal dimension; there is no need to invoke a third temporal dimension in order to state the rate at which the second passes, for we have already given that rate: an hour of the second temporal dimension passes for every half hour of the first. Indeed, as Markosian points out, we need not even invoke a second temporal dimension, for any time we give a rate of any ordinary process with respect to time—such as that the Earth goes around the sun once every year—we have thereby stated the rate at which time passes with respect to those ordinary processes: time passes at the rate of one year for every orbit of the Earth around the sun.
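Markosian’s point rests on an elementary fact about rates: a rate and its reciprocal carry the same information. (The symbols \(A\), \(B\), and \(r\) here are merely schematic labels for the two changes being compared.) If change \(A\) proceeds at \(r\) units per unit of change \(B\), then

\[
\frac{\Delta A}{\Delta B} = r \quad\text{if and only if}\quad \frac{\Delta B}{\Delta A} = \frac{1}{r},
\]

so stating the first rate already fixes the second. Thus if the first temporal dimension passes at half an hour per hour of the second, the second thereby passes at two hours per hour of the first, and no third dimension is required to state its rate.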

Markosian’s maneuver is possible because in giving the rate of one process of change by appeal to a second process of change we are not saying what makes it the case that the first change occurs at the rate it does. What makes it the case—what are the ontological grounds of the fact—that the car travels at 40 mph? Not that an hour of time passes while the car moves a distance of 40 miles, for that is merely a re-description of the fact in question: a way of describing the rate. In the case of time itself, defenders of the view that time passes may plausibly claim that what makes it the case that time passes is simply the nature of time: that it is in its nature to pass at the rate it does. It doesn’t pass at the rate it does because of some relation it stands in, either to ordinary processes of change or to a second temporal dimension. Such relations allow us to informatively state the rate of change, they do not provide the grounds for it.

If we were providing the metaphysical grounds of rates of change, Smart might be right that this would lead to a vicious regress, since arguably grounding is asymmetric (see e.g., Rosen 2010, 115). If the car travels at the speed it does in virtue of something to do with the passage of time then, arguably, time cannot pass at the rate it does in virtue of anything to do with the speed of the car, and so we need to appeal to the passage of a second temporal dimension to provide the ontological grounds of the rate of passage of the first. If the passage of this second temporal dimension grounds facts about the passage of the first temporal dimension, it itself cannot pass in virtue of facts concerning the passage of the first, and so we need to appeal to a third temporal dimension, and so on. And even if we hold that it’s possible for there to be infinitely descending chains of grounds, it seems absurd in this case to suppose that there are in fact infinitely many temporal dimensions. But that is not what is going on. When we explain the speed of the car by appeal to the passage of time, we’re not providing the ontological grounds of its speed, we’re simply showing a connection between two things: the movement of the car and the passage of time. Likewise for the rate of time’s passage itself: we are not seeking to provide the ontological grounds of time’s passage in stating its rate, for the ontological grounds are plausibly just the nature of time itself. Rather, when we compare the two changes, we are simply trying to illuminate one or both of those changes by pointing to the way they relate. That is the only sense in which one explains the rate of one change when comparing it to another kind of change: the mutual connection tells us something enlightening about each. It is not to give a metaphysical explanation in the sense of providing the metaphysical grounds of either rate of change. 
Compare: if I tell you that the value of a US dollar is 0.7 British pounds (and therefore that the value of a British pound is 1.43 US dollars), this is not to say that the US dollar has the worth that it has in virtue of standing in this relation to the British pound. The US dollar relates thus to the British pound because of what they are each worth; they are not worth what they are in virtue of standing in that relationship. What makes it the case that the US dollar is worth what it is is some incredibly complex set of facts concerning economics, monetary policy, etc. To give an exchange rate is not to give the grounds of the value of the currencies, but merely to say something substantive about each value by stating their connections. Likewise with rates of change, which is why Markosian is able to resist Smart’s regress in this manner. Coherentist explanations might be controversial when it comes to providing ontological grounds, but they are less so when it comes to simply casting light on the nature of some phenomena by showing how they connect. So whether a regress argument even gets going will depend on the explanatory ambitions of the view being targeted.