I’ve become increasingly fascinated with presuppositional apologetics, a way of defending Biblical Christianity that’s gained quite a bit of popularity in recent years. This form of apologetics was originally crafted by the Reformed theologian Cornelius Van Til and developed by his followers, perhaps most notably Greg Bahnsen and John Frame. In recent years, however, the approach has become quite popular among internet apologists who are not professional philosophers, the most vocal of whom is probably Sye Ten Bruggencate.

Rather than trying to give positive evidence for the existence of the Christian God, the presuppositionalist attempts to show that any worldview other than the Christian one is untenable. We can identify two main challenges to a secular worldview coming out of this approach: an epistemological one and a metaphysical one. Both are aimed at producing a reductio ad absurdum of a secular worldview. The epistemological challenge asks how, on a secular worldview, one can be justified in believing basic truths and basic rules of inference. The metaphysical challenge asks how one can account for or explain things like truth and logic on a secular worldview. In my previous post, I tried to respond to one popular aspect of the metaphysical challenge: how a naturalist can account for the laws of logic. Here, I want to give a thorough response to the epistemological challenge as it’s presented by Sye Ten Bruggencate.

Sye’s strategy is to ask a set of questions aimed at showing that, apart from the Christian God, no one can have knowledge of anything. While I am a (very liberal) theist, I think our epistemological views can stand just fine on secular terms, and so, naturally, I believe the presuppositional strategy is flawed. But my goal here isn’t simply to show why Sye’s strategy is flawed, since the best way to do that would most likely be to attack the positive claims Sye puts forward as differentiating the biblical believer from the secularist. In fact, actually playing Sye’s game and trying to straightforwardly answer all his questions is probably the worst thing to do in an actual debate. Still, I find responding directly to Sye’s attacks on the secular worldview a particularly interesting philosophical exercise. So here we go:

A Hard Line of Interrogation

Sye follows a script pretty straightforwardly, so, while I’ve never actually had a back-and-forth with him, I’ve seen enough of his debates to have a pretty good grasp of his line of attack. Sye’s first question usually looks something like this:

1.) Is it possible that you could be wrong about everything you claim to know?

Unlike many people who have thought about this question, and unlike many atheists who have interacted with Sye, my answer is, “No, it’s not possible that I could be wrong about everything I claim to know.” But before I explain why I think this is the correct answer (and what exactly it means to make this claim), I want to say a bit about the opposite answer. If you answer the question affirmatively (as most people do), Sye quickly responds,

But you see, there’s a problem there. If you could be wrong about something, then you don’t really know it. For example, if I say, “The speed limit outside is 25 miles per hour, but I could be wrong,” I certainly don’t know that the speed limit is 25mph. And so, on your worldview, since you could be wrong about everything, you don’t know anything. You’ve given up knowledge! But you can’t do that, because that’s a knowledge claim, and so you contradict yourself!

It’s worth noting that Sye might just be straightforwardly wrong about this speed limit example. Think about it for a moment: suppose you’ve lived in this neighborhood your entire life. You drive past the speed limit sign every day, and it has always said 25mph. You’ve also gotten pulled over once and received a ticket for going 35mph, ten over the speed limit. This was all made perfectly explicit to you when you went to court. Even further, you know that it’s in a residential neighborhood, and it’s a law in your state that the speed limit in such neighborhoods has to be 25mph.

Still, you’re not looking at the sign right now, and there is an infinitesimally small possibility that, within the past few hours, the state legislature changed the law and the sign was replaced with a 35mph speed limit sign. This would be an incredibly strange thing to happen, especially without any notice and for no apparent reason, but you can’t rule out the possibility with one hundred percent certainty. Does this minute possibility mean that, if someone asks you the speed limit in your neighborhood, you wouldn’t be perfectly justified in saying that it is 25mph? No, of course not; you’re perfectly justified. And, assuming that this infinitesimally small possibility doesn’t obtain, it seems correct to say that you know it is 25mph. If, somehow, against all odds, the possibility did obtain, then you wouldn’t know that the speed limit was 25mph. But that wouldn’t be because your belief isn’t justified; it’d be because your belief is false, and it’s justification that Sye is trying to attack at this point, not truth.

Fallibilism about knowledge, the view that we can know things without being entirely certain of them, is actually the majority view among epistemologists. It might seem a bit counterintuitive when you first stumble upon it, but I think it makes quite a bit of sense when you think about it for some time. The reason the minute possibility of being wrong in the speed limit example doesn’t preclude you from having knowledge is that this possibility is so far out there that it’s not one you need to consider when assessing your claims to see if they are knowledge. So, for all intents and purposes, you are certain that it’s 25mph. Suppose Sye then asks, “But could you be wrong about fallibilism?” One perfectly respectable answer here is to say, “Sure I could. It seems very clear to me that fallibilism is the correct view of knowledge, but I’m not one hundred percent positive. I’m open to an argument for why it isn’t the correct view.” This fits perfectly into the fallibilist view, so there’s no inconsistency here.

But let me step back to give my actual answer. (Sye demands that you not just give plausible answers; you have to give the answers you in fact hold! He’s not doing this just for intellectual enjoyment, after all, but to save your soul!) Consider again the statement that Sye wants to bring into question: “It is possible that you could be wrong about everything you claim to know.” There are two different ways we might express this statement. The first is to say that, for each of our beliefs, it’s possible that that belief is wrong, which we’d symbolize like this: ∀x(◊Wx). The second is to say that it’s possible that all of our beliefs, collectively, are wrong, which we’d symbolize like this: ◊(∀x(Wx)). These are two clearly distinct claims, and the second does not logically follow from the first. For example, from the claim that it’s possible for anyone to be president (for each person, it’s possible that they might be president), it does not follow that it’s possible for everyone to be president. The second expression is much stronger, and thus much easier to reject, than the first.
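To see why the inference fails, here is a small Kripke-style countermodel of my own, using the president example (with Px for “x is president,” parallel to Wx above):

```latex
% Domain: two people, a and b.  Px = ``x is president.''
% Two mutually accessible worlds:
%   w_1 : Pa true,  Pb false   (a is the president)
%   w_2 : Pa false, Pb true    (b is the president)
%
% At w_1, each individual is president at some accessible world,
% so the weak claim holds:
\forall x\, \Diamond Px \quad \text{is true at } w_1
%
% But there is no accessible world at which everyone is president,
% so the strong claim fails:
\Diamond\, \forall x\, Px \quad \text{is false at } w_1
%
% Hence  \forall x\,\Diamond Px  does not entail  \Diamond\,\forall x\, Px.
```

The same scope distinction carries over to Wx: each belief might individually be wrong without it being possible that all of them are wrong together.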

Whether he’s aware of it or not, Sye equivocates between these two expressions, and the trap he tries to set rests largely on this equivocation. The latter expression is the only one that, if true, has a serious skeptical consequence. And since, if you accept the statement, Sye wants to conclude that there is nothing at all that you know, it seems that he has the second expression in mind. However, if you reject the statement, Sye responds, “Okay, what do you know for certain?” But this response is only appropriate if Sye is asking about the first expression. If I say that it’s not possible for everyone to be president, I don’t need to name a particular person who I know certainly can’t be president; I only need to say that the nature of the presidency is such that only one person can be president at a time.

Since Sye needs the stronger skeptical conclusion in order for his argument to work, the thing he must really be asking is not whether there is some particular thing that we can’t be wrong about, but whether we can be wrong about everything. To this question, I respond, “No, it’s not possible that I could be wrong about everything I claim to know.” My main line of reasoning behind this response is drawn from a set of arguments from Donald Davidson and Daniel Dennett. These arguments don’t go to show that there is some particular thing we know for certain, but rather that the very idea of being wrong about everything makes no sense at all.

Consider the following example: I am shipwrecked and find myself on an uncharted island with an unknown native population. I stumble across one such native whose language is entirely foreign to me. To understand what he is saying, and correlatively, what his beliefs are, I must take an interpretive stance towards him and see how he responds to stimuli in his environment. This three-part relation between interpreter, speaker/believer, and shared stimulus is what Davidson calls “triangulation.” In order to make sense of the native’s beliefs at all, I must attribute to the native the beliefs that he ought to have, and, in doing this, I must attribute mostly true beliefs to him.

Davidson’s idea is that the very notion of having a belief only makes sense in this context of triangulation, and within this context, one must have mostly true beliefs. We can relate Davidson’s point to a point that Dennett makes: in order to treat something as having beliefs at all, we must treat it as a rational being. This is because, to treat something as having beliefs, we must predict that it will act in accordance with its beliefs and goals. If all of our predictions fail, then there’s no meaningful sense in which we can say that it has beliefs at all. Having mostly true beliefs and being largely rational is, in fact, a “presupposition” of having any beliefs to call into question in the first place.

Just as, once we understand what the concept of the presidency means, we know that it’s impossible for everyone to be president, so too, once we sufficiently understand the concepts of belief and knowledge, we realize that it’s impossible for all of our beliefs to be false. This also provides a response to another of Sye’s questions, one that he sets up as a trap:

2.) Could there be someone who cannot reason rationally?

If you respond affirmatively, as most people do, then Sye will ask you how you know you’re not one of those people. And then, no matter what you say, he’ll respond, “But you see, you’re using your reasoning there, and that presumes you’re not one of those people. Thus, you can’t prove that you can reason rationally.” But the answer, if Davidson and Dennett are roughly correct, is no: there could be no such person.

Of course, it’s possible for a human being to not have any beliefs at all, like perhaps someone in a catatonic state. It’s also possible for a human being not to have the capacity to engage in the practice of reason-giving, such as a feral child. But, for any person who actually reasons at all, their reasoning must be roughly in conformity with the norms of rationality. Reasoning just is the word we have for the process of forming inferences in accordance with these norms. Now, we can make mistakes in reasoning, and some people make more mistakes than others. But the very notion of treating someone’s judgment as a mistake presupposes that they are in fact bound by these norms.

It is likely that, after giving any substantive answer to one of his questions, Sye will try to eliminate any force the point might have by asking:

3.) But could you be wrong about that?

The answer to this question, if we are answering his questions in this sort of way, goes like this: well, in one sense yes, in another sense no. We have to make a distinction between metaphysical and epistemic possibility (this is quite a useful distinction when dealing with presuppositionalists like Sye). Metaphysically, the argument is such that, if it’s correct, it’s necessarily correct. And since I’m putting it forward as something that is in fact correct, I’m putting it forward as something that is necessarily correct. Davidson’s argument, much like the presuppositionalist’s in fact, is a transcendental argument. It aims to show that a condition for having beliefs at all is that most of our beliefs be true. So, on the picture I’m putting forward, the picture I’m taking to be correct, it is impossible for our beliefs to be massively false.

Still, this general way of thinking about belief and rationality is not entirely uncontroversial, and since there are plenty of smart people who disagree with me, I think it’d be a bit arrogant to say that I know for certain that it is correct. And so, I’d say that it’s epistemically possible that I’m wrong in thinking that Davidson’s argument shows most of my beliefs must be true. This is to say that, while I believe it’s the right way of thinking about belief and I can’t see how I could be wrong, I’m open to an argument for why I might be wrong. Here I’d ask Sye, “Do you have an argument which would suggest that this is in fact the wrong way of thinking about belief?”

Perhaps at this point Sye would bring up a sort of Cartesian skepticism, and ask something like:

4.) Isn’t it possible that you’re just in the Matrix, and thus massively deluded?

This is an interesting challenge to deal with. We might write up this challenge as the following argument:

(1) It’s possible that I’m in the Matrix.
(2) If I’m in the Matrix, all of my beliefs are false.
(3) Therefore, it’s possible that all of my beliefs are false.
(4) If it’s possible that all of my beliefs are false, I do not know anything.
(5) Therefore, I do not know anything.

I think this argument is formally valid, and so in order to reject it I’ll have to reject one or more of the premises. Now, given that I’ve already rejected (3), and (3) follows logically from (1) and (2), I have to reject either (1) or (2). Since I see no prima facie reason to think that being in the Matrix is impossible, I think we ought to reject (2), and I don’t think it is too difficult to do this.
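For completeness, here is one way of my own to regiment the argument in modal terms; note that premise (2) has to be read as holding necessarily for the possibility claim in (3) to follow in a normal modal logic:

```latex
% M = ``I am in the Matrix'';  F = ``all of my beliefs are false'';
% K = ``I know something.''
\begin{align*}
(1)&\ \Diamond M                       && \text{premise}\\
(2)&\ \Box(M \rightarrow F)            && \text{premise, read as necessary}\\
(3)&\ \Diamond F                       && \text{from (1), (2); } \Box(M\to F)\to(\Diamond M\to\Diamond F)\text{ holds in system K}\\
(4)&\ \Diamond F \rightarrow \neg K    && \text{premise}\\
(5)&\ \neg K                           && \text{from (3), (4), modus ponens}
\end{align*}
```

Rejecting (2), as I do below, blocks the inference to (3) regardless of how strong the modal logic is.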

David Chalmers has made the case, quite convincingly I think, that the objects of belief for people in the Matrix are virtual objects. That is, when a person in the Matrix has the belief “I am petting a cat,” they have a belief about a virtual cat. Virtual objects are, in fact, very similar to non-virtual objects but differ in that, ultimately, they are not composed of tiny bits of matter but are produced by computer code. So, when someone in the Matrix says “cat,” it refers to a virtual cat, and, since there are virtual cats in the Matrix, the sentence “My aunt has three cats” may very well be true when uttered by someone in the Matrix.

On this line of reasoning, even if we are in the Matrix, our beliefs about the objects around us are still mostly true, and we still have a basic grip on what sorts of things these objects are, even if we’re wrong about their underlying metaphysical structure. Chalmers, accordingly, calls the Matrix hypothesis a metaphysical, rather than a skeptical, hypothesis. He sees the hypothesis as not categorically different from fundamentalist religious hypotheses: a “creation myth for the information age.”

This sort of explanation is exactly what we’d expect, given that we’ve taken Davidson’s arguments that most of our beliefs must be true to be transcendental ones. If that’s the case, they can’t be contingent on the way the world happens to be, and so they ought to hold fast even in the Matrix. And they do: given this explanation of the Matrix scenario, the arguments from Davidson and Dennett that we must have mostly true beliefs still hold.

Once Sye gets stumped, he often goes back to

5.) But you’re using your reasoning to justify your reasoning! That’s viciously circular!

To this, once again, I respond that reasoning isn’t the sort of thing that can be justified. It’s not as if I have my reasoning and you have yours, and mine might be right and yours might be wrong; reasoning just is the thing rational agents do that makes us rational agents. To reason just is to engage in this activity of responding to reasons, an activity in which we are all necessarily engaged.

Now, of course, I can use reasoning to determine whether I’m being rational in one particular circumstance or another, but it makes no sense to try to use it to determine whether I’m rational generally, since the very possibility of using reasoning presumes that I am a rational agent, a thing with the capacity to reason.

Somehow, Sye has taken this fact to be some sort of great epistemological problem, but it just means that the question is malformed. Since the question of justification only makes any sense at all on the assumption that one has the capacity to reason, insofar as Sye can even ask me for justification for anything, the notion that I might not be able to reason is incoherent. The reason I can’t coherently ask my goldfish to justify its reasoning is that it’s not a thing that reasons in the first place. Asking something this question, and expecting that it might even attempt to answer, presumes that it can reason, and thus that the question need not be asked in the first place.

6.) But could you be wrong about that?

Once again: epistemically, yes, metaphysically no. According to the way I’m thinking of these things right now, I couldn’t be wrong because I think the contrary is incoherent. However, if you’d like to show me how the contrary is not incoherent, I’d certainly be willing to listen.

7.) But you’re using your reasoning to justify your reasoning.

In straightforwardly repeating the challenge, without putting forward a positive reason, Sye is giving me nothing new to work with. So rather than just repeating my claims forever, it seems more productive to look at what is going on here. Sye’s goal here is to lead one into the epistemic regress problem. Traditionally, the problem might be set up like this:

I want to say my belief that P is justified. If this is the case, then there must be some other belief (call it P’) which gives me reason to believe that P. But then I need another belief (call it P’’) that gives me reason to believe P’ and so on ad infinitum.

Traditionally, there are three standard responses to this problem:

Foundationalism: the idea that at some point the regress terminates with basic beliefs for which we do not need to give reasons. The problem was originally posed as the main motivation for holding a foundationalist picture of justification.

Coherentism: the idea that reasons can loop back on themselves, that justification isn’t completely linear, and that a belief is justified just in case it fits into the most coherent justificatory network.

Infinitism: the idea that justifications of this sort can unproblematically go on forever (as far as I know, Peter Klein is basically the only person who holds this view).

While there might be defensible versions of all of these views, I will not defend one here. Rather, I want to point out that there is also another line of thought, one that rejects the formulation of the regress problem as having basically gotten things backwards. The idea is this: the very notion of justification only makes sense in light of our practices of justification, and at a certain point, any justification will just come down to what our practices actually are.

This is a point we can find at some points in Wittgenstein’s work. In the Philosophical Investigations, Wittgenstein writes, “Once I have exhausted the justifications, I have reached bedrock, and my spade is turned. Then I am inclined to say: ‘This is simply what I do.’” There are various ways of reading this thought-provoking passage, but one way is to read Wittgenstein as saying that, after a certain amount of justification, all we ought to do is refer to what the practices that we are mutually engaged in, which make knowledge and justification possible at all, actually are. This is importantly different from any sort of foundationalism, since an appeal to actual practice isn’t an appeal to a foundational belief at all.

We cannot, of course, get outside of our practices in order to justify them. But we can, from inside our practices, explain why they must be mostly truth-conducive, and that’s what I’ve tried to do with the arguments from Dennett and Davidson. Even stronger, we can say that we must think of our practices as mostly correct in order for them to make any sense at all. This is to say that, by the very pragmatic structure of how our reason-giving practices work, our inferential practices, and accordingly our beliefs, must be treated as mostly right. Thus, in response to the regress problem, we can say that there is no reason to think that every belief needs an explicit justification, since that’s just not the way our practices of reason-giving work.

We can understand this point in terms of what Robert Brandom calls the “default and challenge” model of entitlement. In this model, claims and beliefs are treated as justified by default, and only once they are appropriately challenged is there an obligation to bring forth explicit reasons for them. We might think of default and challenge as a sort of “innocent until proven guilty” principle with regard to whether one is entitled to make a particular claim or perform a certain practice. Without treating claims in this way, according to Brandom, there is no way to get the “game of giving and asking for reasons” up and running. If we doubted literally everything everyone said, there’d be no common ground from which we could make sense of any discussion at all.

Let me give an example to make this clear. In order to play soccer, it is not enough that everyone else just be playing soccer; we have to think that everyone else is playing soccer as well. Only then will following the rules of soccer make any sense to us. Likewise, in order to engage in rational discourse, it is not enough that everyone else just be rational; we have to think that everyone else is basically rational. To try to make someone give reasons for their beliefs without presuming that they are basically rational is to verge on incoherence. It’s like trying to enforce a handball penalty without presuming that the person is playing soccer!

For this reason, we can say that the epistemic regress problem that Sye is trying to force us into is not actually a genuine problem at all. This line of thought is certainly contested in philosophy. But I’d love to see Sye try to refute it. My bet is that he’ll just come back, once again, with something like this:

8.) You’re still trying to use your reasoning to justify your reasoning.

At this point, the skeptical inquiry is getting redundant. In fact, it’s getting so redundant that it’s starting to lose its meaning. And now that I’ve articulated the default and challenge model of entitlement, we can explain why this is the case. By asking for justification even after justification has been given, Sye is abandoning the way the norms of justificatory practice actually work. As I’ve said, we might think of epistemic practice as a game, and if you break the rules enough, you’re no longer really playing it. Unless Sye plays by the epistemic rules and accepts justifications which connect the particular belief in question to our default-entitled set of beliefs, the very notion of justification, and thereby of challenge, becomes meaningless.

Since he breaks all the rules of epistemic practice, particularly the default and challenge structure of entitlement, Sye’s speech acts eventually lose all of their normative force. This is to say that, if the skeptic continues endlessly attempting to challenge any justification, his speech acts not only lose their perlocutionary force (in making one offer up a response), but their illocutionary force as well. The speech act is infelicitous as a challenge in the same way that running up to two random people on the street and saying “I now pronounce you man and wife” is infelicitous as a marriage pronouncement.

A genuine challenge, in actual epistemic practice, does have normative force, and an agent whose commitment is genuinely challenged has an obligation to defend that commitment. But Sye’s speech acts stop being treated as genuine challenges, and thus his interlocutor eventually feels no normative obligation to answer them. If you watch some of Sye’s debates, you can literally see this happening.

Given that we’ve just seen how the default and challenge structure of entitlement is justified, we can now say that Sye’s challenges ought to lose their normative force as they do, since there is no coherent challenge to be made if an appeal to our default-entitled set of beliefs will not be accepted as an answer. We can make this explicit to Sye and say that this is why his challenges stop being the speech act he intends them to be, no longer counting as genuine challenges. Thus, in telling Sye to get lost, we are telling him that he is contributing nothing to the conversation but mere words, with no more normative force behind them than the barks of a dog.