Transcript

Rob’s intro [0:00:00]

Robert Wiblin: Hi listeners, this is the 80,000 Hours Podcast, where each week we have an unusually in-depth conversation about one of the world’s most pressing problems and how you can use your career to solve it. I’m Rob Wiblin, Director of Research at 80,000 Hours.

This interview is my single favourite episode of the show so far. And it’s not just me — Keiran and Peter McIntyre both said it was their favourite interview we’ve done to date.

David Chalmers is a philosopher I’ve admired since I was an undergrad — he’s consistently entertaining and insightful, and willing to have a go at thinking through a huge range of fundamental questions. His work on theory of mind has been hugely influential, and rightly so.

Arden and I recorded one three hour session with Dave and then decided to come back for seconds, because the conversation was going so well, and there was plenty more to cover.

As a result it’s our longest episode so far, but Keiran and I at least didn’t find any of it boring. If you want to skip to a particular section, for example the key part on theories of consciousness and their practical implications, you can always use the chapters function.

Two quick notices before that though.

First we recently put out an article that might be very interesting to podcast subscribers, called Advice on how to read our advice. We go through the 8 ways people most often misunderstand our advice, and how they should approach it instead. If you might use the show to shape your career, I definitely recommend having a read. You can find a link to that in the show notes.

Second, for the last two months we’ve been publishing advice from a dozen or so people whose careers we admire, many of whom are working on the problems we’ve been focusing on on this show.

The catch is the advice is anonymised, so the people we spoke with wouldn’t have to worry about whether their employer would be happy with what they were saying, or otherwise censor themselves for reputational reasons.

We’ve released six sets of answers so far, including responses to:

What bad habits do you see among people trying to improve the world?

How risk-averse should talented young people be about their careers?

How have you seen talented people fail in their work?

I’ll link to those in the show notes. If you enjoy this podcast I expect you’ll enjoy the insights in the anonymous advice series as well.

Alright, without further ado, here’s my colleague Arden Koehler and me interviewing one of the most cited living philosophers, Prof David Chalmers.

The interview begins [0:02:11]

Robert Wiblin: Today, I’m speaking with Professor David Chalmers. Dave is a philosopher at New York University and director of the Center for Mind, Brain, and Consciousness. He specializes in philosophy of mind and cognitive science and is interested in the philosophy of language, epistemology, metaphysics, and philosophical questions about philosophy itself.

He’s co-director of the PhilPapers Foundation, which maintains the largest catalogue of philosophical books and papers in the world with 2.4 million entries and over 200,000 users and he’s also an honorary professor of philosophy at the Australian National University where I actually happened to first meet him about 10 years ago when I was an undergraduate.

Dave is perhaps best known for his work on the philosophy and science of consciousness, and especially for his work on finding and trying to answer the hard problem of consciousness, which we’ll talk about in a minute. He’s gone on to help pioneer the new field of consciousness studies, which lies somewhere between neuroscience, psychology, and philosophy.

And he’s also just generally a very prolific author who’s written a lot of articles and given a bunch of talks on future-related topics that matter to people who potentially want to steer the direction that humanity is going in. And, I was also surprised that Dave, you grew up in the same city as me in Adelaide, Australia.

David Chalmers: Yeah, that’s right. I lived there until I was about 20 or so.

Robert Wiblin: Yeah, me too. I guess I moved away when I was 18 to go study. I guess a lot of good people leave Adelaide, is what I say.

David Chalmers: Which football team did you support?

Robert Wiblin: Oh, neither, embarrassingly. What about you?

David Chalmers: Ah, Sturt.

Robert Wiblin: Okay, right. Yeah, actually we maybe grew up in the same suburb. I grew up in Unley.

David Chalmers: Oh really? I went to Unley High School. How about you?

Robert Wiblin: Oh, I almost went to Unley High School. I actually went to Glenunga, which is like right nearby.

David Chalmers: Yeah, I grew up in Mitcham which is just south of there.

Robert Wiblin: Yeah, I went to Mitcham Primary School.

David Chalmers: Well how about that. We probably have many mutual friends.

Robert Wiblin: Yeah. Anyway, today we’re also joined by Arden Koehler, and she’s the newest member of the 80,000 Hours research team. Coincidentally, she’s actually until now been a PhD student in philosophy at NYU, where she’s been studying ethics, and she happens to know Dave from being a teaching assistant in one of his undergraduate classes. Thanks for coming on the show, Arden.

Arden Koehler: Thanks. Excited to be here.

Robert Wiblin: All right, so today we hope to get to talk about a whole lot of juicy topics like the simulation hypothesis and virtual reality, some of the work on what philosophers actually do and whether they’re succeeding at it, and the ethical implications of different theories of consciousness.

But first, as I usually ask, what are you working on at the moment and why do you think it’s really important?

David Chalmers: As always, I’m working on a whole lot of things at any given time, but I guess the biggest thing I’m working on right now is trying to finish a book on philosophical issues about virtual reality and simulated worlds and trying to approach many philosophical questions through that lens.

And why is that important? Well, theoretically I think this just provides a very productive way of shedding light on some very traditional philosophical questions about knowledge of the external world, about the nature of reality and about the value of lives. And, at the same time, it raises a whole bunch of very new philosophical questions about technologies which are coming into our lives today.

Technologies involving virtual reality and virtual worlds: why is it practically important? Well, I think people are beginning to spend more and more time in virtual worlds of various sorts and it’s easy to imagine that in the future, we’re going to have at least the option of spending a whole lot of time in virtual worlds and increasingly sophisticated virtual realities.

And I think the question that’s going to arise is, “Is this actually a meaningful good way to spend one’s life, or is there something deficient about it?” I think if you’re interested in building, for example, a better or more valuable future, you want to give some attention to some of these issues about the status of virtual worlds and thinking about what makes for the best world.

The other thing I’m thinking about are general issues about consciousness and its relationship to the physical world. Consciousness is one of the most central phenomena in our lives and one of the most ill-understood, so intellectually it’s fascinating. But again, practically for you and your listeners interested in building a better world, arguably, consciousness is one of the primary determinants of the value of our lives. Some people think it’s the only determinant, but generally people believe it’s at least one of the primary determinants of what makes a life better or worse.

So if you’re trying to think about what makes for a better world and for better lives, I think you just have to think about consciousness. And I would love to see people interested in building a better world get really seriously interested in some of these issues about consciousness. To think about how focusing on different states of consciousness can indeed play a role in helping us to build a better world.

Philosopher’s survey [0:06:37]

Robert Wiblin: So back in 2009, you ran this survey with David Bourget where you surveyed quite a lot of philosophers about what they thought about current issues in philosophy.

The bottom line is that the results suggested that there’s just very little consensus among philosophers about a lot of these questions. There were some points of agreement, but a lot of points of difference, I guess. What are the main things that you think you learned from running this survey?

David Chalmers: Good question. The main things I learned were specific answers to specific questions. We sent out emails to about 2,000 philosophers at a hundred-odd departments of philosophy in the US, Canada, Australia, New Zealand, the UK, and Europe, and we got about a 50% response rate.

So we’ve got a thousand people responding, each answering about 30 questions, which typically gave them the choice between two views, like mind: physicalism or non-physicalism; normative ethics: consequentialism, deontology or virtue ethics; and so on. They got the option of accepting an answer or leaning towards that answer.

Or, there was a whole host of “other” options. Like the question is too meaningless to answer or that they’re insufficiently familiar with the details or it all depends what you mean by a key term and so on. Philosophers love those “other” options, but still we got enough information on a lot of these for it to be very interesting.

At a general level, it wasn’t terribly surprising. We found a lot of disagreement about the answers to these questions, and many of them ended up around 50-50. Platonism versus nominalism about abstract objects, roughly whether abstract objects like numbers exist or not, came out about 50-50. Physicalism versus non-physicalism came out about 56% for physicalism, 28% for non-physicalism.

The rest were varieties of agnostic. The biggest consensus we got was on the external world: realism about the external world, the view that we know the external world exists. Skepticism is the position that we don’t know; idealism, that it’s all in the mind. I think we got about 80% for realism, or close to that, and maybe 5% each for skepticism and idealism.

Arden Koehler: I’m sort of relieved.

David Chalmers: If you defer to philosophers about this, we can now infer that we do know about the external world. But at the same time, to quantify the degree to which we should be surprised, we ran a meta-survey: after people returned the survey, in the ensuing couple of weeks, we asked them for their predictions about what the results of the survey would be.

So we could say, okay, with respect to the question, is there an analytic-synthetic distinction, that is, things which are true in virtue of the meanings of words versus things which are true in virtue of the world?

The actual answer to that question came out 70 to 30. 70% of people said there is an analytic-synthetic distinction; 30% said there’s not. But philosophers’ predictions were that it would come out 50-50, so actually, many philosophers had a false sociological belief about philosophers.

They tended to believe that the analytic-synthetic distinction is less popular and less widely believed than it actually is, and we got many results of that form where people underestimated or overestimated the popularity of certain hypotheses by up to about 20%. So that was the way of actually quantifying how surprised you might be about these results.

Now I think we also got to quantify people’s performance on the meta-survey. I’m pleased to say that I came out in the top five or so for my performance on the meta-survey; maybe I cheated a bit because I’d run some test surveys along the way. So I was relatively well informed about this.

How surprised was I by the results of the survey? Well, I was still surprised. The one that surprised me the most was the question about aesthetic value. Aesthetic value, is it objective or subjective? I was sure that a large majority of people would say it’s subjective. In fact, we got more people saying it’s objective.

Arden Koehler: Yeah. I’m surprised by that one too.

Robert Wiblin: Yeah. That’s one we pulled out to potentially discuss. I saw that and I just couldn’t believe it. This raises an interesting question of what you should do when you get the results of the survey and you just think the answer is absolutely mental. There’s a plurality saying that they think aesthetic value is objective. And I’m like, do they really think that? If there were another set of beings who hated the art that we liked, and there were no other beings that liked it, would they just be mistaken? Is the art that they hate in fact the right art, the best art?

Yeah, what should one do? Should one think maybe we’re answering a different question and we’re misunderstanding one another or should one just be like, “Wow, these people thought about it as much as I have and they disagree”.

Did you shift your views a lot? Maybe in reaction to the results even when you thought they were wrong?

David Chalmers: My reaction to getting a surprising result like that was to think that probably they’re understanding the question in a way different from the way that I understand it.

We very deliberately did it with just very short labels, not with long expositions of whatever your option means. Just because that process is endless and contestable. So it may be that what people meant when they said that aesthetic value is objective is something different from what you and I meant.

Not that there are some objective standards that would apply, for example, to aliens, but maybe there are standards that apply, say, to more than one human being, such that some works of art can be better than others, at least relative to human beings’ standards, and one can make aesthetic mistakes.

It’s not totally up to an individual observer; there are certain very general norms of aesthetic appreciation. So my sense is, and I’ve talked to a few people in aesthetics about this result, that typically debates about whether aesthetic value is objective or subjective have that form, rather than being about the possibility of species with different aesthetic norms who might equally be getting things right.

So that was my reaction and that was an issue I was less familiar with than some others so maybe I updated on what philosophers mean by objective or subjective. In other cases, there is this general question as to whether we should defer to the results of this survey. If it turns out that most philosophers think that P, then why don’t you think P? I think very few philosophers have this reaction, and I certainly didn’t have it. It’s a field where disagreement is rife. These are all hard issues. Philosophers can get these things wrong. So I don’t think there was a lot of actual updating of first order views.

Robert Wiblin: But not the philosopher who’s saying that. Other philosophers of course can get it wrong.

David Chalmers: Oh, well I think philosophers at a certain level have a certain kind of humility. At a first order level, we give arguments for our views and make them as strong as we can and accept them maybe or even are confident in them. But then at a second order level, we step back and say, “Well, these issues are really hard and we may be getting things wrong”.

That’s something like the attitude I have in doing philosophy. I think I’ve got good arguments for my views and I’ll give them. But then, at a higher order level, you can’t help but step back and say, “There’s a good chance that I’m wrong”. I still think you should try and pursue those views as well and as robustly as you can. But it does mean that for practical purposes, just say, a life and death issue depending on this, you might want to factor in a good degree of humility into your actions.

Free will [0:13:37]

Arden Koehler: A lot of the survey responses were very mixed. There wasn’t even a majority. It was like split in thirds between yes, no or other. One place where I thought it could make sense to update in the direction of philosophers is the compatibilism question or the free will question. So there was a question about free will, compatibilism, libertarianism or no free will. And just in the wider world, it seems like a lot of people don’t recognize that first option and the fact that 59% of philosophers favor compatibilism. I thought that’s like maybe one of the few cases where it could make sense to read the survey and be like, “Oh, maybe I’ll update in that direction”.

David Chalmers: Yeah, I guess maybe interact with what you think about the first order–

Arden Koehler: It’s true, definitely.

David Chalmers: –First order question. I’m sympathetic, but it is the case that philosophers are generally much more sympathetic with compatibilism than non-philosophers, many of whom think it’s crazy. I’m inclined to think this is a case where the philosophers have thought about it a bit more deeply than the non-philosophers, especially when you think about why we care about free will: because we care about moral responsibility, having some kind of genuine responsibility for your actions.

And once you think about this, you start to ask: even if the Universe is deterministic, is there still a distinction between being responsible for your actions and not? And it’s very easy to motivate the idea that even in a deterministic Universe, there could be that distinction.

So that leads you in the direction of, even in a deterministic Universe, there could be free will. Now someone interested in free will could say, well, that wasn’t what I cared about (moral responsibility). I cared about some other stronger things, like being able to fundamentally make a difference to the time course of the Universe in some very strong sense.

So I think your average philosopher’s gonna say, “Okay, well maybe that’s not compatible with determinism. There’s a very strong sense of free will, but that one actually turns out to be less worth caring about than this other one tied to moral responsibility”.

And I think at that point, we’re in something like a verbal dispute, which can happen in these cases where different people mean different things by free will. I think that diagnosis is more apt for some of these questions than for others, but this is a case where your average philosopher uses “free will” for the thing tied to moral responsibility. Many people outside philosophy may still think, ah, why use “free will” for that? I want to use “free will” for this other thing, this ability to fundamentally go against the laws of nature. And then we just have a difference about which one is worth caring about.

Robert Wiblin: Yeah. I’m pretty drawn to compatibilism, so I was relieved to see this survey result definitely confirming that my view was correct. I think that probably compatibilism is right. At least like for a particular understanding of the question, but I suppose I don’t think that you can get more responsibility out of that, so I’m kind of maybe not drawn to it for that reason.

Is the reason that most people are going for compatibilism is that they want to bring back moral responsibility for things?

David Chalmers: It’s not so much that they want to bring it back. It’s just that they think that there is an intuitive difference between cases where we are morally responsible and cases where we are not morally responsible, even in a deterministic Universe.

Robert Wiblin: Yeah. I guess it just seems like if you’re not responsible for like the preferences that you have, then even though in a sense you like had the ability to predict and then create the outcome that you wanted, you’re not then culpable for having had the wrong preferences in my mind.

Arden Koehler: I think often the idea is that you can be responsible for who you are and it’s not because you created who you are freely or something like that. But that we just have to reunderstand what you can be responsible for. And one of the things is like being the person you are with the traits you have and the character you have and then everything that flows from that can be a responsibility.

Robert Wiblin: Yeah. It seems odd to me, but yeah.

David Chalmers: And even if you’re not responsible for your character, you might think you can be responsible for your actions. You can deny the principle that responsibility for your actions requires responsibility for everything that led to your actions. So you think no one is morally responsible for anything?

Robert Wiblin: Fundamentally. I think we should punish people and reward them and so on. I just don’t think that it’s because of like deservingness or moral culpability.

David Chalmers: Oh, well. I think many people want to distinguish moral responsibility from desert or deservingness. I think some people think that to have this notion of desert that you really deserve certain things that would require some strong kind of free will, which might not exist in a deterministic Universe. But I think some people want to nevertheless say there’s a weaker notion of responsibility which can exist.

Robert Wiblin: Yeah. Okay. So, you can be causally responsible, I suppose, but I guess I don’t feel like that then necessarily creates like a motivation for retribution or punishment, like beyond the kinds of consequences that create the right incentives for people to produce good outcomes. Does that make sense?

David Chalmers: Yeah. But I think many people want to reconstruct a notion of moral responsibility that doesn’t go along with retribution and desert. You can have moral responsibility that doesn’t ground those things, but nonetheless grounds having certain attitudes towards people’s actions, where we say when they did the right thing and when they did the wrong thing.

Arden Koehler: Or like pride. Do you think that like it’s fundamentally inappropriate to feel proud of like a morally good action because–

Robert Wiblin: You should feel guilty or proud if doing so will produce good outcomes.

Arden Koehler: Right. So it’s not made appropriate by the character of the action. It’s just like instrumental.

Robert Wiblin: Yeah, definitely.

David Chalmers: There are still some ways of producing an outcome where we want to say you’re responsible for it, and you should feel guilty about it, and we should have certain attitudes towards you and treat you a certain way. Other ways it could be done, say under the influence of a drug that someone gave you, make those attitudes towards the person inappropriate, even in a deterministic Universe. I think one can draw those distinctions, and they end up being what we use to track moral responsibility.

Robert Wiblin: Yeah, it does. It’s interesting how people’s intuitions about the cases in which you should feel proud or not, or be punished or not seem to track so well. Like when the incentives actually will make society better. But anyway, we should probably move on because ultimately this wasn’t meant to be a section about compatibilism, specifically.

Survey correlations [0:20:06]

Robert Wiblin: I guess a really interesting finding from the survey, which Arden pointed out to me, is that there was like only very weak correlations between kind of the answers that philosophers were giving across different questions.

For example, if you survey the general public and ask their view on the death penalty, that gives you a remarkably good ability to predict their view on climate change or their view on international relations, which is in a sense kind of surprising: people are lined up in these ideological groupings where the answer to one question predicts another. But the highest correlation coefficient in this survey was 0.56, which is only moderate, and it was between moral realism and cognitivism, which obviously have a lot to do with one another directly.

Were you surprised by the low correlations, or the low level of ideological, maybe consistency is not the right word, ideological fervor between different camps within philosophy?

David Chalmers: I don’t know. Where I come from, 0.56 is a pretty high correlation coefficient. So I don’t know. Maybe it depends on the area.

Arden Koehler: Philosophers consider that a very high correlation?

David Chalmers: Well, in psychology it’s very rare you get something as high as that. You’re pretty happy to get 0.3. 0.56 corresponds to 80% agreement on the on-diagonal elements I think in that question.

You see, basically 80% of the realists are cognitivists and vice versa, and then maybe about 15% are in the off-diagonal, so 15% of the realists are non-cognitivists or something. So that’s one way of looking at it: it’s pretty high agreement. Given a person’s answer to one question, you can predict their answer to the other with 80% accuracy. But yeah, there are reasons to think it’ll be imperfect, because all these questions can involve subtle distinctions, and there are non-cognitivists who nonetheless consider themselves realists and vice versa.
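As a rough illustration of the relationship Chalmers describes here, the correlation between two yes/no survey questions is the phi coefficient of the 2x2 contingency table (Pearson correlation specialised to binary variables). The sketch below uses hypothetical round numbers, not the actual survey data, just to show how ~80% of respondents landing on the diagonal corresponds to a correlation in the neighbourhood of 0.56:

```python
# Minimal sketch (illustrative numbers only, not the real PhilPapers survey
# data): how percent-agreement on a 2x2 table relates to a correlation.
import math

def phi(a, b, c, d):
    """Phi coefficient for the 2x2 table [[a, b], [c, d]],
    where a and d are the 'agreeing' (on-diagonal) cells."""
    num = a * d - b * c
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return num / den

# Hypothetical 100 respondents, 80 on the diagonal with balanced margins:
# e.g. 40 realist cognitivists and 40 anti-realist non-cognitivists,
# with 10 in each off-diagonal cell.
print(phi(40, 10, 10, 40))  # 0.6, in the ballpark of the survey's 0.56
```

With balanced margins, 80% agreement gives phi = 0.6; unbalanced margins pull the coefficient down, which is consistent with the observed 0.56 at roughly that level of agreement.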

Robert Wiblin: Yeah. I guess in politics it seems that that brings out people’s tribal instincts, so they tend to group together for practical reasons, if not intellectual reasons, like kind of all sharing the same views or like wanting to fall into line and are particularly incentivized to do that. An interesting thing, I’ll provide a link to a study looking at how ideologically tightly grouped are people in politics, which found that uneducated people just like have views all over the place. Their views on one question don’t really predict their views on another.

So in a sense they’re like very ideologically flexible, whereas the more educated you get, and once you’ve like done a PhD, then you’re just like completely in one camp, and like all of your views line up very consistently, which I guess from one point of view could be viewed as a success because it means that they’ve like brought their views on different questions into line by seeing kind of common elements. On the other hand it could just be viewed as a social phenomenon where it’s like you’ve fallen in line with a social group and now you’ve just adopted all of their views.

David Chalmers: We did do a factor analysis and we found some very strong correlations and factors and we can eyeball the factors and try and give them labels. There was a realist factor for people who tend to think certain phenomena are real. There was a naturalist factor for people who wanted to reduce things. There’s an internalist and externalist factor tending to think the environment matters or what’s inside the system matters.

Those are very loose groupings, but we did find pretty strong correlations between clusters of five or six questions where the response to one question would predict pretty strongly response to others in the cluster.

Arden Koehler: For what it’s worth, to my mind it’s kind of a good thing, or I was pleased to see, that 0.56 was the highest correlation, because it seems like, at least for many of the questions that were on the survey, there really wasn’t that much logical connection between them. And so it does seem like it should be possible to hold different views on different ones. Was that part of how you designed the survey? Like each question was supposed to be logically separate from the others?

David Chalmers: Yeah. At least if two questions were too closely related, that was a reason not to include both. For example, we had a question about Newcomb’s problem: are you a one-box or two-box person in Newcomb’s paradox?

Arden Koehler: Didn’t most people say like other or something?

David Chalmers: It was probably the most technical question on the survey, the one that required the most background knowledge, so maybe half the people said, “Ah, I haven’t thought about it enough”. Of the people who did answer, I think there was a small majority for two boxes, maybe three to two or something. We also thought about asking a question about decision theory, causal or evidential, but then, what’s the point of asking that question when it’s going to correlate so strongly with Newcomb’s problem? Basically, two-boxers are usually causal decision theorists and one-boxers are usually evidential decision theorists, and yeah, these can maybe come apart in some circumstances. But that question just didn’t really seem worth spending a whole extra one of our 30 questions on, because it was so strongly correlated. Whereas moral cognitivism versus moral realism are distinct enough that, while we expected the correlation, it’s also been informative to find out for how many people these come apart; maybe we got just enough information out of that.

Oh, we have a new version of the survey that we’re going to launch very soon because the first one was conducted in November 2009, so we’ve got to get on this quick because November 2019, which is next month, will be the 10 year anniversary.

For the new survey, we’re going to ask the same 30 questions again and see how answers to those have changed over 10 years, both individually and as a distribution, which will be interesting. And we also want to ask a whole bunch of new questions, maybe another 10 questions we’ll ask to everyone and then another 50 or 60 questions we’ll ask to some randomly selected subset of the population just to get more information about more questions. And for those, maybe you’d be less choosy about picking ones which are completely distinct from each other. So maybe there’ll be some even stronger correlations among these. If you have any questions you want to suggest for the survey, feel free.

Robert Wiblin: Yeah, I shall have to think about that. Maybe listeners can email in if they’ve got a good one.

David Chalmers: Okay, great.

Robert Wiblin: Do you have any particular predictions about how things will go? I guess I predict that since this is a famous survey now, you’ll get a higher response rate, because people will be like, finally, I get a chance to cast my vote on philosophical questions.

David Chalmers: Yeah, it is. I think it’s respectable now. Someone drew up a list of the most highly cited papers in philosophy over the last 10 years, and this one, the paper that David and I published was number one. Not because of any particular merit of any amazing insights in our paper, but because people just wanted to cite the results. Papers on contextualism say most philosophers believe this. Many papers on free will say this, et cetera.

So, yeah. So at the very least it’s a respectable thing. Hopefully we’ll get more respondents this time.

Robert Wiblin: Yeah, that’s a genius strategy, Dave. To get other people to say things and then associate yourself with that and then get lots of attention via them. If only I could find some way to do that in my own life.

David Chalmers: Yeah, I’m not sure one should really become known as the world’s leading cataloger of other philosophers’ views.

Robert Wiblin: Kind of citation farming, yeah. So it raises an interesting question of why this hasn’t been done long ago, ’cause it seems like you’ve got this whole field that’s trying to answer these questions.

Wouldn’t it be useful to know what they think about these questions? It would help us update our views and produce knowledge for the public to form opinions about all of these issues. It’s kind of a curious social or professional phenomenon to me that this took until 2009 to happen.

David Chalmers: Yeah, that’s a good question. As far as I know they haven’t. The same question applies to many fields. Of course, it’d be great to know what most physicists think and most chemists think and so on. And my sense is that it has not happened in many fields. There are some little things I’ve seen here and there. Small groups of physicists, and I guess there’s something involving economists, but yeah, I don’t know as A), it’s logistically tricky, I guess, to do it and B), in the case of philosophy, there’s the extra thing that philosophers are meant to think independently. They pride themselves on this. They don’t defer. They think of themselves as not deferring to other philosophers much. We also know these are fields with lots of disagreements. So if you’re just trying to find the truth about a topic, I don’t think most philosophers think that asking a bunch of philosophers is the best response whereas this is actually a reason why you might’ve expected it to happen sooner in physics or chemistry or economics because there is somewhat more consensus in those fields. But maybe the thought is when there’s a consensus, it’s obvious what the consensus is so we don’t need to do the survey.

I do predict that if people do these surveys in many fields, there’ll be surprises. But the other thing was that David and I were just in a position to do it because we had set up this web service, PhilPapers, which most philosophers turned out to be users of and it wasn’t terribly hard to extend that to get a full database and a controlled population that we could survey. This year we’ll be in a position to go much wider still. Maybe just the internet makes these things a lot easier.

Robert Wiblin: Yeah. There is the Chicago Booth School. They do very regular surveys of economists on like policy issues, which I guess, I suppose it seems very decision relevant there, so it’s easy to maybe get funding to find that out.

David Chalmers: They don’t take surveys on theoretical questions?

Robert Wiblin: Much less, I think. Yeah. It’s more typical on minimum wage, yay or nay kind of thing. Yeah. At least like those are the ones that I’ve seen anyway.

David Chalmers: Every now and then, I see polls of physicists on, like, the correct interpretation of quantum mechanics.

I was just at a conference of physicists and philosophers where they actually took a survey at the end of the conference on many questions on the foundations of physics. But again, the groups are small. I think you’ve got to really try and really go for the biggest results.

Robert Wiblin: Yeah. there’s also the survey of AI researchers on when they expect different thresholds of competence in AI. Which we talked about on another episode a while ago with Katja Grace. Interestingly, it seemed like the conclusion with that from talking with her was that we don’t know and AI researchers themselves don’t really know because their answers are kind of inconsistent and are very spread out. So I think that is potentially a real finding, both from this survey about philosophers and the survey about AI researchers is just that there’s no consensus, which I think should then let people to be more agnostic and say, “Well, a lot of different things are possible and maybe we should hedge our bets a bit on all of these questions”.

David Chalmers: Yeah. Although again, there’s probably a selection effect to take surveys on questions where there’s no consensus, because a lot of the time when there’s a consensus, it’ll be fairly obvious and therefore not worth taking a survey about.

Robert Wiblin: Yeah, that makes sense. I guess that helps explain why there's not a survey of chemists on, like, really live controversial questions in chemistry, because presumably at any given moment that set of questions is much more limited than amongst philosophers.

David Chalmers: Philosophy is basically selected to be the field where there's controversy and disagreement over questions, because many fields started out as philosophy. Newton was a philosopher, but he came up with some methods to settle these questions. And once you've got those methods, there's a certain degree of agreement. Then we spin it off and we call it physics. It's no longer philosophy, and this happens again and again.

So there’s some selection effect for philosophy basically to be a field where almost by definition, there’s disagreement over the key questions.

Robert Wiblin: Yeah. That leads nicely into the next section on your paper about why there hasn't been more progress in philosophy. But before that, I wanted to ask: would it be good for people's academic careers to be doing these surveys in different fields?

Cause it seems like, well, you can get a lot of citations at least, and I suppose people will pay attention to you and maybe know your name because you did this thing that they actually find really interesting to read about and blog about. Is this something that people could potentially do that's both very useful, at least from my point of view, and also good for their academic career?

David Chalmers: I think, yes, it's useful. No, it's not particularly good for your academic career. If I had done this just starting out and it had been the main thing I did... it's certainly not the kind of thing that particularly helps you get a job in a leading philosophy department.

It’s viewed as somehow statistical and something that doesn’t require great philosophical insight. So fortunately, I had a reputation already, and my co-author David has a reputation for working on other things in the philosophy of mind. So, is it a marginal benefit?

Yeah, it probably helped. It might've helped David on his tenure case to have such a highly cited paper. But even then, probably when people are writing letters of evaluation and so on, it's something that gets a few lines along the way, rather than people saying, "Wow, this is an amazing service to the field."

Or maybe they think it’s a good service to the field, but somehow it’s not something that brings you individual credit.

Robert Wiblin: Yeah. It’s somehow just like almost too crass. So too practical maybe for academia.

David Chalmers: It’s certainly true that when we took the survey, we got many people who responded by saying, this is a ridiculous thing to be doing and unphilosophical to be taking a survey of philosophers as if we should be deciding philosophical questions by democracy. That said, a lot of people loved it. But we did get quite a lot of negative reactions the first time around, which you try to answer by saying, well, there are various obvious reasons why this is important.

People do make sociological assumptions about what philosophers believe in writing their philosophy papers all the time. For example, they think they don't need to defend certain assumptions because they think most philosophers already accept them, and so on. Having better information about that should allow you to do better philosophy.

Arden Koehler: That actually suggests that philosophers do think the fact that a lot of philosophers believe something is a good reason to believe it if they–

Robert Wiblin: They’re going to use that as a–

David Chalmers: I think it’s more than just sociologically. They’re going to think that if enough people disagree with this premise, they’re going to reject my paper at the starting point. But for my own purposes, I need you to argue for this if I want to bring people along with me. But yeah, philosophers do think about this. Maybe this is not going to be important to them in figuring out what’s true, but it is going to be important to them in figuring out how to write a paper and a lot about what’s involved in writing a philosophy paper is of course not about figuring out what’s true, but convincing other people that it’s true.

Robert Wiblin: There is this interesting field of experimental philosophy, where people would take the premises that philosophers claimed every normal person would believe, and then actually survey people to see if they agreed.

And they often found that the typical person off the street would very often disagree with what philosophers thought was completely common sense. I wish I could think of a good example off the top of my head. I suppose I have learned from looking at lots and lots of polling data on political questions and academic questions that it's very hard to predict what a typical person thinks, or what the distribution of views is, because all of us have such a filtered perspective: we tend to associate with people who have the same common sense as us.

And so yeah. For example, if you look at polling in the United States, it seems like immigration has never been more popular with the American public since polling began, and trade has never been more popular. But that's absolutely not the perception you would get by reading the newspaper or just guessing what it would be.

It’s constantly shows that, yeah, it produces these really surprising results.

David Chalmers: Yeah. Experimental philosophy is also doing all this stuff cross-culturally. Initially it seemed to turn out that a whole lot of assumptions that Western philosophers were making, about say knowledge versus justified true belief, might be rejected by people in different cultures.

That was about 20-odd years ago. I think now the trend has been towards making the case that there's actually more convergence between and across cultures than people had thought before. But I actually just lately got interested in the question of to what extent intuitions about consciousness are shared across cultures, and various people have tried to make the case that some Western assumptions are not shared in other cultures.

I’m hoping that we might get some data from that soon. Of course, we can get limited data from polling philosophers on these questions. But to get a broader data, one would need somehow to poll other people in a way that we don’t have access to doing right now. But I’m hoping some experimental philosophers will start doing that.

Progress in philosophy [0:35:01]

Robert Wiblin: Yeah. Coming back to this paper you wrote on why there hasn't been more progress in philosophy. I think it was back in 2004 that you wrote that consensus in philosophy is as hard to obtain as it ever was, and decisive arguments are as rare as they ever were. And to me, this is the largest disappointment in the practice of philosophy. I guess, what can be said in defense of philosophy, given that there hasn't been convergence on the right answers to most of these questions?

David Chalmers: Well, the big obvious thing that can be said in defense of philosophy here is the thing that I said already: philosophy by its nature is the field where there's disagreement, because once we obtain methods for producing agreement on questions in reasonably decisive ways, we spin it off and it's no longer philosophy.

So from that perspective, philosophy has been this incredibly effective incubator of disciplines. Physics spun out of philosophy. Psychology spun out of philosophy. Arguably to some extent, economics and linguistics spun out of philosophy. So what usually happens is not that we entirely solve the whole of a philosophical problem, but that we come up with some methods of, say, making progress experimentally or formally on a certain subquestion or aspect of that question, and then that gets spun off. The part that we haven't figured out how to think about well enough remains philosophy.

Is that the philosophers' fault? No, absolutely not. Look at all the great philosophers who successfully addressed those questions. It's just the nature of the field. There is still the question of why the questions that remain are as hard as they are. Certainly they've been selected for being hard, so one shouldn't be surprised that philosophical questions are subject to disagreement. But still, faced with any individual question, like say the mind-body problem, it's like, damn, this is so hard, why don't more people agree with me? Why is this so hard to come to grips with? And my own view is that it's probably something about the difference in the character of the problems, not differences in the character of the field.

I don’t think it’s that. Philosophy, like every discipline has its pathologies, but I kind of suspect that if you sort of redid philosophy with a different population with somewhat different pathologies, you’d still find disagreement over the big questions of philosophy, which is subject to the biggest, most fundamental disagreement, like say, I don’t know physicalism versus non-physicalism about the mind or consequentialism versus non-consequentialism about ethics or deep differences in political philosophy.

I’m inclined to think that those were probably, those are, those are just disagreements that run deep. And it’s something about the nature of the questions that at least so far, we’re not in a position to compel agreement on them. So yeah. So on this way of looking at things, the problem is not exactly the problem of philosophers, which is not to say that it might not be something specific to our situation.

And in the future, with enough information and enough reasoning, enough new background, enough advances, these problems might eventually be solved.

Arden Koehler: This is maybe a little bit too big of a question, or will take us a little bit off track. But even though we haven't converged very much on true answers to the big questions of philosophy, like how did the Universe begin and the mind-body problem, that kind of thing, we do make progress on little questions. And we also clarify questions a lot, and we create new questions, and we map out logical space and figure out sort of what's really going on underneath. Apparent disagreements are resolved as verbal disagreements, all kinds of other stuff like that. So that also sort of feels like progress to me.

What do you think is the value of that kind of progress? Does it have independent value, or is its value mostly derivative of it allowing us to do a better job of answering philosophical questions, big or small?

David Chalmers: I think it absolutely is progress. Most of what philosophers typically call, or think of as, progress in the field consists of this smaller kind of progress: making an important distinction, getting a new framework, finding a new argument for a view, refuting some versions of a view. So, not deciding the big question once and for all, but getting new reasons on either side, carving up the landscape better, getting a better understanding. I think all that is really important. I think it's very conducive towards understanding, and I think understanding is a virtue even if it's not necessarily conducive towards first order knowledge.

Arden Koehler: Right. You at least understand what you don’t know.

David Chalmers: Yeah. I think understanding is genuinely important. A lot depends on what you see as the aim of philosophy, whether it's a practical aim. I'm not interested in philosophy primarily to improve the world; I'm interested in philosophy to actually, ultimately understand reality. And then all of this understanding is, by definition, going to be valuable even if it doesn't produce first order knowledge.

I would like as part of this quest to understand reality, to know things about reality, not just to understand issues about consciousness, but to know a correct theory of consciousness and to fully understand and know the truth about the relationship between mind and body.

So I think that’s kind of an ultimate goal. But even if you fall short of that goal, there are these forms of understanding that don’t involve knowing the deep and ultimate truth that I think if you’re in philosophy to understand the world, I think those things at least feel as if they have a really significant intellectual value.

Now, how does that play into the question of the practical role of philosophy? I'm not sure. I'd like to think that even if we don't totally resolve an issue, say between the truth of consequentialist and nonconsequentialist theories in ethics, nonetheless understanding those issues, and the considerations on either side, and which varieties work better than others, is still going to be of a whole lot of practical use in making the world a better place and so on. So yeah, maybe progress on the smaller questions that you're raising can nonetheless play at least some fraction of the practical role that definitive answers to the big questions would have.

Robert Wiblin: Yeah. You mentioned earlier that you don't think the pathologies of philosophy are worse than in other fields. I want to push back on that a little bit, because it seems like, to some extent, one way to succeed as a philosopher is just to carve out some new position that someone else hasn't taken, and there are only so many positions out there.

At some point people just end up pushing into more and more ridiculous views in order to have something new to say, in order to get an academic job. I'm reminded of a story of someone who was starting out their PhD: both they and their supervisor agreed on a philosophical view, and the student says, "Could I do my thesis on explaining why this view is correct?"

And the supervisor said, "No. There's no point in writing a defense of the correct view. That's a pointless move." From an academic career point of view, you've got to find something new and different to say. I guess I wonder whether that creates a perverse incentive for philosophers to just spread out all across the board, to have many different views, in order to make sure that they can justify having their jobs.

David Chalmers: Yeah. There may be something to that. Philosophy does have its pathologies. Certainly. I don’t think I said they’re no worse than in other fields. I think every field has its pathologies. Philosophy may be open to having more because we’re not as constrained, say by experiments and formal methods and so on. The things that pin things down more in other fields. That said, I think every academic field I’ve gotten to know well, which is quite a few by now, has got very, very serious pathologies. And I’m not saying philosophers are any better and they could well be worse.

The one that you mentioned, I think it happens. There are certain kinds of reward for interesting disagreement in philosophy, not disagreement alone. There are certainly rewards for novelty, as in all academic fields.

The same goes on in psychology. If you come up with results confirming what people thought would be the case, it's very hard to get them published. If you come up with things disconfirming, it's much easier to get them published and–

Robert Wiblin: But then exactly, we do see there, it’s like–

David Chalmers: Massive biases.

Robert Wiblin: It seems highly corrupted. Yeah. And it just leads to like very bad ideas getting promoted and–

Arden Koehler: Just to defend philosophy for a second. You could think that maybe one of the reasons that having a discipline of philosophy is useful is making sure that people are sort of like checking all of these strange seeming views and like coming up with new views.

Maybe most of them, definitely most of them, can't be right. But then they might stumble upon something that really is right, or that can give us a deeper understanding of something, and it really is useful to have that pressure toward novelty. I'm not sure this justifies it in the case of the story that you told, but like–

Robert Wiblin: Well, yeah. I think that that is like a very good justification for having philosophy as a field. Like, yeah, exploring the space of ideas that we currently think are wacky, but then it means that there’s no mystery why it is that philosophers have very widely different views cause like we don’t allow them to get the job unless they do.

Arden Koehler: This is just pushing back on the idea that there might be a pathology of philosophy. Maybe it’s actually a feature.

David Chalmers: Yeah. I think there are various reasons why it's good to have different views be explored and understood. It's also true in science, of course. You want to make sure that views are not being overlooked, and it's good for the field to have individuals who are pursuing all kinds of different views, even if the field as a whole comes to a collective judgment.

But just a couple of things. I think this does happen in philosophy. I think the reverse also happens, as in science. People can be rewarded for sticking to known paradigms and for extending them in certain directions. There are many, many supervisors who are perfectly happy when their students work on extending their views.

But I think the question you’re really trying to raise here is whether all that disagreement that we find on big philosophical questions is somehow explained by this effect. I’d be extremely surprised if that’s the case. I think it may well make a difference to the numbers and many things make a difference.

But if the idea is that without this particular pathology we might've actually had convergence on a certain view of normative ethics or a certain view of the mind-body problem, I really find that extremely implausible. I think it's something about the questions here: the evidence is just not really in, and there are strong considerations in both directions. My expectation is you could rerun philosophy with many different psychologies and many different pathologies, and there would still be these kinds of incommensurable considerations in both directions.

It’s certainly true. There are subcultures that converge on some of these things. I think that’s actually a way of making philosophical progress is through having subcultures that share certain assumptions. So yeah, maybe most effective altruists say are consequentialists of a certain kind.

And one way to make philosophical progress is to make those assumptions and go ahead: to push things ahead in a way which is harder if you don't share those assumptions. But if you then come back and say, "Ah, it's just a pathology, say, of all of academia, that everyone is not a consequentialist", I think that's just an overly optimistic view of the intellectual territory here. I think the reasons to worry about consequentialism are just very, very strong reasons. Almost any way I can see of re-running philosophy, there's going to be a very big body of people who reject it. Which is not to say that one view isn't right, but just to say the reasons run deep.

Robert Wiblin: Yeah. Another reason is just that it's so easy to deny the premises of arguments that people make, or even sometimes to deny the inferences, even when they seem pretty strong.

Why is it that it’s like, it seems easier just to deny the strength of arguments in philosophy than in other fields? I guess with experimental fields, it seems like it’s more obvious why it’s a bit different, but it seems like it’s completely different from mathematics as well, which is also like dealing in the realm purely of ideas.

Yeah. In math, there's usually much more agreement on whether arguments go through or don't. But in philosophy, people just completely regularly deny arguments other people find incredibly compelling.

David Chalmers: Yeah. Well, I think that basically comes down to having these certain standard methods both in mathematics and in the sciences.

The method of proof in mathematics gives people a consensus framework. You've got a consensus on what counts as a good proof and a consensus on when something is proved. And likewise in the sciences, you've got the experimental method, with reasonable consensus on what the method is and what counts as establishing a result.

Of course in the sciences it's a lot more blurry and there is room for a lot of disagreement in specific cases. But we've basically got agreement on a broad method, which can serve to at least tentatively establish results in science and to definitively establish them in mathematics.

I just don’t think there’s any analog to that in philosophy. In philosophy, we do have this method of argument, but the argument all have to start from certain premises and those premises are all questionable. You might say, well in principle someone could question the premises of a mathematical argument by say, questioning an axiom and so on. But —

Arden Koehler: They do do that, just not very often.

Robert Wiblin: Or then you end up with a different branch of mathematics or something. Or people are discussing a different set of objects.

David Chalmers: Yeah. And mathematicians are happy to back up and say, "Okay, well if you insist on doing that, for our purposes, mathematics is just what follows from these axioms". They might question the logic, but that looks even more eccentric. So just as a matter of fact, there are certain starting points that seem to compel sufficient agreement that they can serve as a foundation. Mathematics and science have those, and there are some areas that don't: areas where there are starting points that seem plausible to some people, but not to other people.

And you could still do this in the field of philosophy. If we're speaking about pathologies in philosophy, one pathology is that we spend endless time debating those foundational assumptions that we disagree about, and less time exploring the consequences. Which is one reason why I think it's actually very good to have subcultures. Maybe effective altruism is an instance of this, a subculture, or people who are interested in AI safety, where people make certain assumptions which might not be shared by philosophy as a field, but nonetheless go ahead and see what follows.

There’s still going to be the question of bringing it back to the field. In many of these cases, you might find disagreement about various foundational assumptions among the field as a whole, but if the project is important too, and that is, I think, as a matter of fact, how many fields end up getting spun off out of philosophy by subcultures pursuing their programs.

So if I were thinking about reforms to philosophy, I'd like to see a bit more reward for people making certain assumptions and seeing where they go with them. Right now that work can be rewarded, but I think it often looks a little bit eccentric to philosophers, especially those who don't share those assumptions. Maybe there's more reward for debating the foundational concepts.

Arden Koehler: Maybe because people have stronger views on them or something, the foundations.

Robert Wiblin: Or it feels more fundamental. I guess another reform that people sometimes suggest is that philosophers should spend more time hanging out with natural scientists and I guess also maybe vice versa.

Physicists would do well to hang out with philosophers. As someone who's actually done that, hanging out with neuroscientists and people who are thinking about how the mind works, do you think that would actually help, or is that just a bit of a platitude?

David Chalmers: I’m very skeptical that would make much difference for the primary reason it’s happened a lot already.

Robert Wiblin: And yet not everyone agrees, still.

David Chalmers: Knowing the science, hanging out with the scientists… It’s led to, I think, a lot of interesting progress in philosophy becoming richer and better informed.

But has it led to much convergence on those deeper questions? No, absolutely not. And very frequently, the scientists disagree as much on those big foundational questions as the philosophers do. They say, “Oh, that’s a matter for the philosophers. It’s above my pay grade.” Or they’ve got very strong opinions, but they go in different directions.

So physicists disagree about the foundations of quantum mechanics as much as philosophers do. Psychologists and neuroscientists, if you poke them, disagree about the mind-body problem about as much as philosophers do. So while I think it's very good for the field to be empirically informed, and a lot of the time empirical information is very relevant to these questions, it typically doesn't lead to convergence.

And one reason is that the philosophical questions, by their nature, have almost become the ones which are not so easy to empirically resolve. Typically you get an empirical premise from the sciences towards one of these philosophical questions. Some people say something about neuroscience, and therefore consciousness is physical. Well okay, it turns out that to make the step from neuroscience to the philosophical conclusion, you actually need a big, strong philosophical premise to link the two, and that premise ends up being just about as strong, much of the time, as what is needed.

So every now and then there's something coming from the sciences that might refute a previous philosophical view. One case that comes close to doing that, maybe one of the better cases, is relativity theory, which many people take to very strongly undermine the philosophical view known as presentism: the only things which are real are those that exist in the present.

But relativity theory says there are no facts about absolute simultaneity that could make it the case that there is a distinguished present in the whole Universe, and that makes it much harder to be a presentist. There are ways for presentism to survive. So there are cases where this happens.

Maybe Godel’s theorem helps to undermine a certain view known as formalism about mathematics. Where to be true is to be provable. Godel seemed to make a pretty good case that there’s unprovable truth. So every now and then it happens that science can lead to definitive progress on a philosophical question. But I think we find just a lot of the time that science will enrich the discussion of a philosophical question without really decisively settling it one way or another.

Simulations [0:51:30]

Arden Koehler: We actually want to talk about the idea that we might be living in a simulation and what, if anything, that might imply. A lot of people think that this is not a very serious topic, or that it's very silly, or at least that not very much that's useful can be said about it.

Why do you think it’s nonetheless worth talking about?

David Chalmers: I think it’s worth talking about in a number of ways. I got into this through thinking about some big traditional philosophical questions like how do we know about the external world? Is it possible that we could be in what Descartes called the Evil Demon scenario where a demon is trying to fool you into thinking that an external world exists when none of it is real. The modern version of that is the simulation hypothesis. How do you know that you’re not in a simulation? And many people use that to cast out on the kind of knowledge we have of the external world. Now that’s very central to traditional ways of thinking about these scenarios, that somehow if we’re in simulation, nothing is real. If the world is a giant simulation, then things like this glass and the computer I’m using and microphone and the world outside my window, none of them are real, just fake. It’s a fictional reality and that gets you to the line where if we don’t know we’re in a simulation, we can’t know anything at all. I’m actually inclined to think all that is based on a false presupposition. I think that if we’re in a simulation, things are still perfectly real.

If I’m in a simulation, this glass is real and so on. It’s just that if we’re in a simulation, we’re living in a digital world, a world made of, let’s say, bits. So we shouldn’t say, “None of this exists”. We should say “If we’re in a simulation, it has a different nature”, and I think that’s interesting for a number of reasons, if that’s right.

First, it suggests that some of those traditional arguments for skepticism about the external world are much too quick. Even if we don’t know whether we’re in a simulation, maybe we can know an awful lot about the external world based on, for example, considerations about its structure.

So it’s philosophically interesting. Maybe it offers us some insights into the character of reality. You don’t need to believe that we actually are in a simulation to think there are interesting conclusions to be drawn from reasoning about what happens if we’re in a simulation. That tells us something about our grasp on reality, and therefore about the relationship between the mind and the world.

But I think it’s also interesting to think about the practically important questions which approach as we begin to spend more and more time in virtual realities and in simulated worlds. When we engage in a virtual reality, are we fundamentally engaging in a fiction? Is it a form of escapism where none of this is genuinely real, or can we in fact live a meaningful, substantive life in a virtual reality, interacting with real objects and real people, a life which can have the kind of value that a genuine life has? I’m inclined to think that yes, we can. By thinking very seriously about simulations and virtual reality, you can actually shed some light on those questions about practical technology.

Arden Koehler: So it seems like there are at least two kinds of situations that we might have in mind when we talk about living in a simulation. One is this really global skepticism: maybe everything is a simulation and we don’t really know it. And the other is simulations that we build that we might someday enter, or even local simulations that already exist, like video games and so on.

And just to make clear to the audience why the first thing is even worth talking about: I think it’s inspired partly, like you said, by these traditional philosophical skeptical hypotheses, like maybe everything is a dream. But there’s also this simulation argument by Nick Bostrom, which I know you’re familiar with, but just in case any listeners aren’t, I thought it’d be useful to go through it really quickly.

So Bostrom’s argument is roughly that if it’s possible to make simulations, and people will want to make simulations someday, and both of those things seem pretty plausible, then we’ll make just an enormous number of them. And if that’s true, then most beings throughout all of history will be simulated beings. And if that’s true, and we have no reason to think that we couldn’t be in a simulation, then we are overwhelmingly likely to be simulated beings ourselves. So that’s just to show why some people think this is really worth taking seriously as something that might really be the case, and not just as something philosophically interesting to think about even if we don’t think it’s the case.
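[The counting step in Bostrom’s argument can be sketched numerically. The following is a toy Python sketch, not Bostrom’s formal version: the function name and all numbers are invented for illustration, and it assumes for simplicity that every world, real or simulated, contains the same number of observers.]

```python
# Toy illustration of the Bostrom-style counting argument.
# All names and numbers here are made up purely to show the structure.

def fraction_simulated(real_civilisations, sims_per_civilisation,
                       observers_per_world):
    """Fraction of all observers who live inside simulations, assuming
    each real civilisation runs `sims_per_civilisation` simulations and
    every world (real or simulated) holds `observers_per_world` beings."""
    real_observers = real_civilisations * observers_per_world
    simulated_observers = (real_civilisations * sims_per_civilisation
                           * observers_per_world)
    return simulated_observers / (real_observers + simulated_observers)

# Even one real civilisation running a modest number of simulations
# makes simulated observers vastly outnumber unsimulated ones:
print(fraction_simulated(1, 1000, 10**9))  # 1000/1001, roughly 0.999
```

The point of the sketch is just that the conclusion is driven by the ratio of simulations to real worlds, which is why the argument’s force depends on the premise that many simulations actually get run.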

So I was just wondering whether you had anything to say on the simulation argument, anything to add, whether you think it’s a good argument or any ways that we might get evidence as to whether we in fact are in a simulation or not.

Robert Wiblin: What’s the probability Dave? Are we in a simulation or not?

David Chalmers: I’m sympathetic with Bostrom’s argument in the sense that I think it’s at least worth taking seriously. If I had to bet on the odds that we’re in a simulation, I don’t know. It’s probably somewhere between 0.01 and 0.99. Sorry. If I really had to go from my gut, maybe 10 to 20%, but who’s to say? In most of my work, I’ve thought a lot about the simulation hypothesis, the hypothesis that we’re in a simulation, and what follows from it.

I haven’t done that much on Bostrom’s argument that we’re in a simulation. But I think it’s clearly an argument worth taking very seriously. And I’m inclined to think that some version of it probably works, at least if you’re clear enough about what the assumptions are and about what the possibilities are.

The way Bostrom makes the argument, it’s not actually an argument that we’re in a simulation. It’s an argument that either we’re probably in a simulation or most populations never end up producing simulations for one reason or another, and those reasons are themselves very interesting. There are presuppositions to the argument: that building simulated worlds of a certain kind is possible, and that consciousness in simulations is possible. But I’m inclined to think that some version of the argument works. In the book I’m writing I have a fairly extended analysis of the argument. There are a few points where I differ from Bostrom. For example, he leans very heavily on the idea that people are running ancestor simulations: simulations which are indistinguishable from our own history. For various reasons, Bostrom’s version works best that way, because then it becomes possible that you yourself are in one of the simulations being constructed.

I’m not sure the version with ancestor simulations works so well, because it’s very far from clear to me that people will be capable of constructing perfect ancestor simulations that duplicate our history exactly. Maybe we just don’t have the right access to the facts about our history to do that. So I would prefer to construct a more general version of the argument that turns on the capacity to build simulated worlds in general, rather than simulated worlds that are exactly like ours.

I think then, to make that run, the reasoning is going to have to look somewhat different from the way Bostrom makes it run in his argument, and some different issues arise. But I think nonetheless an argument in the same style can still go through. Bostrom’s argument, if it worked with ancestor simulations, would say, “There are going to be all of these people indistinguishable from me, and most of them are simulated, and therefore I’m probably simulated”. Whereas a more general version will just say, “There are going to be all these people who are kind of like me in some general respects, most of whom are simulated. Therefore, I’m probably simulated.” It’s a different style of argument, but the general framing is similar, and for many purposes I think the upshot is maybe similar too.

Arden Koehler: In terms of the upshot: let’s say we are living in a simulation. When I say that, I’m making a sort of metaphysical claim. Some people seem to have the intuition that this is a meaningless claim, like, “Well, if I’m living in a simulation, it doesn’t really mean anything, because nothing would be different on the ground level” or something like that. Are you at all sympathetic to that? Do you think that’s wrong? What do you think about that claim?

David Chalmers: Yeah, I think the simulation hypothesis is a perfectly meaningful hypothesis about the world. There is a long tradition in philosophy of saying claims like this might be meaningless if, for example, they’re untestable or unfalsifiable. To some extent, you might say the simulation hypothesis is potentially testable: maybe the simulators could reveal evidence that we’re in a simulation, like showing us the source code for the world.

They could move planets around; they could break all kinds of laws of nature. They could offer us a red pill and we get to see the simulation from the outside. So arguably we could get evidence that we’re in a simulation. But then all we need to do is move to the perfect simulation hypothesis.

That’s the hypothesis that we’re in a perfect simulation, one that completely simulates a world like ours, such that we’ll never get positive evidence that we’re in a simulation. Now, the proponents of testability might say that hypothesis is meaningless. I’m inclined to think that even if that hypothesis is untestable and unfalsifiable, it’s still perfectly meaningful.

And the best way to make that case is to note that we can, in principle, create beings in simulations. Right now, the simulations that we can create are very simple, but it looks like there’s no obvious obstacle, in principle, to creating whole Universe-level simulations, including beings whose conscious experiences are determined by those simulations, and whose conscious experiences will be indistinguishable in principle from those of people outside the simulation.

And once we do that, those beings will be in precisely the situation we talked about. Their Universe will be a simulated Universe, even though they will not be in a position to test this for sure. Now, there may then be simulated beings who are going to say, “Ah, the simulation hypothesis is meaningless. There’s no way to test it.” But we’ll be here looking down at them saying, “Ah, but you are in a simulation”. And the ones among them who say, “Hey, maybe we’re in a simulation” are in fact correct, while the ones who say, “No, we are not in a simulation” are incorrect.

So taking a bird’s eye perspective on the situation, I think we can tell that it’s a meaningful hypothesis. The people who say, in that case, “We’re in a simulation”, are correct and others are incorrect. And then, all we need to do now is to undergo a perspective shift and say, “Well, maybe that situation could be our situation.”

And then I think it’s very hard to resist the thesis that it’s at least a meaningful hypothesis. Maybe it’s not a scientific hypothesis at that point, if you think science requires testability or falsifiability. But there are a lot of meaningful hypotheses about the nature of our world that are not scientific hypotheses.

If you want to call it a philosophical hypothesis, then fine. There is of course the added wrinkle that for many versions of the simulation hypothesis, it could actually be something we could get some evidence for. By contrast, it’s very hard to see how you could get definitive evidence against the simulation hypothesis.

So there is this question as to whether the general version of the simulation hypothesis is truly falsifiable. It’s easier to see how you get evidence for it than against it. The general worry here is that any evidence you might get that we’re not in a simulation could itself be simulated by good enough simulators. So it’s hard to see how any evidence could constitute definitive evidence that we’re not in a simulation. You might say that’s grist for the mill, that this is not a fully scientific hypothesis because it’s not falsifiable. But I think it’s nonetheless very clearly meaningful, for the reasons I was giving: we can have meaningful hypotheses that go beyond what’s scientifically testable.

Arden Koehler: Yeah. So I guess one way that the simulation hypothesis could be meaningful is if it meant that everything was fake, like, “Oh, we don’t live in a real world”. But you think that’s not true. You think that if we live in a simulation, it’s not the case that everything is fake.

Do you think though that we could conclude anything else from the fact that we live in a simulation? Anything else that’s philosophical or practical or quasi-religious which I know sort of comes up in various places.

David Chalmers: Yeah. This is sort of what I’ve been most interested in when thinking about the simulation hypothesis: not so much the question of whether we are in a simulation, but what follows if we are. The traditional attitude is, “If we’re in a simulation then nothing is real and everything is fake; most of our experience is an illusion”. I’ve tried to argue in response that maybe that’s wrong. If I’m in a simulation, all the objects around me are still real and they still exist. But one interesting thing that I think would follow is a conclusion about the metaphysics of our world.

I’ve tried to make the case that the simulation hypothesis should best be seen as a hypothesis about what things in our world are made of at a relatively fundamental level. And I have suggested there’s an interesting connection here to what people call the it from bit hypothesis.

In the foundations of physics, the idea is that everything is made of information at an underlying level. So I think if we’re in a simulation, there really are objects like chairs and tables. They’re made of molecules, which are made of atoms, which are made of quarks, which are at some level made of bits.

There’s a level of bits, an algorithmic level, underlying the familiar levels of physics. This is a version of what sometimes gets called the it from bit hypothesis. It’s not that the chair isn’t real; it’s just that the chair is made, at some level, of information or of bits. And it may turn out that one level up, in the next Universe, those bits are realized by something else.

So then you get something like the it from bit from it hypothesis, and maybe the levels chain further still. But I think you get an interesting metaphysics of information out of the simulation hypothesis. With respect to religion: yeah, this is another interesting consequence. Under some ways of understanding it, by the very definition of a simulation, if we’re in a simulation, there’s a simulator. There’s someone who set up the simulation, and that being can be viewed as a creator of our Universe, responsible for making this Universe come into existence.

So that’s at least a creator of our simulation. Furthermore, this creator may have properties like being, in many cases, all powerful with respect to our simulation, and all knowing with respect to our simulation. So you’re getting a few of the properties of a traditional God.

Arden Koehler: You pointed out at some point that if the simulator were really all knowing, if they were able to predict what was going to happen because they knew the future, then why would they make that simulation? Like maybe–

David Chalmers: Yeah total omniscience would kind of undermine the point of it, except maybe as entertainment.

We do watch TV shows twice, but it probably works better when the simulator doesn’t know it all. There are many simulations where the simulator is not necessarily all powerful or all knowing with respect to us, but they know a lot and they’re very powerful. They’re probably not going to be all good, though.

There’s no particular reason to think that simulators are going to be all good, and they’re also not going to be the creator of the whole Universe. They’re not going to be a cosmic God. They’ll merely be a local god. So I’d say yeah, they’re halfway to being godlike on various dimensions, which is interesting.

So in the book, I actually make the case that we should regard the simulation hypothesis as equivalent to what I call the “it from bit creation hypothesis”: the idea that our Universe was created by somehow arranging bits the right way. God started the Universe by saying, “Let there be bits, so arranged”.

Should one erect a religion on this? No, I don’t think so, because I don’t think anything about this indicates that the creator is in any way worthy of worship. It could just be another hacker in the next Universe up.

But it has had the effect of making me at least a little bit more sympathetic to the possibility that our Universe might have been created, a possibility I was not terribly sympathetic to before.

Robert Wiblin: Maybe if they’re not worthy of worship, they’re at least worthy of groveling to, and asking for favors and things like that.

David Chalmers: You want to at least get them to treat you well.

Robert Wiblin: Exactly. Yeah. Get on their good side. I guess one implication that some people have suggested, if you’re really bought into the idea that we’re in a simulation, is that it could change our expectations about what kinds of things we’re going to observe.

Because you can at least do some probabilistic reasoning about why they would be simulating things, and what sorts of things they would want to simulate. In particular, you might think they’re more likely to simulate interesting times in history, just as we have a lot of crime procedural stories, but not a lot of hour-long TV shows where people just sit at their desks doing work and not doing anything interesting.

So you’d say they’re interested in simulating times that are perhaps particularly unpredictable, or that have important consequences in the long term, either for entertainment or research purposes. So if we thought we were probably in a simulation, maybe we should expect to see really big events in our lifetime with a greater probability than we did before. Do you buy that?

David Chalmers: Yeah, all this requires a whole lot of speculation about the motives of simulators in building simulations, which I think is probably extremely difficult for us to do, so I don’t put too much credence in speculation of that kind. But certainly entertainment is one possible reason for building a simulation. You might think, though, that that’s only going to require a relatively limited number of simulations.

After all, people tend to read one book or watch one movie at a time. Now, our superintelligent successors, maybe they want to watch all the possible movies simultaneously, I don’t know. But I’d be inclined to think, at least modeling the simulators on us, that it’s quite likely the great majority of simulations will be something like simulations for scientific or research purposes. Why? Because when you do things for scientific or research purposes, you don’t just make one at a time. Then you’d have to worry about the replicability crisis.

You’ve got to make n as high as possible. So I think, for research purposes, people are going to be running a million Universe simulations overnight and seeing what happens. Statistically, maybe it’s overwhelmingly likely that we’re in one of those simulations where nobody’s actually paying much attention while it’s going on; they’re just coming back and gathering the statistics in the morning. For that purpose, it may not be particularly important that it be an entertaining or interesting simulation. People will want to do historical simulations too. For example: rerun the election of 2016 a million times over and see what happens. And maybe people will sometimes tweak the parameters, just to let it run with an outrageous counterfactual event. Like, let’s suppose Trump won the election and see what happens there. So maybe there could be some statistical bias in favor of occasional outrageous things happening, for historical purposes.

Arden Koehler: Yeah. Do you think that… What’s the most educational thing? It’s normalcy, right?

Robert Wiblin: So maybe they want representativeness.

Arden Koehler: So like you might think that then it will be likely that normal things will happen because that really tells them what life is like.

Robert Wiblin: But I guess, inasmuch as there’s not a lot of… What’s the term for this? When we were hunter-gatherers or something, and we’re all just hunting bison and eating them and so on, there’s just not a lot of different ways that it can play out. And so you run a hundred of them and you’re like, “Wow, this is the same every time”.

So for things where there’s not a lot of randomness in the outcome, where history can’t go flying off in important, different directions, it seems like a smaller sample might do.

David Chalmers: Maybe they’ll want to run some mild counterfactuals too. They’ll simulate the world roughly as it is. If you’re doing a historical simulation, I think historians are very interested in counterfactuals, but often they’re interested in relatively mild counterfactuals: “What would have happened if Hitler had not tried to invade the Soviet Union?” and so on.

Arden Koehler: What’s a more dramatic counterfactual?

David Chalmers: More dramatic is: what if a total weirdo won the presidential election? Yeah.

Arden Koehler: I thought you were thinking about the laws of nature or something.

David Chalmers: Yeah. But running counterfactual laws of nature is a very natural thing to want to do. Physicists will be running simulations of different laws of nature all the time. They set up these laws of nature and see, okay, what happens? Does the Big Bang lead to a Big Crunch? Biologists will be running simulations too: how many times does life develop? If you tweak the parameters, how do you get to life?

How do you get to intelligence? It’s very easy to see scientists running all kinds of variations on laws of nature just for research purposes.

Robert Wiblin: Another implication that some people have drawn, which I guess is potentially more decision relevant, is that we might expect the Universe not to last as long.

So if we’re currently in the fundamental, real world and not in a simulation, then there’s every reason to think that the Universe is going to continue to play out for billions and trillions of years into the future, so we have a lot of time to play with. But if you think that you’re in some kind of research simulation, it seems like there’s some decent chance that it will be shut down before we reach a billion years into the future.

That might give someone a bit more reason for urgency. You might even think it could possibly be shut down in a hundred years, because they’ll have figured out the thing they wanted to learn about the 21st century, and so we’ll be done. And this gives people a reason to try to do more to improve the world right now, rather than thinking about these very long time scales. Do you think that’s a sound inference to draw?

David Chalmers: Again, it all turns on this massive speculation about the motives of simulators, and there could be so many shutdown conditions. There is, of course, the one shutdown condition: “Ah, shut things down when they figure out they might be in a simulation.”

On that way of thinking about it, okay, we’d better stop talking about this now! But I don’t know. I think there are so many possible termination conditions that I’m not sure I’d get particularly worried about it happening in the next hundred years. There is the Doomsday-style argument that, in general, whether we’re in a simulation or not, we should think that we are very typical beings, so possibly the Universe will end soon. That would also apply if we’re in a simulation: how likely is it that we’d be this early on in the simulation if it goes on forever and ever and ever? You might want to update on that towards the world ending soon. But I think that applies equally whether you’re in a simulation or not.

Robert Wiblin: Yeah. The Doomsday Argument is a huge can of worms, so I’ll provide a link to the paper for listeners who want to learn more about that one.

David Chalmers: I’m not endorsing it.

The problem of consciousness [1:13:01]

Arden Koehler: Let’s turn now to the thing that you’re most famous for talking about: the nature of consciousness. We’d like to focus on the implications of ideas in philosophy of mind, and the uncertainties surrounding those ideas, for practical ethics. But first we want to make sure that we and all of our listeners are on the same page about what we’re discussing when we talk about consciousness, because the word can mean a lot of different things to different people.

So when you talk about consciousness, what are you talking about and how is it related to intelligence and self-consciousness and how is it not related?

David Chalmers: Yeah, so people mean a lot of things by consciousness, but what I mean is roughly the subjective experience of the mind and the world. Roughly how it is from the first person point of view to think, to feel and so on.

My colleague Tom Nagel wrote this wonderful paper called “What is it like to be a bat?”. It said: we don’t know what it’s like to be a bat, but presumably there is something it is like to be a bat. Whatever it’s like to be a bat, that’s the bat’s consciousness. It’s how things are, or how things feel, from the first-person perspective of the bat.

You look at a brain and you’ll see it processes information in various ways. It responds to stimuli; processing information leads to a behavioral response. That’s how a brain looks objectively. But there’s also how it is subjectively. I’m seeing you and having a visual experience, with certain images in my mind; I’ve got certain sounds.

I might be experiencing thoughts. So consciousness is basically this stream of first-person experience. To distinguish this from other kinds of consciousness, philosophers often use the term phenomenal consciousness, as opposed to, say, access consciousness, which is a matter of objectively having access to some information. Self-consciousness, which you mentioned, is about being conscious of yourself.

I think that is one aspect of phenomenal consciousness. Broadly, we have this sense of being conscious of ourselves, but that’s just one very specific aspect of consciousness. We’re conscious of things in the world. When I look at an object and I see a red square, that’s just vision, that’s perception; I’m conscious of the object, but that has a subjective experiential quality for me. So consciousness is much more than just consciousness of the self. You asked about intelligence, and I think of intelligence as, roughly speaking, a measure of behavior, of functional capacity: your ability to do certain things, to solve certain problems, to achieve your ends by taking appropriate means, and so on. Intelligence itself is complicated, but I think of it as very much on the objective and behavioral side, whereas consciousness is very much on the subjective side. So maybe you could have a system which is really quite intelligent but has no subjective experience at all.

And likewise, there may be systems with subjective experience that are not terribly intelligent. In essence, one is basically subjective, the other objective.

Arden Koehler: And the same goes for self-consciousness, in your view? You could have something that was phenomenally conscious that wasn’t self-conscious, or maybe something that was self-conscious but not phenomenally conscious. Maybe we wouldn’t use that term in that case: it has a model of itself, but isn’t phenomenally conscious.

David Chalmers: Yeah. Self-consciousness itself kind of decomposes. There’s phenomenal self-consciousness, which is being phenomenally conscious of yourself, having an experience of yourself, and that can happen. That’s one aspect of phenomenal consciousness.

But then you can have a system which is conscious of itself in a non-experiential sense: one which has access to information about itself and can report information about itself. You have AI systems that can monitor their own states and talk about them. You might think of that as a form of self-consciousness, but it’s not phenomenal self-consciousness; that would be on the objective side of self-consciousness. One could have that kind of self-consciousness, in principle, without being phenomenally conscious. And likewise, I think you could probably be phenomenally conscious without having that kind of self-consciousness. Actually, it’s arguable whether you could be phenomenally conscious without having any kind of self-consciousness at all.

But generally there are at least states of phenomenal consciousness that don’t seem to have terribly much to do with being conscious of oneself. Like when you’re conscious of the people around you and of the world and of a problem you’re thinking about.

Robert Wiblin: So you’re famous for drawing attention to what you call the “hard problem of consciousness”, which is the question: why does it feel like anything to be a person, or to be anything at all? It seems like we could just be going around like robots, taking all of the actions that we’re taking, but with no first-person perspective; it would feel like nothing to eat an apple. But I guess there are a lot of people who want to deny that there is a hard problem here.

That there is anything to explain; that there is anything mysterious about there being a first-person perspective. I’m sure there are many of them among our listeners. There seems to be a bit of a stream of this among rationalists, and I often find with natural scientists that I just can’t get them to accept that there’s anything strange about consciousness existing. Have you found any way of getting through to people who are inclined to deny that there’s anything interesting going on here?

David Chalmers: That’s interesting. I think we need some sociological data here. My experience is that most people can at least get a sense of the problem. So, when I’ve taken surveys on this in various contexts, not terribly rigorously for the most part, but it typically seems to come out that the majority of people see that there’s a hard problem of consciousness, although it’s certainly not universal. So if your experience is that most people have a dominant reaction to deny it, I’d be surprised, but okay, we need surveys on that.

Robert Wiblin: I wouldn’t say it’s a majority of people, but it does seem like there’s something about the ideology of the natural sciences which wants to deny that there’s something going on here.

It’s almost the kind of thing where you need a PhD in a particular field to believe something so crazy as to think that there’s nothing strange about–

David Chalmers: Yeah, but I also think there are these sociological effects. We got this on the PhilPapers Survey: most people think that most people think a certain thing, even though most people actually think the opposite. Maybe part of the ideology of science is that there’s no hard problem, so most people think that most people deny it, when in fact most people accept it. My sense, and I may be wrong because I’m biased in my exposure, is that even your average, say, neuroscientist or AI researcher can pretty much appreciate the problem.

Now, certainly there’s a substantial minority who reject the problem. But even among those who reject the problem, probably at least half think that, intuitively, there’s a problem, but that we should reject the intuitions. I would say that’s at least being on board with the problem.

Maybe I should actually say something about the problem, which is basically this: how do you get from physical processing in the brain and its environment to phenomenal consciousness? Why should there actually be first-person experience at all? Looking at the brain from the objective point of view, you can say, “Okay, you can see where there would be this processing, these responses, these high-level capacities”. But on the face of it, it looks like all that could go on in the dark, in a robot, let’s say, without any first-person experience of it. So the hard problem is just to explain why all that physical processing should give you subjective experience.

I contrast this with the easy problems, which are roughly the problems of explaining behavioral capacities and associated functions, like language and learning, responding to stimuli, integrating information, making global reports. We may not yet be able to explain how it is that humans do those things, but we’ve got a straightforward paradigm for doing so.

Find a neural mechanism or a computational mechanism and show how it can perform this function of producing the report or doing the integration. Find the right mechanism, perform the function, and you’ve explained the phenomenon. But while that works so well throughout the sciences, it doesn’t seem to work for phenomenal consciousness.

Explain how it is that the system performs those functions, does things, learns, reports, integrates and so on. It seems prima facie that all that could go on in the absence of consciousness. So why is it accompanied by consciousness? That’s the hard problem. Now, among people who reject this, I think there are different things going on with different people.

One certainly legitimate move is to say, “I at least accept there’s an intuitive gap here, but somehow we should reject the intuitions”. This can then be spelled out in various ways, the most interesting of which, I think, is that this whole idea of consciousness is an illusion: a pathology whereby our cognitive systems lead us to believe, introspectively, that we have these special properties of consciousness, even though we don’t. That’s a move I respect. I think it carries very strong costs. You might have to deny that we have these experiences that seem basically undeniable. But it’s at least an interesting move. On the other hand, if someone comes and says, “I just don’t have the intuitions. I’m not even sure I have the phenomenon that you’re talking about”. Then, I don’t know. I haven’t gotten that reaction terribly often. There are people who claim to be zombies; I think that’s a fairly unusual reaction. I don’t know. You said you’ve talked about this with people in the rationalist community.

Which of those reactions do you think is the most common?

Robert Wiblin: I mean, I think I’ve maybe slightly misrepresented the view. I guess there are some people who are drawn to this kind of materialist reductionist view, or to illusionism, and it seems like they view it as much more intuitive to say that there’s nothing odd about the fact that we feel that there’s something there.

Whereas to me that just seems like a huge cost to pay, to say, “Well, actually it’s all just an illusion, this thing you think is your phenomenal experience”.

David Chalmers: I think that’s certainly reasonable. It’s certainly the case, and entirely reasonable, to be drawn towards a materialist and reductionist point of view and to think all this has to be reductively explainable one way or another, so it’s all going to be physical in the end. Maybe I disagree with that in the end, but I think that’s an entirely reasonable point of view, to want it all to be reducible. But that’s at least consistent with saying there’s a problem here that we have to solve.

And I think maybe the dominant view that I’ve come across, say from your average scientist, is to think, “Yes, we want to be materialists. There’s gotta be a materialist explanation at the end of the day, but we don’t have it yet. Hopefully someone will figure it out one of these days”, and that’s an entirely reasonable point of view.

Another point of view is to say, “Yes, I see the intuitions, but I think we ought to dismiss them as delusions”. I think, okay, that’s also a respectable point of view, bu