Hello. This is a response for Andy, who asked a question I've been asked before and which I'm sure will be asked again: where exactly does the disagreement between David Deutsch and Sam Harris lie in the second podcast? That is, the episode of Sam Harris's Waking Up podcast where he interviewed David Deutsch for a second time, this time specifically about The Moral Landscape (I think that was Sam's third or fourth book). During that podcast, one of the first things David said was that it's very difficult to articulate precisely what the disagreement is, because they come from such different places in terms of epistemology. The vocabulary David uses, as prosaic and common as it might be, has a very different meaning in the Popperian sense than it does for many, many academic philosophers. This has the consequence that anyone trained in academic philosophy, strangely enough, is often particularly poorly placed to understand the Popperian notion of knowledge, which is, ironically, in some ways closer to the common-sense notion of what knowledge is. The academic version of knowledge, following on from Plato, often has overtones of certainty, justification, foundation, and belief, whereas the Popperian idea is far more fallibilist and anti-foundationalist. It's more about trying to guess what is true and then checking whether or not your guesses are correct.
So this idea that they were both speaking different languages is something David flags early on, and it's humorous in retrospect, because when you listen to the entire podcast, as I've done a few times now, that flag David plants about the fact that they're speaking different languages highlights every single error that appears throughout the conversation: David will make a point and explain precisely what the disagreement is, and Sam doesn't understand. It's no fault of Sam's; it's not that he's incapable of understanding, but he has an epistemology, following Plato, on which it's possible to have a foundation, and indeed desirable to have one. And although Sam can make noises that sound as though he's not an infallibilist thinker, as though he isn't striving for a certain kind of certainty, what he says nonetheless reveals precisely what his internal psychology is on these matters. Although he could probably give a good definition of fallibilism, ultimately (and this happens on many, many topics) he's not a fallibilist. He's definitely a rational person, he's very reasonable, and he's trying to reach the truth like we all are. But as we will see (I've written out a few examples), he explicitly reveals the fact that he's not a fallibilist: he thinks it's possible to finally get the answer, whether in morality or in science. So we'll get there eventually, and I'll attempt to explain exactly where this chasm of difference between the two lies. This is why it's very difficult for the majority of people who hear the podcast to understand where the disagreement is. When they hear words like "knowledge" or "foundation," they're hearing something different to what somebody who has read Popper, who has read Deutsch, understands these words to mean. My last video about "The Beginning of Infinity" was on chapter 4, "Creation."
There's a section there where David speaks about how knowledge is created inside the human mind. We don't know all the details, but we know something: we know it can't be spontaneously generated. Now, that might seem like a prosaic, mundane, vanilla thing to say, but it means the bucket theory of mind is false. It means that when you're using words, you cannot download what you think those words mean into the person with whom you're having a conversation. You can't download it from your brain into their brain. They've got a certain understanding of what the words you're using mean, and you've got a certain understanding of what the words you're using mean. You can try to explain yourself using other words, but those other words might not do the job either. And this conversation is a wonderful example of exactly that: David says upfront that the words he's using have a different sense for him than they have inside Sam's mind. He flags it and tries to explain it again and again throughout the conversation. And although Sam says he's willing to go there with David, willing to grant him certain things, he never really does. So the error never quite gets corrected, and therefore, at the end of the conversation, Sam isn't sure where the disagreement is. The majority of people have an alternative epistemology, something other than Karl Popper's view of knowledge: for example, they think that knowledge is justified true belief. They think you need to begin with a foundation, and on that foundation you accumulate knowledge, you build it up. This is an anti-critical vision of how knowledge is created. In the Popperian view, you simply have problems, you can start anywhere at all, and you attempt to solve those problems when you have them.
When you have ideas in conflict with one another, you resolve them using a critical method. It's a completely different vision: instead of accumulating and building, you're refining, cutting things down, and establishing new ideas or improving existing ones. So there's no reason to think of knowledge as a base you begin with, upon which you construct towers, with the tower finished at some point (which is also part of that vision). In the Popperian view, it is literally an infinite process, and it's not easy to visualize that. I guess people want a visual sense of what's going on, but we're talking about the abstract growth of knowledge, not the construction of buildings. Also, very early on in the conversation (and this is another hint), there are problems with the process here, problems of psychology or even linguistics. The problem is that David very early on says, okay, let me explain what the disagreement is, and then begins to preface what he's about to say. But before he gets to the main point, Sam interrupts, as happens in conversations, and so it never quite seems as though David gets to explain precisely what the disagreement is, except insofar as it's embedded in various other parts of the conversation. This again will leave a typical listener with the impression that David never actually did what he said he was going to do, namely articulate where the disagreement was. He does do so, but it occurs many minutes after he says "now I'm going to explain what the disagreement is," because Sam interrupts, as any natural conversation has interruptions. Okay. So the Popperian vision is that knowledge is always conjectural: it is guessed.
So when we're talking about what will increase the wellbeing of conscious creatures, which is what Sam is concerned about as, if not the foundation, then the purpose of morality (the purpose of morality being to maximize the wellbeing of conscious creatures): if we are going to attempt to maximize the wellbeing of conscious creatures, then we're guessing what their states are going to be. And we will come to see later on that Sam sees the wellbeing of conscious creatures as intimately tied to their biology, that he doesn't take seriously the notion of substrate independence, let alone the universality of human beings. Human beings have a universal mind, and so their wellbeing cannot depend upon their neurology; it cannot depend upon the particular makeup of the neurons inside their brains. So instead, just to preface: what morality really consists of is solving moral problems. And in order to solve moral problems, we have to conjecture explanations about what might improve things. They can always be false; we can always criticize them. And that includes any starting point we might have: if we think we need to start with the wellbeing of conscious creatures, that could change, especially insofar as we refine what we mean by conscious creatures. Now, another thing I got the sense Sam might have been hinting at early on is that he wants to defend the thesis that moral truth exists. And I'm with him there, and I think David's with him there as well. Moral truth absolutely exists, and it exists in a similar way to the way mathematical truth exists. At some point, I think David does indeed mention that there's this kind of objectivity to mathematical truth and to moral truth, neither of which is dependent upon the truth about physical reality. Physical reality is a separate kind of objectivity to mathematical reality.
There can be truths in mathematical reality that do not depend upon what's going on in physical reality. One such truth is that the decimal expansion of pi is infinite, yet it is impossible in physical reality to represent it anywhere, because there simply aren't enough atoms in the universe, or even the multiverse. You would need a literally infinite number of particles, an infinite number of different states of the universe, in order to represent the decimal expansion of pi. But the infinite decimal expansion of pi exists out there in abstract mathematical reality. It's a thing. So the other point here is this distinction involving abstract, objective, ontological reality that's out there, whether in terms of mathematics or in terms of the laws of physics. Whatever the true laws of physics actually are, they exist. They absolutely exist, but our knowledge of those laws of physics, like our knowledge of mathematics or of morality, is conjectural. So this is the difference between ontology and epistemology. Ontology is what is true in reality. Now, what is true in reality, we don't know. All we have are fallible explanations of that reality, and the fallible explanations of that reality are not that reality. This is true of physics, where the laws of physics absolutely exist. They're really real. We don't know what they are exactly; we have approximations to them. For example, we used to think that the law of gravity was Newtonian: Newton's law of gravity, which looks like F = Gm1m2/r^2. We now know that it's false, though it works within a certain domain. It's extremely useful for solving particular problems, but ultimately it's false, and you can do experiments to show that it's false, that it fails in certain regards. The same is true of mathematical truth: mathematical ontological truth is out there. What we have are explanations of that ontological truth. So we have epistemic claims, okay?
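As an aside, the point that Newton's law is a useful approximation within a domain can be made concrete. The following sketch is my own illustration, not something from the podcast; the masses and distance are standard reference values for the Earth and Moon:

```python
# Newton's law of universal gravitation: F = G * m1 * m2 / r^2.
# A useful approximation within its domain, though we now know
# (via general relativity) that it is strictly false.

G = 6.674e-11        # gravitational constant, N*m^2/kg^2
m_earth = 5.972e24   # mass of the Earth, kg
m_moon = 7.348e22    # mass of the Moon, kg
r = 3.844e8          # mean Earth-Moon distance, m

def newtonian_force(m1, m2, r):
    """Magnitude of the Newtonian gravitational force between two masses."""
    return G * m1 * m2 / r**2

F = newtonian_force(m_earth, m_moon, r)
print(f"{F:.3e} N")  # roughly 2e20 newtons
```

This kind of calculation is good enough to plan spacecraft trajectories, yet experiments such as the precession of Mercury's perihelion show the law fails outside its domain, which is exactly the sense in which our best explanations approximate, but are not, the ontological truth.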
These are just things we write down, things we understand in our brains about that reality. It doesn't matter what the domain of inquiry happens to be, whether it's science or mathematics or morality: there are truths in each of these areas. They are part of reality, but because we are fallible humans, we don't have direct access to any of them. We're fallible. So when Sam says that moral truth exists, it's not entirely clear whether he thinks we have direct access to it, and at times I think he thinks we may be able to get that truth in hand. I'm not convinced by that. Again, I'm a fallibilist, as David is a fallibilist, so this is one of the areas of disagreement: people make noises on the subject as if to say we're nearly there, we're about to find out what the truth happens to be, or in some distant future we're going to know what the truth is. So I'll just move on to foundationalism. Foundationalism is the idea that you begin with a foundation and then on that foundation you build up the rest of your knowledge. It's the idea that you need to begin somewhere: you've got to start with your axioms, your premises, and from those you can derive everything else you need to know. This is a Platonic mistake, the idea that you just begin here and then you justify, justify, justify, and by justifying long enough you eventually reach the end, because you've found everything that needs to be understood. This is completely the opposite, in many, many senses, of what Popperian epistemology says about how knowledge is constructed. In reality, there's no reason to begin at any particular place, because we don't have access to absolute truth. And because of that, all we can do is guess what might be the case in order to solve any particular problem. We don't have to worry about what the foundation is. There's no bedrock.
And even though there's a final reality out there, for many, many reasons we can't ever get to that final reality. This is part of the conception of "The Beginning of Infinity": you can always correct errors; there is an infinite amount to learn; we are fallible. So you can never be sure, whenever you've discovered something that appears to solve a problem, that you're not going to find errors with it. Now, with Sam, when it comes to morality, he wants two kinds of foundations. He wants to talk about morality as being about the wellbeing of conscious creatures, and then there's this idea that in order to establish an objective morality, in order to begin somewhere, we should consider the thought experiment of the worst possible misery for everyone: the worst possible misery for all conscious creatures. He says that if anything is worth avoiding, it's worth avoiding that. And we can agree: if there's anything worth avoiding, it's worth avoiding that. But there's no need for this foundation. There is no need to begin there, because it doesn't help us solve any particular moral problem. It's a response, a critique, of either biblical ways of thinking or of moral relativism. Moral relativism is the idea that your morality depends upon the culture from which you come, or the family in which you find yourself, or your particular frame of reference: your particular psychology determines what your morality is. And Sam is right to want to critique moral relativism, this idea that we shouldn't criticize other cultures or other people for their moral beliefs. If there happens to be a culture out there somewhere that thinks that if little girls learn to read they should be stoned to death, then who are we to criticize that culture? And as Sam says in his very powerful TED talk on this topic: who are we NOT to criticize such a culture?
So Sam wants to respond to the moral relativists by saying: well, let's consider the worst possible misery for everyone. Now, a moral relativist would say that no such state exists, but Sam is appealing to people to say you can't go with the moral relativists, because the worst possible misery for everyone is objectively bad, and therefore what is objectively good is any movement away from that state. This cannot be a foundation for morality, because, as David points out later in the conversation (and as I think I also observed in my response to Sam Harris many years ago, in "The Moral Landscape" challenge), as soon as you get any distance away from the worst possible misery for everyone (I think David says one millimeter away), then what? That foundation we begin with is of no use whatsoever in deciding what to do next, which is the sphere of morality: what do we do next? What should we do now? And that's if you're a millimeter away. Here in the year 2018 on planet Earth, we are a lot further than one millimeter away from the worst possible misery for everyone. So the worst possible misery for everyone is a critique of the idea that there's no difference between good and bad, and as a critique of that, it's a good critique. But it doesn't get you answers to moral problems, because where we are now, we have an infinite space of possibilities before us, and what we do next depends upon a whole raft of things about what we value. What we should value is also part of morality. So Sam has this unalterable foundation he wants to begin with (the only purpose of which, I would say, is as a critique of relativism), and the other foundation is the wellbeing of conscious creatures, insofar as that defines the domain within which we want to conceive of morality.
This has problems for human beings, because there cannot be a dependence upon biology. And yet this is the implicit assumption operating behind it all, and we'll get to exactly why in a moment. But I just want to fixate for a moment on these two foundations: one, that morality is about the wellbeing of conscious creatures; and two, that the difference between good and evil can be articulated by considering the worst possible misery for everyone. So we've got these two immovable things. Now, this is a misconception, because it makes the same mistake religious thinkers make, which is that you need to begin with a dogma. You need to begin with an unalterable foundation, and upon these two pillars you build the rest of your knowledge. These things cannot be criticized. But this is false. This is wrong, even if your intentions are good; religious people have good intentions too. The idea that we need to begin with the Ten Commandments, or with the fact that God exists, or that love exists, or that Jesus zoomed up to heaven, or that Mary was a virgin, et cetera: people have good intentions in wanting to enshrine a dogma, in wanting to enshrine a foundation. Now, Sam says he's willing to concede that this could be wrong, that this could be fallible. However, he refuses to admit that morality could be about anything other than conscious creatures, and he says again and again that this is his foundation. So why? Why can't morality be purely about conscious creatures? Now, Sam has a pretty forceful argument in The Moral Landscape: if we were to consider a universe in which there were no conscious creatures, then that by definition would be a universe without value. Okay, fine. But morality is a sphere of ontological truth, so those ontological truths actually exist.
They are out there in abstract reality, and they occupy that reality even if we can't find out what they are perfectly (and we can't). They are independent of the experiences of conscious creatures. They might be about the experiences of conscious creatures, but they are independent of conscious creatures, and in particular, they cannot be about the neurology of conscious creatures. And they can't be about the neurology of conscious creatures because human minds are universal. Even a non-universal mind, if such a thing is possible (for example, a cat's conscious states, if a cat has conscious states), and especially the universal mind of a person, could in principle, as David Deutsch has shown, be downloaded onto a computer. We could be put into a Matrix at that point. Once our mind, which is a kind of program, is put into a silicon computer, or whatever the computers of the future happen to be, then it cannot possibly be the case that morality is about anything to do with the biology of the human brain, because we won't have biological brains anymore. We'll be universal explainers inside some kind of silicon computer. There's a point about half an hour in (I can't remember exactly where now, but I had to write myself a note, because I thought this was a very valuable insight) where David says that the criterion by which institutions should be judged is how good they are at resolving disputes between people without violence, without coercion. He's not saying he knows what those institutions are, but this is an absolutely crucial point about politics and economics and morality generally. It means that the scope of government, for example, is extremely limited: if we want political institutions that work and that are moral institutions, they cannot be coercive.
So consider things like the welfare state, and people with good intentions who want to replace the welfare state with something that's an incremental improvement, say a universal basic income. Nonetheless, this requires some amount of coercion: if Joe over here is not earning much money, but he is earning just enough that you think he should hand over some of his money to Mary because of universal basic income, then that will require a certain amount of coercion. The only way to avoid that is to allow Joe to give Mary charity, to do this willingly and voluntarily. But the people who argue for universal basic income, like the people who argue for welfare, or for socialism, or communism, or any other system in which the government determines where wealth gets distributed, where your wealth gets distributed, want to implement a coercive system. Deutsch's criterion here is that a political institution needs to be judged by how good it is at resolving disputes between people without resorting to coercion. So if you can't reason someone into something, then arguing that we need to use force, especially to extract money, is clearly inferior. This is tied to fallibilism, and it's tied to this idea of what human beings are: that we're universal explainers, that we can come to understand each other. If you have an idea and it's good, and I'm a reasonable person who's a universal explainer, then you will be able to use words and argument to explain to me why your idea is better. Now, in discussions about economics and government, it seems to me that very, very often we end up in a situation where one side throws up their hands and says: well, I cannot convince you; nevertheless, we need to use force here.
We need a mechanism whereby money is taken from these people and given to those people, et cetera. Okay, that's a diversion. David further adds (and this is an important quote) that there's no limit to the possibility of removing evil via knowledge, and that all evils are caused by a lack of knowledge. So he's saying that whenever there's a problem, whenever there is evil or suffering, what we need to bring to bear is knowledge. We need to bring some kind of creative inspiration to that situation in order to find a solution. Coercion can't be the thing. Now I'm going to read a direct quote; I've written down something Sam says, word for word, at the 49:50 mark: "Imagine this future of a completed science of the mind where we not only understand the brain basis or the computational basis of every possible experience, but we can intervene as completely as we would want. And we now have this machine that I can put on your head and we can dial in any possible conscious state. It's just this perfect experience machine." So that's two sentences: a very long sentence and then a short one. I just want to emphasize something about Sam Harris, whom I admire. I think he's got the best podcast out there, I've read all of his books, and I think he's a fantastic thinker. But the language he uses is no accident. It's anti-fallibilist, it's foundationalist, it is not Popperian. And even though he says at various points in the conversation that he's willing to concede the fallibilist point, his underlying epistemology, which shapes his psychology and therefore the way in which he comprehends the world, is here in stark contrast with his explicit statements.
So the explicit statements, "yes, I'm a fallibilist," "yes, I'm willing to admit that I could be wrong about this," are very, very different from what comes out when he's just speaking naturally. Again, he says "completed science," as if we can finally get the final answer, as if we will get to a point in science where there will be nothing further to discover: a completed science. If he didn't think that was possible, he wouldn't put the word "completed" in there. He would just say: imagine this future science of the mind where we not only understand the brain, et cetera. But he says "completed." He also says "every possible experience," as though that were possible. But as David goes on to explain, it's not possible to have every possible experience written down in an algorithm, enumerated in a computer. It simply isn't possible, because it presumes that you can predict the content of future knowledge. For example, the experience of what will be discovered in a hundred years' time: that's a possible experience. The experience of discovering something that no one has yet discovered: that experience can't be put in there. So you can't have this machine Sam wants, where you can put it on your head and dial in any possible conscious state, and it can't be a perfect experience machine. Okay. So how would a Popperian rewrite this? Again, I'm saying it's no accident, the way in which he tries to conjure this. One way you might rewrite it: imagine this future of a science of the mind where we not only understand the brain basis or the computational basis of experience, but we can intervene in any way we like, and we now have this machine that I can put on your head and we can dial in conscious states. It's just this experience machine. Okay. That would be fine.
I think that would work in a Popperian sense. But because Sam thinks you can have this completed science of the mind, that you can have these machines with all these perfect states and just pick the one you like the most, he thinks you can get to a peak. And these peaks on the moral landscape, he thinks, are absolute peaks upon which you can make no further improvement. Of course, later on he will say: oh no, I didn't mean that; I mean you can make improvements. David interjects at that point, along the lines of what I've said: he says something like, the vast majority of these conscious states that could be in this machine we will never know, because we won't have the knowledge. The infinite majority will always be unknown. So there's a big difference between Sam and David here. Sam says we can dial in any possible conscious state, and David says the overwhelming, infinite majority lies beyond that. There's a big difference between saying you can have every possible conscious state and denying it, holding instead that the infinite majority will forever remain unknown. That's a huge disagreement in terms of quantity: it's the difference between zero and infinity. And David uses the example that there is the experience of knowing tomorrow's scientific discovery, which we can never download. But Sam comes back and says: oh no, he didn't mean that these conscious states would be finite. I think that's a fudge. Either the computer can replicate all the possible states or it can't. And if it can't, then his machine cannot be a perfect experience machine. It can't be based on a completed science of the mind in which you understand all the ways the computational states, or the brain states, relate to conscious experience.
The reason is this: if we managed to find a way to capture the mind inside silicon, we'd know what the algorithm for creativity is, what the algorithm for a human brain is. But that doesn't mean we would know how every single computational state relates to subjectivity, because there can still be an infinite number, an uncountably infinite number, of conscious states. Being able to write down the algorithm for creativity doesn't mean we know all the possible outputs of that algorithm. If it's a creative algorithm, we already have instances of it: they're running on our brains right now. Even if we knew what the algorithm for creativity in our own brain is, that doesn't mean we know what its output is going to be, because presumably part of what makes it that algorithm is that it's a knowledge creator, and no knowledge creator can predict the growth of knowledge. That's simply a fact of epistemology: as soon as you create something new, you're going to find errors in it, and which errors you find and correct depends upon your preferences and your free will. (This is probably another disagreement between David and Sam.) But Sam gives this idea that he's wrong, that he's wrong about his thought experiment, that you can't have this completed science with all the possible conscious states, rather short shrift. He wants to go back to feelings, and this happens a lot in the conversation. He wants to go back to considering how people either feel good or don't feel good, how you could feel better and how you could feel worse. So he just wants to consider: okay, imagine you could feel like Mozart did during his best moods, or like John von Neumann during his best moods. What would it feel like to be Mozart composing a symphony? Must that not have been a wonderful state to be in?
And David points out (again, David doesn't quite use these words) that this idea of anchoring morality to pleasure versus pain is misconceived. When Sam talks about the worst possible suffering for everyone, I think he really has in mind mental torture, or pain, or some sort of physical suffering: that you could turn up all the pain receptors in a conscious creature, and that would be the worst possible misery, and at the other end of the dial you could just maximize pleasure. So he starts to talk about pleasure later, but he can't decouple feelings and sensations from morality, which is what David attempts to do here when they start talking about what it's like, what it feels like, to be Mozart. Sam says, you know, it must be a very happy kind of experience. But David says: well, there's pleasure, and then there's joy, and the joy Mozart would have had is the joy of solving problems in music. So in what way could you download what it's like to be Mozart? Well, perhaps you could download the sensation, but it wouldn't exactly be the joy Mozart felt, because the joy Mozart felt had a lot to do with the fact that he had just solved some problem in music, some problem in composition. That has a certain sensation associated with it, sure, but the joy, the enduring feeling of having solved the problem, can only come from having solved the problem. Now, one might want to say: well, you could download that experience, the experience of being Mozart and solving the problem. But then you would be Mozart. If you're downloading his entire mind into your own, you're no longer yourself. You aren't yourself having the experience Mozart had; you are Mozart, because you are solving the problem he solved, you are valuing the problem in the way he valued it. Everything about you is then tuned to being Mozart.
So it can't be the case that you can simply download sensations, because happiness is a product of doing something. Happiness is always about solving problems, and suffering is a condition in which you're thwarted in some way, in which you're unable to solve your problems, or a particular problem, and it's upsetting. So it's about problems. Morality is about problems. So David is arguing that you can't download the sensation of what it was like to be Mozart or John von Neumann or anyone without recreating that individual person. But Sam insists that you can have a form of happiness that is independent of problem-solving. And so he mentions pleasure a lot, and he mentions pain. He talks about how drugs or medication or certain states during meditation, like enlightenment, can give you a form of happiness or pleasure that is independent of problem-solving. And everyone would agree that an opiate high might be pleasurable, but this is a temporary thing. David points out that the experience of most heroin addicts would be precisely this: maybe at first, when people try a drug, it's interesting and new because you're having new sensations. But as time goes on, if people are using the drug all the time, it becomes increasingly boring. There's nothing new. And so they might become addicted, and in fact that becomes a real problem: the pleasure is no longer coupled to happiness. The pleasure is now coupled to unhappiness. There is a deep divide here, a real chasm, another real chasm, between Sam arguing frequently that morality is about feelings in some way, the wellbeing of conscious creatures, where his every example is about some kind of state of happiness or state of pleasure or state of pain, and David wanting to say that instead, morality isn't really about that. It's about solving moral problems.
And so these are two different ways of viewing what morality is about: feelings versus problem-solving. Now, Sam attempts a thought experiment with the Matrix. The interesting part here for me, and I can't remember the exact point of the thought experiment, was where Sam said that the morality of the people within the Matrix, if you're inside this Matrix and you're just having a great time, isn't relevant, because they're not real people. David says, well, then that means that they're not creative, so it wouldn't be a pleasurable experience to be inside of this Matrix-type computer world. Because consider what you'd want inside this Matrix-type heaven; let's say we could make a Matrix that was heaven-like. Sam is saying that you could do anything you like with the people that are there, because the people that are there aren't real people. And David says, well, then that means they're not creative, by his definition of what a person is: a person is a universal explainer, a person is a creative thing. Now, if these people in the Matrix aren't creative, then they can't collaborate with you on any of your problems. They can't really help you with any of your problems, because they're not able to contribute to your problem situation; all they have is a finite set of responses they can give you, like a non-player character inside of some computer game. They're not really going to be able to help you any more than Wikipedia can help you. And if it's a real problem in your personal life, Wikipedia may or may not be helpful; really what you want are people. Ultimately we want other people. We need to collaborate in order to get our problems solved, and our problems are important insofar as there are other people around us whose problems we want to solve too.
We want them to help us; we need other people. Now, David says that because these other non-real people aren't creative, we'd eventually notice, if we were inside of this heaven-like Matrix, that they aren't responding like normal people do. And Sam, in response to this, goes, "right." He seems not to get it, or doesn't buy that argument. And to me, this is another chasm of difference between them: the conception of what a person is. Sam kind of thinks, and this is the prevailing conception, that we have computer programs that are maybe artificially intelligent in some sense, and we have people, and maybe we'd want to agree that an artificial general intelligence is a person, and that there's kind of a continuum between the two, and then maybe there's something further beyond artificial general intelligence. But this is simply false. You have only two states. It really is a binary thing. People don't like this kind of black-and-white thinking; they prefer shades of grey, of course. But not in this situation: it really is the difference between black and white here, because we have things that are not creative and things that are. There are not partial degrees of creativity. Either you can tackle a problem because you're a universal explainer, or you can't. And so this is a difference as well: you have people and things that aren't people. You have general-purpose explainers and things that are not general-purpose. We are general-purpose explainers and we want to interact with other general-purpose explainers, other people. They are what's valuable in this world. They are the ones that are likely to be conscious. They're creative, they've got free will (Sam won't like that), but creativity is the thing that makes a universal explainer a universal explainer, and you quickly notice if something's not a universal explainer.
It's not going to be able to give back to you in the same way. It's not worthy of the same kind of love and compassion and fun times that universal explainers, other people, are. So there's a real difference here, a real chasm of understanding once more, between Sam's idea and David's about the centrality of people. Okay, then we get into a section that is a little bit of a diversion, I think more of a distraction from the meat of the disagreement. Sam talks about meditation and its utility, and you know, I'd agree with Sam: meditation is a very useful, pleasurable thing with a whole bunch of benefits. But he says that that state is a state in which you can have happiness without solving problems. And I profoundly disagree, and I was so glad that at this point David jumped in. When Sam talks about meditation, he always talks about the subjective-feeling side of it. And yes, there is a subjective-feeling side of it, but that doesn't mean you can't be wrong about your own subjective feelings. What David contributes here is a very Deutschian response to meditation, even though he says he's very experienced in the area himself. He says, well, the pleasure that one might get from meditation might be because you've damped down your conscious state, and maybe in damping down your conscious state you allow your unconscious mind to work. Your unconscious mind is real; it's there, and it's attempting to solve problems as well. But sometimes the conscious and the unconscious probably have interactions where there can be obstacles or blocks, where your conscious mind is just getting in the way of your unconscious mind. And meditation might just cause your conscious mind to relax for a while, to go into the background for a while, and allow your unconscious mind to do its thing.
And then it can solve problems unconsciously, so that when you go back to your conscious mind, suddenly you feel a lot more creative. And Sam admitted that this is indeed subjectively the case, that he's had the experience of feeling far more creative after having meditated. So he's admitting that the pleasure of meditation really is cashed out in what happens after the meditation: namely, that problems begin to be solved after the meditation at a rate greater than before. And he said that this is one of the reasons that culture has taken up meditation as an important tool, because people recognize this: if people are stressed or feeling bad or depressed, then meditation can be a very good prescription for that sort of thing, because it allows the problems that are normally there to be dropped. But you don't simply drop them such that they disappear. You drop them such that your unconscious mind, during the meditative state, can do its thing, whatever that thing is, so that at the other end, once you come out of the meditative state (maybe it's days, maybe it's weeks, whatever), your conscious mind is able to work better on solving those problems. So it does come back to problems. There isn't a state of pleasure, with respect to morality, that is completely independent of problem-solving. And that's a key point. What I'd say there about the meditative state, and I've tried to mention this before when Sam's talked about it, is that a lot of what's going on in it, this feeling of the divestment of "I", the feeling that you're no longer an "I" but rather a witness of your conscious experience, as Sam would say, that you really do feel a kind of distance between you and objective reality, or even between you and your thoughts (you look at thoughts as objects, fine, great), is only to say that that state is very difficult to describe.
It's exactly the same as the difficulty of trying to describe what the color blue looks like to you, to someone else who's looking at the same sky, let's say. It's the difficulty of trying to articulate what qualia are like. We don't know how. And that's all that that is: this inexplicit kind of knowledge that we have. We know we have subjective knowledge. I know what the sky looks like, but I can't put it into words. It's inexplicit, and the same is true of the meditative state. This is a mystery for now, but it's just a problem: how do we do it? Okay, well, we can't do it now. I don't see it as being some sort of reason to think there's something massively spiritual or hugely mysterious about this area. It could be the case, but it could just turn out to be a mundane problem, one that no one has thought of the solution to yet. Okay. So again, David says that moral theories should be approached like scientific theories. They don't need foundations. There are a lot of moral theories out there: Kant's categorical imperative, Rawls's fairness, stuff that comes out of the Bible like the golden rule, et cetera, et cetera, whatever your moral theory happens to be, or indeed Sam's wellbeing of conscious creatures. All of these principles, these ideas, these theories, should be seen as critiques: critiques of each other, or of any other theory that someone proposes, or of a solution that someone proposes. They shouldn't be seen as foundations from which you begin to build up everything else. And David says this of all the famous theories, from utilitarianism to Kant's categorical imperative to belief in natural law, et cetera, et cetera.
In response to this, Sam says: let me recategorize my foundation. And he goes on. So he hasn't heard what David has said about foundationalism, or insofar as he's heard it, he's heard something different from what David was trying to impart. The message that the sender sends is not the message that the receiver is guaranteed to get; it depends upon error correction, and there's no guaranteed method of error correction that ensures your idea gets into someone else's mind intact. We don't know how to do that. We always make mistakes. And so I think this is another example: Sam just says it off the cuff. It's part of his psychology, part of his vocabulary, part of his way of viewing the world, of thinking about morality and epistemology: that you have a foundation, that you need a foundation. He doesn't understand that you don't need one. That's why he continues to return to it and just wants to re-explain it. And so he says to David: let me just re-explain my foundation. You're not accepting my foundation, and I know you're against foundationalism, David, but let me re-explain my foundation. I can't be the only one who sees the problem there. And just to emphasize, this comes from chapter four of "The Beginning of Infinity", where David really articulates this: to understand stuff, to learn, we human beings have to conjecture explanations. Sam, on the other hand, is working within a foundationalist, sort of anti-fallibilist framework, although he explicitly says he isn't. To me it sounds very much like a person who says something like: I'm not religious, really, I'm not religious; but now let me explain to you the divinity of God and how Jesus ascended to heaven and how I go to mass every Sunday.
So on the one hand the person is saying they're not religious, and on the other hand, with every utterance they make, they are announcing to the world the ways in which they're religious. And so, yeah, Sam is saying he is a fallibilist and doesn't need these foundations, but he's explaining again and again and again what his foundations are. This is the big difference, the huge difference of opinion that they have: one person is arguing that a foundation is important, and the other is saying that's the very mistake you're making. And the first one then goes back and says, well, if you don't like that foundation, let me explain another one. And David is saying, no, you don't need a foundation; you're not quite understanding what I'm saying. And Sam goes, well, okay, if you don't like that foundation, let me re-explain what the foundation is and see if that will convince you. And so this is why there's not as much progress on that front as there might have been: they're not agreeing on what the word means. I don't know exactly what Sam is hearing at that point, because at one point Sam even says something like: you need a foundation; even Popperian science needs a foundation. And David tries to say, no, that's wrong. The idea that we need to start anywhere in particular is false. Everything is criticizable. You don't need to start in a particular place, down here or up there or anywhere else. What you have are moral problems, and you need to approach those moral problems with a critical eye, in the same way as in science. If David has a problem in quantum computation, he really doesn't need to look at what the foundations of all of science, or all of physics, happen to be.
Now, there might be ways in which physics can critique his solution to a particular problem in quantum computation. Let's say, for example, he decides to create some algorithm to run on a quantum computer, but the algorithm requires the quantum computer to have switching speeds that exceed the speed of light. Relativity would be a critique of that: it wouldn't be possible, and so the algorithm could be ruled out. But it's not as if he always has to begin with some particular set of facts and build up from there whenever he has a problem. Namely, if there's a problem anywhere in science, then you solve that problem without being too concerned about what else is beneath it. Again, there's a point where Sam gets into feelings, and so David comes back with: consider someone like Isaac Newton. Isaac Newton would have been quite happy when he was solving problems. No doubt the state of mind he was in was just as happy as anything people today have, just as happy as when David invented the theory of quantum computation or Edward Witten solves a problem in string theory. Newton would have had that when he found the universal law of gravitation, let's say. However, his state of comfort would have been God-awful. It would have been terrible. His clothes would have itched, his food would have been terrible, his bath would have been cold, he would have been cold. There would have been a whole bunch of reasons why he would have been uncomfortable compared to us. And so it can't be in terms of comfort or pleasure, or the absence or presence of pain, that this idea of happiness is cashed out. Our happiness is kind of independent of those things. It's about solving your problems, whatever your problems happen to be. And it doesn't need to be this snobbish thing where you need to find the next universal law of gravitation; it could be anything in your own personal life. All problems are parochial.
It's just what you happen to be interested in. But so long as you're solving your problems, and so long as those problems are interesting and worthwhile to you, that's what will make you happy. They don't need to be profound, and happiness doesn't need to be anchored to creature comforts. In the far distant future, people will look back and think how uncomfortable we are. Here I am sitting in this silly chair; maybe in the distant future people will look at a video like this and go, oh my God, they used to sit in chairs like that, how ridiculous, while they themselves float around on clouds. Those poor people! Sam says at some point that there might be aliens out there that have available to them states of mind, states of pleasure, that we do not have available to us; the biology of our mind might foreclose certain states. And again, this is a profound misunderstanding of David Deutsch's discovery of what a person is: a universal knowledge creator. Given that our minds are universal, there can be no such state, because our minds can access any state. It doesn't depend upon our biology. Sam insists at this point in the conversation that it does depend on [inaudible] biology, but it doesn't and it can't, because our minds are substrate-independent. We could one day be downloaded into computers. So we are not foreclosed from having certain experiences. Our minds are already universal; we don't even need more processing speed or more memory in order to have them, and if we did need more memory, well then, let's hook ourselves up to a computer. So I find Sam's claim completely implausible. And David points out that discoveries in morality must always create more problems. Sam suggests that, well, this is just a local wrinkle. Again, Sam has this conception that in the distant future we will be in a less problematic state than we're in today, and it's just wrong.
He thinks that we'll be almost there, that we'll just be ironing out the wrinkles. Again, this is an anti-fallibilist notion; he doesn't really take seriously the beginning of infinity, that even then we'll be at the beginning of infinity. Even then, when he thinks we'll be almost at the peak, as David says, we will just see more problems from that point. There'll still be existential problems. How can we get rid of all the existential problems? How can we stop all those stars exploding? Well, maybe we can, but once we do that, how can we stop the universe expanding, and so on. The problems will always be there. We'll always discover something new, and when you discover something new, you end up finding a whole bunch more problems. And there's this funny moment where David says he doesn't see a reason why there should be a limit on the size of mistake we can make. And Sam laughs at that, but I think Sam's kind of laughing nervously, like a reflex, because he doesn't have an answer to it. He kind of gets it. He's a very intelligent person, and he understands that indeed there's no maximum size on the mistakes we might make, even in the future. And so things could go terribly wrong even then. And so I think he almost gets there and realizes that, oh, you can't really be on a peak, because if you're on a peak then you've kind of solved everything, and that's not possible. You can still make mistakes. We're fallible people. With ten minutes to go in the conversation, Sam is still asking what the disagreement is. He mentions morality as being a navigation problem. David says we'll change what we mean by "better" and "worse". So even if we think right now that we should go this way rather than that way, in the future we might realize that that was a terrible error. We can change our minds about what is right and wrong.
And David also says that neuroscience can't be very relevant, and Sam agrees, but David explains again that it's because the brain is universal. And I'm not sure that Sam quite understands what that means. This is a subtle point; this is very difficult. I don't know that many people have really comprehended the significance of it. The brain is universal. People are universal. Human beings are people precisely because they're universal knowledge creators and universal explainers. They're creative; that's what it means to be a person. David points out again that if we were in a Matrix, like Sam imagined earlier on, then no moral question would have anything to do with science, because that Matrix, this program that we'd all be uploaded into, wouldn't have physical laws that are instantiated in physical reality; instead it would be based upon simulated physical laws. So any decision you made there wouldn't have anything to do with science. It would have to do with the laws instantiated inside of the program, whatever the programmer had decided to put in there. So if you think that morality is anchored to science, then you'd have to admit that at that point morality becomes anchored to the rules of the program, the rules of the computer game or the Matrix that you're in. And that can't be true, because morality has an objective reality beyond the Matrix, beyond our physical reality as well. Even though truths about physical reality might at times be relevant to morality, morality can't be derived from the laws of physics or from the laws of neuroscience or anything else. Sam says he wants to talk more about that another time, and I'd love to hear that conversation. You should tune in to just the final two minutes of that particular podcast, by the way, for a good laugh, a good joke.
So I hope that went some way toward teasing out what the differences are. I think there were a number of differences there, not least of which is foundationalism, along with this conception of what morality is: that the moral theories we often talk about, like natural law and utilitarianism and the golden rule, etc., should really be seen as critiques, critiques to use in order to find out what is wrong with other theories or other proposals within the domain of morality. Morality is about solving moral problems, just as science is about solving scientific problems. I hope that was useful.