The Goal of Philosophy Should Be to Kill Itself

After giving a talk on computers at Princeton in 1948, John von Neumann was met with an audience member who insisted that a “mere machine” could never really think. Von Neumann’s immortal reply was:

You insist that there is something a machine cannot do. If you will tell me precisely what it is that a machine cannot do, then I can always make a machine which will do just that!

The problem with most philosophy is that it is imprecise, and this leads to centuries of confusion. Do numbers exist? Depends what you mean by “exist.” Is the God hypothesis simple? Depends what you mean by “simple.” Can we choose our own actions? Depends what you mean by “can” and “choose.”

Many philosophers try to be precise about such things, but they rarely reach mathematical precision. On the other hand, artificial intelligence (AI) researchers and other computer scientists have to figure out how to teach these concepts to a computer, so they must be 100% precise.

What does it mean, precisely, to say that one hypothesis is simpler than another? The answer (lower Kolmogorov complexity) came not from philosophy, but from computer science. What does it mean, precisely, to proportion one’s beliefs to the evidence? The answer (Bayes’ Rule) came not from philosophy but from mathematics, and especially from implementations of Bayes’ Rule in AI (Bayesian networks). What does it mean, precisely, to say that one thing causes another? Once again, the answer (Pearl’s counterfactual account) came from computer science.
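To make two of these precise notions concrete, here is a minimal sketch in Python. The numbers and strings are invented for illustration: the `posterior` function is just Bayes' Rule for a binary hypothesis, and compressed length via `zlib` is a crude, computable stand-in for Kolmogorov complexity (which is itself uncomputable).

```python
import random
import string
import zlib

def posterior(prior, likelihood, likelihood_alt):
    """Bayes' Rule for a binary hypothesis H:
    P(H|E) = P(E|H) * P(H) / P(E),
    where P(E) = P(E|H) * P(H) + P(E|~H) * P(~H)."""
    evidence = likelihood * prior + likelihood_alt * (1 - prior)
    return likelihood * prior / evidence

# Updating a 1% prior on evidence that is 80% likely under H
# but only 10% likely otherwise:
print(round(posterior(prior=0.01, likelihood=0.80, likelihood_alt=0.10), 4))

def compressed_length(s: str) -> int:
    """Compressed size in bytes: a rough proxy for how 'simple'
    (algorithmically compressible) a string is."""
    return len(zlib.compress(s.encode()))

# A highly patterned string compresses far better than a noisy one,
# mirroring the intuition that simpler hypotheses have shorter descriptions.
random.seed(0)
patterned = "ab" * 500
noisy = "".join(random.choice(string.ascii_lowercase) for _ in range(1000))
print(compressed_length(patterned) < compressed_length(noisy))
```

Nothing here captures the full formal theories, of course; the point is only that these notions are precise enough to be run on a machine at all, which is the standard the essay is holding philosophy to.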

For millennia, philosophy has been the study of questions for which we don’t really know how to get the answers. Once we know how to get answers about a set of questions, we start calling that set of questions a science. We stopped philosophizing about the heavens when we invented the telescope and started doing astronomy, and we stopped philosophizing about biology when Mendel and Darwin and others discovered how to rigorously test biological theories against experience.

So philosophy has retreated to corners of thought too abstract to be answered so directly. And now, philosophy may again shrink as computer scientists do a better job of precisely defining concepts than philosophers do.

Philosophy may soon become the study of questions that are (1) not close enough to factual data to be answered by the sciences and/or (2) not yet formalized mathematically by computer scientists.

Philosophers can talk all they want about ontology and meaning and truth and free will, but I offer them the following challenge:

Do you know, precisely, what you are talking about? If so, show me precisely what you mean by programming it into a computer. If you know precisely what you are talking about, you will be able to explain it to a computer.

I suspect most philosophers, confronted with this challenge, will simply admit they don’t know precisely what they are talking about. I philosophize about ethics, and I certainly admit that I don’t know precisely what I’m talking about. To mathematically formalize a theory of ethics, I would need not only to be much smarter than I am and a capable programmer; I would also need a completed cognitive science and the new mathematics required to build Friendly AI.

Much philosophy – naturalistic philosophy, anyway – has this flavor. Consider a question like “What is desire?” The landmark work of the last decade on this question was philosopher Tim Schroeder’s Three Faces of Desire. Half the book is neuroscience. The other half tries to clear up the concepts involved and make them more precise, though the book leaves things far short of mathematical precision, a limitation Schroeder himself seems to lament.

If philosophers want facts about the world or even facts about values, the way forward is to figure out how to hand over such questions to the scientists. If philosophers want to make conceptual progress, the way forward is to figure out how to hand over those concepts to computer scientists so they can make them precise enough for a computer to understand them.

Thus, the way forward for philosophy of mind is to hand over the field to cognitive scientists and AI researchers. The way forward for ethics is to hand it over first to linguists and psychologists of language to figure out what moral claims might mean, and then to other scientists (perhaps cognitive scientists) to figure out the nature of whatever it is in the natural world that moral claims refer to. (And if moral claims can’t refer to anything in the natural world, then moral talk should be recognized as fictional talk.) The way forward for aesthetics looks much the same as for ethics. The way forward for philosophy of language is to figure out how to hand it over to linguists and computer scientists. The way forward for epistemology is to hand it over, also, to cognitive scientists and AI researchers – and that is already happening.

The goal of philosophy should not be to continue to give vague and mysterious answers to difficult questions. The goal of philosophy should be to figure out how to hand over its factual questions to scientists, and its conceptual questions to computer programmers, so that these questions can be answered.

The goal of philosophy should be to kill itself.

Of course, even if everyone agreed with me about the goal of philosophy, I doubt that philosophy would ever kill itself. But by continually handing its questions off to the sciences, philosophy should approach its own death asymptotically.