By the mid-1940s, a man named Alan Turing was already developing one of the first designs for a stored-program computer. During the war years just before that, Turing had been at the forefront of the British project that broke Enigma, the cipher machine used by the German military and its U-boats, drawing on skills he learned studying mathematics and philosophy at King’s College, Cambridge.

Among the general public, Turing is most often remembered as the codebreaker who helped win World War Two (some go so far as to say he single-handedly saved Britain), especially now that a film, The Imitation Game, has dramatized that part of his life, with Benedict Cumberbatch playing Turing. But it’s important to remember that Turing wasn’t just a war hero; he was also a pioneer of early computer science, a passion rooted in the work in mathematics and philosophy he did both before and after the war. In 1950 Turing published a paper called “Computing Machinery and Intelligence” which addressed a question I posed in my last post: can a computer have a human mind (or, to rephrase, can a computer be sentient)?

Turing proposed that the best way to find out would be to test it against a real human in what he called an “Imitation Game”. The game would be set up like this: a person (it wouldn’t matter who, as long as they could carry on a conversation) would be placed in a sealed room in front of a typewriter and told to write a short letter to the person in the next room. They wouldn’t be told who that person was, and they wouldn’t be able to see them, so the idea would be to start asking questions to learn about them. The letter would then be carried to the next room, the other person would respond, and the two would go back and forth like this for a while. Think of it like a conversation over text message on your smartphone.

The twist is that the person in the other room is actually a computer programmed to respond to the letters exactly the way a human would. If the computer can hold a conversation with the human writer without the human ever noticing it isn’t a person, then, Turing argued, that would prove the computer thinks like a human. This experiment is now called “The Turing Test,” and it’s the main thing we in philosophy of mind take into account when thinking about the possibility of strong AI.
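To make the shape of the test concrete, here is a toy sketch in Python. It is my own illustration, not Turing’s protocol: the `run_imitation_game` function, the `CANNED` table, and the machine respondent are all invented for the example (though the sonnet exchange loosely echoes the sample dialogue Turing gives in the 1950 paper). The only thing that matters is that the judge sees nothing but text.

```python
# A toy sketch of the test's shape, not Turing's actual protocol. The judge
# only ever sees typed text and must decide, from the transcript alone,
# whether the hidden respondent is a person or a program.

def run_imitation_game(questions, respond):
    """Collect a text-only transcript between the judge and a hidden respondent."""
    return [(question, respond(question)) for question in questions]

# A deliberately shallow machine respondent: canned answers plus a stock dodge.
CANNED = {
    "How are you today?": "Quite well, thank you.",
    "Please write me a sonnet on the subject of the Forth Bridge.":
        "Count me out on this one. I never could write poetry.",
}
machine = lambda question: CANNED.get(question, "Could you rephrase that?")

judge_questions = list(CANNED) + ["What did you dream about last night?"]
for question, answer in run_imitation_game(judge_questions, machine):
    print(f"Judge: {question}")
    print(f"Hidden party: {answer}\n")
# The judge's verdict (person or program?) is the entire test.
```

Notice that nothing in the setup says how the hidden respondent has to work; the judge’s guess rests only on what comes back through the slot.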

Turing believed that if an artificial intelligence could win his “Imitation Game,” it would prove itself to be a functionally human mind. This is because a human conversation demands two kinds of logic: 1. the logic of reaction, responding to a prompt with an appropriate answer, and 2. the logic of creativity, a way of creating new kinds of responses. Every living creature has the first kind of logic, instinct, built into it. The fight-or-flight response is a good example: most animals, when presented with danger, will either try to attack or intimidate the threat, or run away in order to survive. Fight or flight is a logical response to an exterior problem. The second kind of logic, creativity, is found only in humans and a few other primates. Rather than responding to the same prompt in the same way every single time, we think of new ways to respond, and we do it in every conversation we have with each other. For Turing, that creativity is what makes the human mind unique, which is why a computer that passes his test by carrying on a conversation with a human ought to be considered a functionally human mind.

John Searle, a modern philosopher of mind, disagrees. His point is that simply because a computer can appear to be a human mind doesn’t necessarily make it one. To make this point Searle imagined a thought experiment involving a scenario much like Turing’s. He imagined himself in a room that is entirely sealed except for a single envelope slot, with a massive book (call it a Rulebook) of untranslated Chinese questions and responses, numbered and sorted for his convenience. Every ten minutes or so, a letter slides into the room through the slot with a sentence written on it in Chinese. Searle’s job is to look through his book to find that sentence, copy the response the book shows him onto a sheet of paper, and send that response back out through the envelope slot. The result is Searle, without any actual understanding of Chinese himself, carrying on a written conversation in Chinese with someone outside the room. Essentially, Searle is playing out a Chinese Turing Test.
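To see how little has to be going on inside such a room, here is a minimal sketch of the Rulebook as a program. Again, this is my own illustration, not Searle’s: the entries are invented, and a real Rulebook would need vastly more of them. Nothing in this code understands Chinese; it only matches incoming symbols to outgoing symbols.

```python
# A minimal sketch of the Rulebook as a program: a lookup table from incoming
# sentences to outgoing sentences. The entries are invented for illustration.
# Nothing here understands Chinese; symbols are simply matched to symbols.

RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫小明。",      # "What's your name?" -> "My name is Xiaoming."
}

def chinese_room(incoming_letter: str) -> str:
    """Find the sentence in the book and copy out the listed response, nothing more."""
    return RULEBOOK.get(incoming_letter, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # A fluent-looking reply, produced with zero understanding.
```

How convincing the room looks from the outside depends entirely on how many entries the book has, which is exactly the point the next paragraph turns on.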

What Searle’s “Chinese Room” thought experiment shows is that the computer (or the man in the room) doesn’t have to have any actual understanding of the language of the conversation in order to produce responses that make sense. It simply follows the program that was written into it, and the quality of its responses is determined by how comprehensive the program (the book Searle has) is. For Searle, the ability to function like a human doesn’t in fact make the machine human, just as the man in the “Chinese Room” doesn’t in fact speak Chinese despite being able to carry on a conversation in Chinese.

At the center of the debate over the Turing Test is the question of the function of the human mind. If the way our mind functions (responding to things and creating new ways to respond to things) is what makes us human, then it seems possible that we can create human machines. But if Searle’s reconstruction of the Turing Test is accurate, then it seems like what makes us human can’t be the way our mind functions, but must be some other feature. Now, unfortunately, I have to throw a wrench into this already complicated debate. The question of whether or not computers can be human is the perfect mirror image of the question of whether or not humans are simply computers.

Can we prove that our minds aren’t exactly the same kind of thing as Searle’s “Chinese Room”? All this time we’ve been simply assuming that humans really do have the second of the two kinds of logic I wrote about: the ability to create new responses. How can we prove that our minds don’t contain the real-life equivalent of the Rulebook in the “Chinese Room” thought experiment? At the moment, I don’t think we can. We cannot think of something without being caused to think about it, and we cannot think in a way that is completely foreign to us.

For example, stop for a moment and try to think of something you have never seen, heard, or read about before. Or better yet, try to describe an elephant in a language you don’t speak. If you’re finding this impossible, it’s because the information you need to do either isn’t in your brain yet. It hasn’t been written into your Rulebook, the mental language you rely on to interact with the world around you.

The human mind operates on cause and effect; as far as I can tell, we cannot produce a particular effect without being prompted by a particular cause. That sounds an awful lot like a system that relies on a Rulebook like the one in Searle’s “Chinese Room,” which would have to be updated with new information before it could respond to a completely new kind of prompt. In fact, when you put it that way, the human mind sounds an awful lot like a computer program.