In this final counterpoint on the bet, Ray Kurzweil responds to Mitch Kapor’s arguments against the possibility that an AI will pass a Turing Test by 2029.

Published April 9, 2002 on KurzweilAI.net. Click here to read an explanation of the bet and its background, with rules and definitions. Read why Ray thinks he will win here. Click here to see why Mitch Kapor thinks he won’t.

Mitchell’s essay provides a thorough and concise statement of the classic arguments against the likelihood of Turing-level machines in a several-decade timeframe. Mitch ends with a nice compliment comparing me to future machines, and I only wish that it were true. I think of all the books and web sites I’d like to read, and of all the people I’d like to dialog and interact with, and I realize just how limited my current bandwidth and attention span are with my mere hundred trillion connections.

I discussed several of Mitchell’s insightful objections in my statement, and I augment those observations here:

"We are embodied creatures": True, but machines will have bodies also, in both real and virtual reality.

"Emotion is as or more basic than cognition": Yes, I agree. As I discussed, our ability to perceive and respond appropriately to emotion is the most complex thing that we do. Understanding our emotional intelligence will be the primary target of our reverse engineering efforts. There is no reason that we cannot understand our own emotions and the complex biological system that gives rise to them. We’ve already demonstrated the feasibility of understanding regions of the brain in great detail.

"We are conscious beings, capable of reflection and self-awareness." I think we have to distinguish the performance aspects of what is commonly called consciousness (i.e., the ability to be reflective and aware of ourselves) versus consciousness as the ultimate ontological reality. Since the Turing test is a test of performance, it is the performance aspects of what is commonly referred to as consciousness that we are concerned with here. And in this regard, our ability to build models of ourselves and our relation to others and the environment is indeed a subtle and complex quality of human thinking. However there is no reason why a nonbiological intelligence would be restricted from similarly building comparable models in its nonbiological brain.

Mitchell cites the limitations of the expert system methodology and I agree with this. A lot of AI criticism is really criticism of this approach. The core strength of human intelligence is not logical analysis of rules, but rather pattern recognition, which requires a completely different paradigm. This pertains also to Mitchell’s objection to the "metaphor" of "brain-as-computer." The future machines that I envision will not be like the computers of today, but will be biologically inspired and will be emulating the massively parallel, self-organizing, holographically organized methods that are used in the human brain. A future AI certainly won’t be using expert system techniques. Rather, it will be a complex system of systems, each built with a different methodology, just like, well, the human brain.

I will say that Mitchell is overlooking the hundreds of ways in which "narrow AI" has infiltrated our contemporary systems. Expert systems are not the best example of these, and I cited several categories in my statement.

I agree with Mitchell that the brain does not represent the entirety of our thinking process, but it does represent the bulk of it. In particular, the endocrine system is orders of magnitude simpler and operates at very low bandwidth compared to neural processes (which themselves utilize a form of analog information processing dramatically slower than contemporary electronic systems).

Mitchell expresses skepticism that "it’s all about the bits and just the bits." There is something going on in the human brain, and these processes are not hidden from us. I agree that it’s actually not exactly bits, because what we’ve already learned is that the brain uses digitally controlled analog methods. We know that analog methods can be emulated by digital methods, but there are engineering reasons to prefer analog techniques because they are more efficient by several orders of magnitude. The work of Caltech Professor Carver Mead and others has shown that we can use this approach in our machines. Again, this is different from today’s computers, but will be, I believe, an important future trend.

However, I think Mitchell’s primary point here is not to distinguish analog and digital computing methods, but to make reference to some other kind of "stuff" that we inherently can’t recreate in a machine. I believe, however, that the scale of the human nervous system (and, yes, the endocrine system, although as I said this adds little additional complexity) is sufficient to explain the complexity and subtlety of our behavior.

I think the most compelling argument that Mitchell offers is his insight that most experience is not book learning. I agree, but point out that one of the primary purposes of nonbiological intelligence is to interact with us humans. So embodied AIs will have plenty of opportunity to learn from direct interaction with their human progenitors, as well as to observe a massive quantity of other full-immersion human interaction available over the web.

Now it’s true that AIs will have a different history from humans, and that does represent an additional challenge to their passing the Turing test. As I pointed out in my statement, it’s harder (even for humans) to successfully defend a fictional history than a real one. So an AI will actually need to surpass native human intelligence in order to pass for a human in a valid Turing test. And that’s what I’m betting on.

I can imagine Mitchell saying to himself as he reads this, "But does Ray really appreciate the extraordinary depth of human intellect and emotion?" I believe that I do, and I think that Mitchell has done an excellent job of articulating this perspective. I would put the question back and ask whether Mitchell really appreciates the extraordinary power and depth of the technology that lies ahead, which will be billions of times more powerful and complex than what we have today.

On that note, I would end by emphasizing the accelerating pace of progress in all of these information-based technologies. The power of these technologies is doubling every year, and the paradigm shift rate is doubling every decade, so the next thirty years will be like 140 years at today’s rate of progress. And the past 140 years was comparable to only about 30 years of progress at today’s rate of progress because we’ve been accelerating up to this point. If one really absorbs the implications of what I call the law of accelerating returns, then it becomes apparent that over the next three decades (well, 28 years to be exact when Mitchell and I sit down to compare notes), we will see astonishing levels of technological progress.
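The "140 years" figure above can be reproduced with a simple back-of-the-envelope model. This is a sketch under one assumption (mine, not spelled out in the essay): the rate of progress doubles at the start of each coming decade, so the three future decades run at 2x, 4x, and 8x today's rate.

```python
# Hedged sketch of the "30 calendar years = 140 years of progress" arithmetic.
# Assumption: the paradigm-shift rate doubles every decade, applied discretely,
# so future decades proceed at 2x, 4x, 8x, ... today's rate of progress.

def equivalent_years(decades: int) -> int:
    """Years of progress at today's rate packed into `decades` future decades."""
    return sum(10 * 2 ** d for d in range(1, decades + 1))

print(equivalent_years(3))  # 20 + 40 + 80 = 140
```

A continuous (integral) version of the same doubling assumption gives a somewhat smaller total, so it is the discrete per-decade reading that yields the 140-year figure quoted in the text.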