Steven Pinker is an experimental psychologist who conducts research in visual cognition, psycholinguistics, and social relations. He grew up in Montreal and earned his BA from McGill and his PhD from Harvard. Currently Johnstone Professor of Psychology at Harvard, he has also taught at Stanford and MIT. He has won numerous prizes for his research, his teaching, and his nine books, including The Language Instinct, How the Mind Works, The Blank Slate, The Better Angels of Our Nature, The Sense of Style, and Enlightenment Now: The Case for Reason, Science, Humanism, and Progress.

Steven Pinker: I think that the arguments that once we have super-intelligent computers and robots they will inevitably want to take over and do away with us come from Prometheus and Pandora myths. They're based on confusing the idea of high intelligence with megalomaniacal goals. Now, I think it's a projection of alpha-male psychology onto the very concept of intelligence. Intelligence is the ability to solve problems, to achieve goals under uncertainty. It doesn't tell you what those goals are. And there's no reason to think that the concentrated analytic ability to solve problems is going to mean that one of those goals will be to subjugate humanity or to achieve unlimited power. It just so happens that the intelligence we're most familiar with, namely ours, is a product of the Darwinian process of natural selection, which is an inherently competitive process.

Which means that a lot of the organisms that are highly intelligent also have a craving for power and an ability to be utterly callous to those who stand in their way. If we create intelligence, that's intelligent design. I mean our own intelligence designing something, and unless we program it with a goal of subjugating less intelligent beings, there's no reason to think that it will naturally evolve in that direction, particularly if, as with every gadget we invent, we build in safeguards. I mean, when we build cars we also put in airbags, we also put in bumpers. As we develop smarter and smarter artificially intelligent systems, if there's some danger that one will, through some oversight, shoot off in some direction that starts to work against our interests, then that's a safeguard we can build in.

And we know, by the way, that it's possible to have high intelligence without megalomaniacal or homicidal or genocidal tendencies, because we do know that there is a highly advanced form of intelligence that tends not to have that desire, and they're called women. It may not be a coincidence that the people who think, well, if you make something smart it's going to want to dominate, all belong to a particular gender.