We asked several roboticists, AI researchers, SF writers, and other techie types a simple-minded question: is a Terminator-like scenario possible? And if so, how likely is it? Their answers are below:

David Brin

David Brin is a science fiction and non-fiction writer. Among his most influential books are The Uplift War, Earth, The Postman, and the non-fiction work The Transparent Society.

Of course such a calamity is possible, and nightmares are great fun, in fiction and film. Still, look at the premise. Superficially, the lesson is the same one that the late Michael Crichton taught, every time: “If man sticks his hand where it wasn’t meant to go, it will get cut off!” It is the old warning against hubris, as ancient as Gilgamesh. But look closer. From Terminator to Jurassic Park to The Matrix and so on, the real back-story is that the terrible new mistake — like AI or resurrected dinosaurs — was done in secret… and as stupidly as possible.

It’s easy to see why this is done so often. A director’s #1 need is to get the hero into dire, pulse-pounding jeopardy as quickly as possible! Preferably against some overwhelming authority figure for the audience to hate, and for the hero to bring down, with little more than guts, defiance and sheer will. Hey, I can dig it. I’ve gone to that well myself. And the surest trick is to assume, from the start, that civilization failed. That nobody blew a whistle, no professionals checked things out, no institutions functioned and that masses of bright citizens never had a clue. Hey, it could go down that way!

Still, must that lazy assumption underlie every action epic, always and without exception?

In fairness, some directors give an occasional nod toward a civilization that’s not filled with clueless morons. Spielberg, Cameron… and New York natives stand up for Spider-Man in every film. And the resulting films are more interesting. In the Terminator world, it finally does boil down to the shared citizenship that I talk about in The Postman. Everybody, your neighbors, standing up together and making more of a difference than any band of gods or demigods.

Resource:

David Brin’s Official Website

http://www.davidbrin.com/

Ben Goertzel

Ben Goertzel is an AI researcher, head of Novamente LLC, Director of Research at the Singularity Institute for Artificial Intelligence, and AI columnist for h+ magazine.

Your question is not that well defined!

First of all: Anything is possible in my world-view… it’s all a question of probability…

But the Terminator scenario involves various elements of differing degrees of plausibility (and hence differing degrees of estimated probability).

Backwards time travel? Maybe. Many physicists feel it may be possible.

Backwards time travel, that doesn’t come along with forwards time travel, and only transports you if you’re naked (and preferably studly)? A bit less likely I’d suggest…

Robots as smart, humanlike and hard-ass as the Terminator? VERY possible, no question.

Some SkyNet analogue taking over the world? Well, if someone built a global computer security system and intentionally made it highly intelligent, autonomous and creative… so as to allow it to better combat complex security threats (and ever-more-intelligent computer worms and viruses) … well, perhaps so. It’s not beyond the pale. A narrow-AI computer security system wouldn’t spontaneously develop general intelligence, initiative and so forth…. but an AGI computer security system might… and the boundary between narrow AI and AGI may grow blurry in the next decades…

Resources:

Ben Goertzel Home Page

http://goertzel.org/

J. Storrs Hall

J. Storrs (“Josh”) Hall is President of the Foresight Institute, author of Nanofuture: What’s Next for Nanotechnology, a fellow of the Molecular Engineering Research Institute, and a Research Fellow of the Institute for Molecular Manufacturing. His most recent book is Beyond AI: Creating the Conscience of the Machine.

On the face of it, it’s ludicrous. Why would a supposedly intelligent network mind waste so much energy and resources indulging in cinematically grandiose personal combat in grim wastelands with loud music? If it, for some reason, wanted to kill off humanity, it would just whip up a thousand new flu strains and release them all at once — and use neutron bombs to clean up.

On the other hand, if all you mean is are the robots going to take over, it’s more or less inevitable, and not a moment too soon. Humans are really too stupid, venal, gullible, mendacious, and self-deceiving to be put in charge of important things like the Earth (much less the rest of the Solar System). I strongly support putting AIs in charge because I’m dead certain we can build ones that are not only smarter than human but more moral as well.

Resources:

Autogeny

http://autogeny.org/

Beyond AI: Creating the Conscience of the Machine

http://www.amazon.com/Beyond-AI-Creating-Conscience-Machine/dp/1591025117

Professor Anette (Peko) Hosoi

Anette Hosoi is Professor of Mechanical Engineering at MIT, noted for her work on the Robosnail.

Magic 8-ball answers:

Time travel: Don’t count on it.

Time travel that only works for naked people: Very doubtful.

The internet becomes self-aware and turns evil: Don’t count on it.

T-1000 robots: Reply hazy, try again. Novel self-assembling smart matter is undoubtedly in our future. Many research groups are already developing materials that are capable of healing and replication (two things that biology does extremely well). Imagine smart infrastructure such as bridges and power lines that monitor and repair themselves, or search and rescue robots that can “flow” through debris and around obstacles to reassemble on the other side. There is an enormous potential for incredible new technologies to grow out of these advances in fundamental material science. (Homicidal obsessive robots made of smart matter: Outlook not so good.)

A robot-filled future: Without a doubt. But these machines are unlikely to look anything like the terminators. So far bipedal robots have been good for show but are largely impractical. The robots of the future will be even more extraordinary and far stranger.

T4 will be wicked awesome: Outlook good.

Resources

“Robosnail”

http://www.mongabay.com/external/robosnail_robotics.htm#1

Bob Mottram

Bob Mottram is a software developer specializing in robotics and computer vision for industrial, aerospace, and military applications.

Well, the time travel aspect of the Terminator movies probably isn’t possible, otherwise by now we’d have a lot of tourists from the future coming back to take photos of quaint and carefree early 21st century life, and also to place winning bets. What you’re probably referring to is the idea that there comes a point in time where technology can function more or less autonomously from the people who created or administered it, and that by some quirk of circumstance the technology comes to view humans as a hostile aggressor or an obstacle to progress that needs to be removed.

I was a teenager in the 1980s and so saw the first Terminator movie, although I must admit that it didn’t have very much effect on me because at that time the “Terminator scenario” just seemed like pure fantasy. If you ask most people who are involved with robotics research or development today, they will also dismiss the notion of a robotic takeover as merely an entertaining Hollywood plot device. Despite some advances in the last couple of decades, a vast chasm remains between the sorts of capabilities with which robots are endowed in the movies and what even the most advanced contemporary robots can do in reality. The likelihood of a Terminator scenario occurring in the near future, as in the next few decades, seems low. This is mainly because a great deal of work remains to be done in order to reach a point where technology becomes fully self-sustaining and can exist independently of human intervention for indefinite periods of time. Even if the robots were to rise up and overthrow us, in the absence of infrastructure capable of sustaining their existence this would indeed be a Pyrrhic victory.

Looking to the longer-term future, which might be the late 21st century or beyond, a Terminator scenario would at least in principle be possible if you make a sufficient number of assumptions. In this flighty vision of a future world we imagine that the industrial revolution continues more or less unabated (despite the end of cheap oil) and the relentless march of automation, powered by the never-ending quest for greater and greater economic efficiency, extends into all areas of life. Agriculture is fully automated, as is virtually all industrial production, with humans living out little more than a parasitic existence, going along for a free, or almost free, ride. We can safely assume that no significant changes have occurred within human psychology, and that wars still occur from time to time, mainly targeted at disrupting the machinations of the technological bubble within which humanity has insulated itself. If there comes a time when humans are essentially superfluous, merely froth on the technological wave (from the human perspective, a kind of comfortable retirement), then it is at least in principle possible that we could be trivially usurped by a rival species of militant machinery.

Of course it’s hard to make predictions about things that might or might not occur in the distant future, but one thing we can depend upon is that evolution will continue both in the biological and post-biological realms. We may be able to hold rivals at bay by ensuring that we retain control over their ability to reproduce, but in the long term as a strategy this probably isn’t going to buy us very much time. This isn’t a “Judgment Day” scenario though, it’s just another chapter in the varied history of life on Earth, which has already seen countless batons transferred from one species to the next.

I’m not much of a visionary though, and am far more concerned about things which might actually occur within my own lifetime. In the next few decades I think there may be dangers arising from the uses and abuses of robotics technology, in a similar manner to the way that existing computer technology suffers from various forms of abuse. As I write this, many of the industrialized nations are gearing up for telerobotic warfare: robot planes and an assortment of unmanned ground vehicles. What we’ve already seen with the Predator UAV and PackBots is just the tip of a very large iceberg. As Illah Nourbakhsh put it in a recent talk, what we should fear in the foreseeable future is not unethical robots, but unethical roboticists (see Resources below). Unlike conventional fighter planes or tanks, telerobots capable of delivering deadly force will not be expensive to manufacture and so will inevitably fall into the hands of non-state actors, which may include criminal gangs and cults. As a near-term scenario, imagine a cult consisting of a few tens of followers building a hundred telerobots equipped with firearms, then driving them into a city center under supervisory control similar to a real-time strategy game. All of the technology needed for such a dastardly plan exists today, and it will only get cheaper and less complex with time.

People love to focus on grandiose gloom and doom scenarios — it makes their own personal troubles appear diminutive in stature — but at least as far as robotics is concerned I think the future is bright, and that the overwhelming majority of robotics applications in the foreseeable future will be peaceful and beneficial.

Resources:

Illah Nourbakhsh Talk

The Streeb-Greebling Diaries

http://streebgreebling.blogspot.com/

John Weng

John (Juyang) Weng is a Professor in the Department of Computer Science and Engineering at Michigan State University; a member of the MSU Cognitive Science Program; a member of the MSU Neuroscience Program; co-founder of the Embodied Intelligence Laboratory; and a member of the PRIP Laboratory.

Yes, it is possible. However, this requires further advances in a new field called autonomous mental development (AMD), whose new professional journal, IEEE Transactions on Autonomous Mental Development, will publish its first issue in May 2009. If a robot runs a task-specific program, its capabilities are very limited; it is not able to deal with any of the complex scenes in Terminator. However, robots that are capable of autonomous mental development are totally different. They are able to develop their internal mental representations and skills while interacting with the physical world, very much the way a human individual develops from infancy to adulthood. The AMD field has recently made some major breakthroughs indicating that a human-like machine brain is possible from an engineering point of view. In other words, a Terminator-like scenario is not only theoretically possible, but also practical to fabricate in the foreseeable future.

Resources:

Juyang Weng

http://www.cse.msu.edu/~weng/

Daniel H Wilson

Daniel H. Wilson completed his Ph.D. in robotics in 2005 at Carnegie Mellon University’s Robotics Institute where he worked under Hans Moravec. He is author of the humor book, How To Survive a Robot Uprising and host of The Works, a series on the History Channel that debuted on July 10, 2008.

Nothing is impossible, but the spontaneous evolution of a super-intelligent artificial intelligence (e.g., “Skynet”) and the subsequent design, production, and employment of a fully autonomous robot army (with “Terminator” model humanoid robots) is unlikely in the extreme. And don’t even get me started on time travel.

On the other hand, I fully expect to see humanoid robots deployed to battle within the next several decades. Terminator-style robots are easily and naturally tele-operated by human soldiers, they can use our weapons and vehicles, and they can naturally negotiate urban environments designed for humans. Best of all, humanoid robots offer a natural means of interaction with potentially hostile locals — because these days war is less about conventional fighting on mass scale and more about cultural awareness. So instead of unmanned robotic drones buzzing overhead, I imagine humanoid robots patrolling the streets wearing local garb, speaking the local language, and obeying local customs.

Vernor Vinge

Vernor Vinge is the science fiction author largely credited with inventing the idea of the technological singularity. Among his more influential books are Marooned in Realtime, A Fire Upon the Deep, the short story collection True Names and Other Dangers, and his most recent novel, Rainbows End.

When it comes to movies that depict existential threats, I don’t think Terminator is as likely as classic oldies such as Dr. Strangelove and On the Beach. The possibility of nuclear war under a M.A.D. (mutually assured destruction) strategy has been in eclipse since the dissolution of the Soviet Union and the rise of the nuclear terrorism threat, but in the future it is a very real risk, whether as an accident, a side effect of other crises (such as global warming), or from diplomatic bungling (such as brought us World War I).

Resources:

Vinge Books, DVDs etc.

http://www.amazon.com/s/ref=nb_ss_gw?url=search-alias%3Daps&field-keywords=Vernor+Vinge&x=0&y=0