The Chicken Little thinking of some folks, and even scientists, that pits humans against robots in some sort of nightmarish struggle for global control has to stop. Making matters worse is the fact that we now have a bunch of learned computer scientists discussing the risks of artificial intelligence run amok. I have to tell you, at this point, I'm more worried about the lack of intelligence run amok.

According to a story by New York Times tech writer John Markoff, scientists recently gathered at the Asilomar Conference Grounds on Monterey Bay for a conference on Artificial Intelligence (AI). If I'm reading this piece correctly, the whole event revolved around discussing robotic intelligence doomsday scenarios: What if robots learned to fake our voices? What if self-replicating malware and viruses took over all of our computers? Is a robot doctor that fakes empathy dangerous? (Since I think most of my real doctors have faked empathy, I'd have to say no to this last one.)

Markoff notes that the AI event organizers (the Association for the Advancement of Artificial Intelligence) chose the Monterey Bay, CA location specifically because of another conference that was held there over 30 years ago. Back then, the assembled scientists talked about, among other things, genetically mutated fruits and vegetables. Thank goodness they did. After the confab, we all came to our senses and stopped eating really red delicious apples, perfectly yellow bananas, and super-sized tomatoes. Actually, we didn't. I mean, I guess the 1975 conference was a wakeup call of sorts. If it hadn't happened, deranged cantaloupes might be roaming the countryside today, looking for fresh lamb or small children to eat.

Okay, I'm exaggerating. The 1975 conference only tangentially dealt with genetically engineered produce. More broadly, it was about the potential hazards inherent in genetic engineering. Markoff notes that the conference actually helped further scientific study. So, it was a course correction of sorts for DNA research. I wish I could say I thought the outcome of this AI confab would be as positive.

What worries me is that so many people think we need a similar sort of correction for robotics development: If we don't control AI, it will control us. Most roboticists I've talked to tell me that sentient beings are at least 50 years away. The "singularity" everyone fears (where robot intelligence outstrips human intelligence and we lose control of the world) is not something we'll see this century, or ever, for that matter. I'm confident that there won't be any Terminators, maid- or sex-bots, or any robots that actually understand or can use human-like emotions in our lifetime.

Humanoid and anthropomorphic robots are not the only ones everyone, including the researchers at the AAAI conference, is worried about. As the article notes, robotics and AI are already everywhere. You'll find the technology in our homes, cars, industry, and even in war. The computer scientists (indeed, some very smart people) at this AI conference seem to worry that, at any moment, our anti-lock brakes could decide to lock up and try and kill us, or that an iRobot Roomba robot vacuum could, in a moment of pique, suck up the cat instead of simply eating the cat hair the feline left all over the rug.

An even bigger, perhaps more sinister, threat was also discussed at the conference: Malware and viruses can turn our computers into bots, or as Markoff terms them, "worms and viruses that defy extermination." He's talking about polymorphic viruses, which can change at will, sort of. They can actually only change on a more or less random basis. The malware is programmed to change to avoid detection, but it isn't actively evading any specific kind of detection; the two actions are unrelated. We try and detect it one way, and the malware changes in a way our software isn't yet programmed to detect. The virus would change whether or not we were trying to detect it. That's not artificial intelligence; it's just smart programming.
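A toy sketch makes that distinction concrete. Nothing here comes from the article or from any real malware: the "signature," the XOR re-encoding, and the naive scanner are all invented for illustration. The point is that the mutation step is blind, re-encoding the payload at random every generation with no knowledge of what any scanner is looking for:

```python
import random

SIGNATURE = b"HELLO"  # the fixed byte pattern a naive scanner knows about

def mutate(payload: bytes) -> bytes:
    """Re-encode the payload with a fresh random XOR key.

    The mutation happens every generation regardless of whether
    anyone is scanning; it is not a response to detection.
    """
    key = random.randrange(1, 256)
    return bytes([key]) + bytes(b ^ key for b in payload)

def decode(encoded: bytes) -> bytes:
    """Recover the original payload (behavior is unchanged)."""
    key = encoded[0]
    return bytes(b ^ key for b in encoded[1:])

def naive_scan(data: bytes) -> bool:
    """A scanner that only looks for the original fixed pattern."""
    return SIGNATURE in data

encoded = mutate(SIGNATURE)
assert decode(encoded) == SIGNATURE  # same behavior after mutation
assert not naive_scan(encoded)       # but the fixed signature is gone
```

The `mutate` function never consults the scanner; it just rolls the dice. That's the whole trick: dumb, mechanical variation that happens to defeat a dumb, mechanical detector.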

A Real Expert

What if I'm wrong about AI and robotics? What if we really DO need to be worried, right now? I've been covering robotics for years, but I don't actually work in the industry. So, I decided to talk to someone who does. Colin Angle is co-founder, chairman, and CEO of iRobot, which makes two of the most widely recognized robots in the U.S., if not the world: the Roomba and the PackBot. The latter is currently in use in the Iraq and Afghanistan wars. Angle was particularly blunt in his assessment of the New York Times report: "[It] strikes me as a little exploitive and not sure where the point of the article was going." As for the conference, he said: "I'm fine with people worrying about these sorts of issues, but we're a long way from the place where these guidelines must be put down."

Angle, like me, is frustrated by how people's robot fantasies overwhelm their good sense. Still, when it comes to robotics, it's always been this way. "The so-called futurists have been profoundly wrong about robots for 50 years," Angle told me. When the Jetsons were on TV, he explained, Rosie the Robot was part of it because people really believed we'd have house cleaning robots in just a few years. That was 1962. "We have Roomba now, a far cry from Rosie," added Angle.

The MIT grad did offer that the current guidelines for military robots, which require a "man in the loop," are a good and necessary thing, and something Angle is not interested in changing.

As Angle told me, it's not a bad idea to have people thinking about these things, but what the AAAI should be working on is how to push the development of artificial intelligence along at an even faster pace. Enough with this handwringing and the need to avert non-existent threats. Listen to nervous Nelly conference organizer and Microsoft researcher Dr. Eric Horvitz: "...we have to make some sort of statement or assessment... people [are] very concerned about the rise of intelligent machines."

Horvitz should know better. For one thing, today's robots simply aren't as smart as people think. iRobot's Angle told me, "These machines know almost nothing about their environment, except where a wall is and where something might be moving, incredibly rudimentary." As for the artificial intelligence Markoff and the AI confab attendees are worried about, that's essentially in the realm of science fiction, too. "The amount of understanding and awareness of the environment required to have a robot form a realistic opinion about what is right and wrong is extremely far off," said Angle.

Why is everyone picking on "intelligent" machines, anyway? We've lived with dumb ones for centuries. They've caused their share of accidents, mutilations, and outright disasters, usually at the hands of the world's most sentient beings: humans. I can't imagine intelligent machines doing any worse.