Wei-Huan Chen

wchen@jconline.com

Los Angeles — day. Traffic, skyline, the hum of modern life. A girl laughs on a swing set. Then a sudden flash and the sound of scorching. Dissolve to white. Cut to the year 2029. Los Angeles is now a lightless, crumbled landscape dominated by killer machines.

This is the iconic opening scene of "Terminator 2: Judgment Day," James Cameron's apocalyptic vision of what could happen if we let machines take over the world. It's a thrilling work of fiction. After all, the 1991 blockbuster is nothing more than the progeny of Hollywood, an industry that has made exploiting our deepest fears a $10 billion business. Or is it?

A growing wave of scientists, engineers, authors and human rights activists is worried there's more truth to the idea of a machine-dominated era than we originally thought. On one hand, author and scientist Ray Kurzweil predicts a pseudo-utopian world by 2045, in which humans achieve immortality through an event called "The Singularity." But others, like James Barrat, author of "Our Final Invention: Artificial Intelligence and the End of the Human Era," predict a grimmer future.

"Will it be a handover or takeover? Do we meld with the machines like Kurzweil proposes, or do they simply outsmart us?" Barrat asked. "The short-term problem with A.I. is who controls it. The long-term problem is, can it be controlled?"

Barrat speaks as part of Purdue University's "Dawn or Doom: The New Technology Explosion," a public summit exploring the potential dangers of technology through the lens of science, philosophy, ethics and pop culture.

Scientists believe we are currently in a "technology explosion," an era of rapid and uncontrollable technological growth. Futurists like Kurzweil believe that the exponential growth of technology, as demonstrated by Moore's law — the doubling of transistors in circuits every two years — will lead to incredible achievements.

Barrat looks at it a different way. Around 2002, he was working on a documentary on artificial intelligence when he interviewed Kurzweil and roboticist Rodney Brooks, both of whom Barrat now describes as simply too optimistic. It wasn't until he met science fiction author and mathematician Arthur C. Clarke that he saw the picture of impending doom.

"He said, we humans govern the future not because we're the fastest or strongest creatures but because we're the most intelligent. When we share the planet with creatures more intelligent than we are, they will steer the future. That rained on my parade because I was much more in the Kurzweil camp," Barrat said. "I was talking to A.I. makers and people who work in A.I. All of them agreed that, within 100 years, most of the decisions in our lives will be made by machines. I was alarmed."

Could an A.I. takeover happen? Barrat admits there aren't examples of super-intelligent A.I. right now. But look at computers that beat grandmasters in chess, or Watson, the A.I. that beat the world's two biggest trivia champions, Brad Rutter and Ken Jennings, on "Jeopardy!" Watson now instructs nurses as a specialist in lung cancer.

"At bottom, human beings are just a bunch of material things in space. They're atoms and quarks," said ethics and metaphysics expert Mark Bernstein, a featured speaker at "Dawn or Doom." "I don't see any reason why a machine — which is made up of atoms and quarks itself — if there's a certain complexity to that arrangement, why it can't be conscious just as much as we are."

Computers are better than humans — super-human — at not only chess and quiz shows but bridge, backgammon, Scrabble and driving a car. Humans remain better at Go, and much better at recognizing speech, objects and handwriting, at processing language and at translation. But say, in 50 years, A.I. has surpassed humans at everything, including scientific research, investment banking, love and poetry. Why would machines want to enslave or destroy us, as much of pop culture suggests?

The theories of computer scientist Steve Omohundro suggest that rational agents — self-aware beings with set goals — would potentially place survival and expansion over human codes of morality.

"The tipping point is when you get a self-aware and self-improving machine," Barrat said. "It will need resources, just like we do, to achieve its goals. Energy, money, whatever. It will be efficient with those resources. It will be self-protective — it won't want to be turned off because that will be the worst thing for its goal achievement. And it would work to improve its own intelligence."

There's ample evidence these machines could have weapons. There are 56 countries investing in research on automated military technologies, according to Barrat's book. We already have weaponized vehicles like the Northrop Grumman X-47B, which can fly, refuel and land by itself. Researchers at the Georgia Tech Research Institute are working toward drones that could have the ability to make lethal decisions on the battlefield. In 2013, the Campaign to Stop Killer Robots, launched by Human Rights Watch and other groups, said swift action is needed "to prevent fully autonomous weapons."

A.I. isn't the only technological threat, scientists say. In 1859, the Earth experienced a massive solar flare, sending solar winds that caused telegraph systems to fail all over the world, even shocking some operators with electricity. With a society that's increasingly dependent on electronics-based infrastructure, a similar "Carrington event" — NASA puts the chance at 12 percent between now and 2022 and notes that we had a near-miss in 2012 — would be downright catastrophic.

"If we had widespread disruption and possibly even destruction of some of the sensitive electronic devices we're using, what would we do? What kind of fallback would we have?" said Gene Spafford, a leading cybersecurity authority who speaks as part of "Dawn or Doom" at 1 p.m. at Fowler Hall.

But Spafford is more optimistic than the doomsayers. Sure, technology brings a motley array of security concerns, but that's exactly why engineers, policymakers and inventors are working hard to keep technology safe and in good hands. A malicious A.I.-dominated society? A world controlled by criminally motivated hackers? A catastrophic solar flare that cripples society? These are possible outcomes. But they're also preventable. "These are bleak potential futures that, if we apply ourselves," Spafford said, "we could probably avoid."

If you go

What: Dawn or Doom: The New Technology Explosion

When: 12 to 5 p.m. Sept. 18

Where: Purdue University, various locations

How much: Free

Also: Full schedule and locations at purdue.edu/dawnordoom.