
Off the keyboard of Surly1

Originally published on the Doomstead Diner on August 23, 2014

Your Robot Overlord Does Not Love You

The Three Laws of Robotics, a set of rules devised by science fiction author Isaac Asimov:

A robot may not injure a human being or, through inaction, allow a human being to come to harm.

A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

― Isaac Asimov, “I, Robot”
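The laws matter less as individual rules than as a strict precedence ordering: each yields only to the ones above it. Here is a toy sketch in Python of that ordering; the `Action` fields and the `permitted` function are my own invention, purely to make the precedence visible, and bear no resemblance to how any real system could represent "harm."

```python
# Purely illustrative: Asimov's Three Laws as an ordered veto chain.
# Every field below is invented for this toy example.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool              # would the act injure a human?
    allows_harm_by_inaction: bool  # would refusing to act let a human come to harm?
    ordered_by_human: bool         # was the act ordered by a human?
    endangers_self: bool           # does the act risk the robot's own existence?

def permitted(action: Action) -> bool:
    # First Law outranks everything: never harm, never permit harm by inaction.
    if action.harms_human or action.allows_harm_by_inaction:
        return False
    # Second Law: obey human orders, but only once the First Law check has passed.
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, last in line behind the other two.
    return not action.endangers_self

# A robot ordered to injure someone: the order loses to the First Law.
print(permitted(Action(True, False, True, False)))   # False
```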

In the process of preparing last week’s overheated screed, I came across an article which, with my own piece already running to nearly 4,000 words, consideration for my audience bade me defer to another day. It reported that Elon Musk, he of Tesla and SpaceX, and widely regarded as one of the smartest guys in the room, had concluded that one of the gravest dangers to the continuation of the human race was not nuclear power so much as artificial intelligence.

Consider that for a moment. Or better yet, read the article in the original here. In a couple of reported Tweets, Musk urged that we be “super careful with AI. Potentially more dangerous than nukes,” and “Hope we’re not just the biological boot loader for digital super intelligence. Unfortunately that is increasingly probable.” Musk’s concern was spurred by a book by Nick Bostrom of Oxford’s Future of Humanity Institute entitled “Superintelligence: Paths, Dangers, Strategies.”

The book addresses the prospect of an artificial superintelligence that could feasibly be created in the next few decades. According to theorists, once the AI is able to make itself smarter, it would quickly surpass human intelligence. What would happen next? The consequences of such a radical development are inherently difficult to predict. But that hasn’t stopped philosophers, futurists, scientists and fiction writers from thinking very hard about some of the possible outcomes. The results of their thought experiments sound like science fiction—and maybe that’s exactly what Elon Musk is afraid of.
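The engine of the thought experiment is a feedback loop: a system capable of improving its own design gets better at improving itself with every pass. The toy model below is my own illustration, not anything from Bostrom’s book; the numbers are arbitrary, and only the shape of the curve matters.

```python
# Toy model of recursive self-improvement: each generation's capability sets
# the rate at which the next generation improves. Units are arbitrary.

capability = 1.0       # assume rough human parity at the start
reinvestment = 0.10    # assumed fraction of capability applied to self-improvement

for generation in range(1, 11):
    capability *= 1 + reinvestment * capability
    print(f"generation {generation:2d}: capability {capability:7.2f}")
```

The growth is faster than exponential: by the tenth pass the system is improving itself several times faster than at the start, which is the whole point of the "biological boot loader" worry.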

So what are some of these thought experiments? Bostrom says,

“We cannot blithely assume the superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans – scientific curiosity, benevolent concern for others, spiritual enlightenment and contemplation, renunciation of material acquisitiveness, a taste for refined culture or for the simple pleasures of life, humility and selflessness, and so forth.”

Your mileage may vary, but from Gaza to Ferguson, we find these so-called human values already lacking in much of what passes for humanity. What worries Musk and his oracles are the unintended consequences of building artificial intelligence detached from ordinary human ethics. Future AI might find more value in computing the decimals of pi or ensuring its own survival than in solving human problems in ways that we might recognize as helpful.

Put another way by AI theorist Eliezer Yudkowsky of the Machine Intelligence Research Institute:

“The AI does not love you, nor does it hate you, but you are made of atoms it can use for something else.”

Without recapitulating the entire article, its point is that it is difficult for programmers to anticipate the instructions needed to give an AI both the ethical dimension and the problem-solving capability required to safeguard human life. Meanwhile, in other parts of our military-industrial complex, our tax dollars are already working overtime to create artificial creatures whose purpose is ostensibly benign, but whose implications are terrifyingly apparent to anyone who has seen the Terminator movies.

In a breezy article on Geek Pride entitled “5 Apocalypses You Are Probably Not Ready For,” the authors consider not only technology that enables one monkey to control the actions of another monkey simply by thinking, but also a device they call “Human Powered, Googlezon Big Spider DroneBotcalypse.”

Now, a robot that can’t be knocked over is terrifying enough. It can also climb stairs and is allegedly powered by your hopes and dreams. Why Google is doing this is anyone’s guess, but we can only be led to assume that it is to take over the world. “Well,” you say, “it’s not like they’re trying to watch our every move or anything!” Well… So we have a company that watches everything you do online, records video of you when you’re offline, and robots that can walk up the stairs. The only way we can hide is the removal of stairs, and living in treehouses. Wrong. Enter delivery giants Amazon and their patented new delivery system: drones. The drones were initially designed to eliminate the day-long waiting period for Amazon deliveries, shortening the time to a possibility of just 30 minutes. Currently the plan is to have them manned remotely by human pilots, so we’re safe, for now. The main problem is what is known in the drone world as “SWaP — size, weight and power. This is essentially a physics problem: The larger your payload, the more lift you need. The more lift you need, the larger your battery has to be, which further adds to the weight, which adds to the power requirements, and so on” (Washington Post, 2013). Essentially what this boils down to is a matter of time and money before drones can carry a bigger payload, such as a 500 lb BigDog robot. This may seem a long way off, but all Amazon probably needs is a massive cash injection for the advances to be put into effect. Cash the likes of which Google might have. I give you Googlezon, probable merger of the late 2020s and new owners of the world.
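The SWaP spiral the Post describes is easy to put numbers on. In the back-of-the-envelope sketch below, every constant (hover power per kilogram, battery energy density, flight time) is an assumption I picked for illustration, not a real drone specification; the point is the fixed-point loop, in which a heavier battery demands more lift, which demands a heavier battery.

```python
# Back-of-the-envelope SWaP loop: heavier payload -> more lift power -> bigger
# battery -> more weight -> more lift power... iterate until the battery mass
# stops changing. All constants are illustrative assumptions, not real specs.

LIFT_POWER_W_PER_KG = 150.0   # assumed hover power per kg of all-up weight
BATTERY_WH_PER_KG = 200.0     # assumed usable specific energy of the battery
FLIGHT_TIME_H = 0.5           # assumed round-trip flight time (30 minutes)

def battery_mass_kg(frame_kg: float, payload_kg: float) -> float:
    battery_kg = 0.0
    for _ in range(100):                      # fixed-point iteration
        total_kg = frame_kg + payload_kg + battery_kg
        energy_wh = total_kg * LIFT_POWER_W_PER_KG * FLIGHT_TIME_H
        new_battery_kg = energy_wh / BATTERY_WH_PER_KG
        if abs(new_battery_kg - battery_kg) < 1e-6:
            return new_battery_kg
        battery_kg = new_battery_kg
    return battery_kg                         # did not converge: load infeasible

print(battery_mass_kg(frame_kg=5.0, payload_kg=2.5))     # small parcel: ~4.5 kg battery
print(battery_mass_kg(frame_kg=50.0, payload_kg=110.0))  # robot-sized load: ~96 kg battery
```

Even with these made-up numbers the battery ends up weighing more than half as much as everything else it lifts; push the payload or the flight time much further and the loop stops converging at all, which is the spiral the Post is describing.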

The motorized bison is a creature called BigDog, currently being developed by Boston Dynamics under a DARPA grant generously provided by you and me. The ostensible purpose is search, rescue and supply, but…

BigDog is a rough-terrain robot that walks, runs, climbs and carries heavy loads. BigDog is powered by an engine that drives a hydraulic actuation system. BigDog has four legs that are articulated like an animal’s, with compliant elements to absorb shock and recycle energy from one step to the next. BigDog is the size of a large dog or small mule; about 3 feet long, 2.5 feet tall and weighs 240 lbs.

BigDog’s on-board computer controls locomotion, processes sensors and handles communications with the user. BigDog’s control system keeps it balanced, manages locomotion on a wide variety of terrains and does navigation. Sensors for locomotion include joint position, joint force, ground contact, ground load, a gyroscope, LIDAR and a stereo vision system. Other sensors focus on the internal state of BigDog, monitoring the hydraulic pressure, oil temperature, engine functions, battery charge and others.

BigDog runs at 4 mph, climbs slopes up to 35 degrees, walks across rubble, climbs muddy hiking trails, walks in snow and water, and carries a 340 lb load.

Development of the original BigDog robot was funded by DARPA. Work to add a manipulator and do dynamic manipulation was funded by the Army Research Laboratory’s RCTA program.
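For readers wondering what “the control system keeps it balanced” means in practice, the sketch below is a generic sense-decide-actuate loop of the sort that description implies: read the gyro, compare against the desired posture, command a correction. It is my own illustration, with invented gains and crude physics, and bears no relation to Boston Dynamics’ actual software.

```python
# Generic balance loop, loosely illustrating what a control system that "keeps
# it balanced" does. Gains and plant dynamics are invented for the demo.

import random

TARGET_PITCH = 0.0      # level body, in radians
KP, KD = 8.0, 2.0       # proportional and derivative gains (arbitrary)
DT = 0.02               # 50 Hz control loop

pitch, pitch_rate = 0.2, 0.0   # start tipped 0.2 rad forward

for _ in range(200):                                          # 4 simulated seconds
    measured = pitch + random.gauss(0, 0.005)                 # "gyro" reading with noise
    torque = KP * (TARGET_PITCH - measured) - KD * pitch_rate # PD correction
    pitch_rate += (5.0 * pitch + torque) * DT                 # gravity tips it; torque fights back
    pitch += pitch_rate * DT

print(f"pitch after 4 s: {pitch:+.3f} rad (started at +0.200)")
```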

And the news keeps getting worse. Rather than embrace the high ground of “robot morality” imagined by Asimov, we find that the Pentagon is in the early days of raising a robot army. The ostensible justification is that the military is rapidly creating weapons systems that will need to make moral decisions. Current military regs prohibit armed systems that are fully autonomous. Yet the increasing sophistication of military technology demands greater and greater autonomy, and, where lives are at stake, machines capable of weighing moral factors. What could possibly go wrong?

The U.S. military is trying to develop and deploy a real-life Terminator. A research agency associated with the Pentagon has unveiled pictures of a robot that looks and walks like a man. The ATLAS robot is being developed by the Defense Advanced Research Projects Agency (DARPA) and a Massachusetts company called Boston Dynamics. DARPA, known as “the Pentagon’s weird science agency,” is the organization credited with inventing the internet. DARPA now has an intensive effort underway at its facilities to create robots such as ATLAS, and a new video reveals some of the latest developments. DARPA has told the press that ATLAS is designed to enter disaster areas, such as places contaminated by radiation or toxic chemicals, and provide relief. Yet it would also function perfectly on the battlefield.

David Swanson imagines a brave new world of Pentagon robotics:

The Pentagon has hired a bunch of philosophy professors from leading U.S. universities to tell them how to make robots murder people morally and ethically.

Of course, this conflicts with [Asimov’s] first law above. A robot designed to kill human beings is designed to violate the first law.

The whole project even more fundamentally violates the second law. The Pentagon is designing robots to obey orders precisely when they violate the first law, and to always obey orders without any exception. That’s the advantage of using a robot. The advantage is not in risking the well-being of a robot instead of a soldier. The Pentagon doesn’t care about that, except in certain situations in which too many deaths of its own humans create political difficulties. And there are just as many situations in which there are political advantages for the Pentagon in losing its own human lives: “The sacrifice of American lives is a crucial step in the ritual of commitment,” wrote William P. Bundy of the CIA, an advisor to Presidents Kennedy and Johnson. A moral being would disobey the orders these robots are being designed to carry out, and — by being robots — to carry out without any question of refusal. Only a U.S. philosophy professor could imagine applying a varnish of “morality” to this project.

The Third Law should be a warning to us. Having tossed aside Laws One and Two, what limitations are left to be applied should Law Three be implemented? Assume the Pentagon designs its robots to protect their own existence, except when . . . what?

Now BigDog has a buddy to take him for a walk. And in terms of reaction and tone, at least to my taste, these guys have it about right:

No, it’s not a souped-up version of Robby the Robot — it’s ATLAS, DARPA’s latest attempt at creating a humanoid robot. Unlike the super-realistic Petman, which was designed to test chemical protection clothing, this 330-pound monster is meant to assist in emergency situations. Riiiight... We’ve seen a proto-version of ATLAS before, but this updated unit can perform a host of new tricks, like walking through rugged terrain and climbing using its hands as feet. It has 28 hydraulically actuated degrees of freedom, and of course, two hands, arms, legs, feet, and a torso with some kind of fancy-ass monitor on it that probably goes “ping!” every once in a while. Its head is equipped with stereo cameras and — ahem — a laser finder. Eventually, DARPA says the 6-foot robot will use its articulated and sensate hands to use tools designed for humans. Hmmm, by “tools” I wonder if they mean “machine gun.”

No one who watched some of the best legal minds of a generation labor for the Bush administration to create legal justification for torture should be surprised that the Pentagon can hire ethicists and philosophers to determine under what circumstances a robot may commit murder. Paging Dr. Mengele…

Here are the three laws David Swanson posits will replace Asimov’s:

1. A Pentagon robot must kill and injure human beings as ordered.

2. A Pentagon robot must obey all orders, except where such orders result from human weakness and conflict with the mission to kill and injure.

3. A Pentagon robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Such a robot would behave in much the same manner as some of our all-too-human military today, to say nothing of SWAT-gear-hungry cops, those Barney Fifes in military drag making up for their dateless high school weekends and various manhood inadequacies by pointing loaded rifles at unarmed civilians to express their inchoate rage.

As anyone not living in a cave knows full well, the foreign policy of this country, as conducted by the neocons who staged a silent coup to control it (and control it still, despite the nominal change in political administration), operates in a conscience-free zone. So perhaps Elon Musk is correct to be worried about artificial intelligence, or more precisely, the lack of ethics that guides its technological development. Our culture has technology in spades. What it lacks is a moral dimension, other than materialism and the quest for power, to inform its use.

Thus no one should be surprised by developments like these technological fruits, or their subornation to the worst uses imaginable. Just as MRAPs, SWAT equipment, LRADs and other excess military equipment are helpfully provisioned by the Defense Logistics Agency and transferred to local cops, so too are military techniques for suppressing the populace. The police thus become an armed militia whose sole purpose is to protect the property of the .1% and to keep the rabble in line, as we have seen repeated from Oakland to Ferguson to New York City.

Clearly Big Dog and Atlas are just two projects in the robot pipeline, and these are the most visible and showy. For every ostensible “humanitarian use,” there are dozens of less humanitarian uses that don’t make the press releases.

What about the less sexy projects, the smart computers controlling systems, which will make decisions based on whatever parameters are fed into them by the best hired “ethicists and philosophers” that Pentagon money can buy? Perhaps that’s what’s keeping Elon Musk up at night. What could be next: machine-animal hybrids?

Or, on the other hand, nothing to worry about, citizen. Pass the Doritos.

***

Surly1 is an administrator and contributing author to Doomstead Diner. He is the author of numerous rants, articles and spittle-flecked invective on this site, and has been active in the Occupy movement. He shares a home in Southeastern Virginia with Contrary, and every day remarks at his undeserved good fortune at having such a redoubtable woman in his life.