Distributed Ethics for the Modern Robot

How Ethereum Will Keep Robots From Killing Us All!

Written by David Vandervort

Stop me if you’ve heard this one: Robots are going to kill us all (or at least take all our jobs). As soon as artificial intelligence gets smart enough, we’re all done for. Just wait and see. Personally, I think this is a little pessimistic but, just in case, I like to hedge my bets. One of today’s cutting-edge technologies, the Ethereum blockchain, is a perfect tool for hedging. I’ll explain how, but first I have to explain about robots and ethics.



Back in the days when I worked in factories, there was a common saying: there was never enough time to do a job right, but there was always time to do it over. That might work okay with stick-on labels (I made those for a few years) or clock radios (no, I never made those) but some things are too important to allow them to be built with ordinary human carelessness. Germ warfare comes to mind. Never let the test subjects for the experimental zombie virus escape the lab. Seriously. Don’t.



Robots are one of those things where we might not get a do-over. I don’t mean the robots we have today that weld parts onto a car, or that amplify a surgeon’s abilities far beyond what anyone can do with merely human hands and eyes. Marvels that they are, those technologies don’t come close to the capabilities of the things that are coming in the not too distant future. Think about self-driving cars. While they are not yet ready to be your designated driver next New Year’s Eve, so many companies are investing so much money and talent in them that it’s only a matter of time.



Ignore the much discussed “trolley problem” (Look here: https://en.wikipedia.org/wiki/Trolley_problem if you are not familiar with this one. But it comes up so much, how did you miss it?). The big problem with a self-driving car is that it is (or will be) a robot that you trust with your life every day. The question is, if the car runs over the neighbor’s dog, will it blame you? Or if you use it as a getaway vehicle from a bank heist, will it take you straight to the police? Or will it take your family hostage the next time they go to the mall and use them to blackmail you into buying it “upgrades”?



Think this stuff sounds far-fetched? Why? What does a machine know of loyalty or friendship or right and wrong? Even if most self-driving cars, housekeeping robots, robot doctors and whatever else we make behave appropriately, once there are a few million of them in circulation, even a tiny error rate will cause a lot of harm. Fortunately, there is an active and growing field of study called AI Safety (also roboethics, AI security and a few other things) in which people study ways to teach robot minds how to understand and apply ethical principles. Smart people are working on the problem.



As a software developer, I worry about how even the most brilliantly conceived roboethics methods will be implemented. Theory and practice don’t generally line up. In theory, almost all website hacks can be prevented. In reality, bringing together the people who have the knowledge, training and skill with the time, tools and discipline to do the work properly is very hard. Is it really likely that programming ethical robots is easier than securing a website?



So assume that no robot will have perfect (from a human standpoint) ethical judgment, no matter how brilliant the theory behind it. Some robots may malfunction. Some may drift from an ideal state over time. Worse, we might not know what is happening until it is too late. Think of what would happen if a self-driving car suddenly started driving down the wrong side of a busy freeway. Dozens of people might be injured or killed. The car might only stop when police hit it with an electromagnetic pulse, frying its circuits. Unfortunately, the EMP would also damage the data in the black box, destroying our chance to find out why it went wrong.

This is where the blockchain comes in. How about we implement a Robot Ethics Blockchain (REB)? It would work something like this: Every robot in the world has an ethics engine that evaluates its environment and its decisions for their ethical dimensions. It creates messages describing these dimensions and sends the messages to the network that is made up of all the robots in the world, including the ones humans maintain just to monitor the rest of them (We will NOT call those the secret robot police).
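To make that a little more concrete, here is a minimal sketch of what one of those ethics messages might look like. Everything in it is hypothetical: the `EthicsMessage` class, its field names, and the example values are stand-ins for whatever standardized schema a real REB would need.

```python
import hashlib
import json
import time
from dataclasses import asdict, dataclass

@dataclass(frozen=True)
class EthicsMessage:
    """One hypothetical ethics-engine report, ready to broadcast to the network."""
    robot_id: str      # which unit sent this
    timestamp: float   # seconds since the epoch
    situation: str     # short description of what the robot observed
    decision: str      # what it decided to do
    confidence: float  # how sure its ethics engine was, 0.0 to 1.0

    def serialize(self) -> str:
        # Canonical JSON (sorted keys) so every node hashes the same bytes.
        return json.dumps(asdict(self), sort_keys=True)

    def digest(self) -> str:
        # A SHA-256 fingerprint of the message, suitable for inclusion in a block.
        return hashlib.sha256(self.serialize().encode()).hexdigest()

msg = EthicsMessage(
    robot_id="car-0042",
    timestamp=time.time(),
    situation="pedestrian stepping into crosswalk",
    decision="brake and yield",
    confidence=0.97,
)
print(msg.digest())
```

The point of the canonical serialization is that every robot and monitor on the network computes the same fingerprint for the same message, which is what lets the network agree on what was reported.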

As in Bitcoin and Ethereum, ethics messages are grouped into blocks, and blocks are added to the blockchain. Every robot and computer with enough processing power and storage space keeps a copy of the whole chain. This gives us the advantages of any blockchain. It makes the ethical calculus of machines public, allowing people to discover how well all this roboethics research actually works. Rather than waiting six months to take your robot in for its 10,000-mile checkup, its ethical history is written to the REB in very close to real time. We might even be able to catch problems with individual units the moment they start to drift off the straight and narrow, BEFORE they run amok.
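The chaining itself can be sketched in a few lines. This toy version skips everything a real network needs (consensus, signatures, peer-to-peer gossip) and shows only the property the idea relies on: each block commits to the hash of the one before it, so editing any past block breaks every link after it.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's canonical JSON form."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

class EthicsChain:
    """Toy hash-linked chain of ethics-message blocks (no consensus, no mining)."""

    def __init__(self):
        # Genesis block with a dummy previous hash.
        self.blocks = [{"index": 0, "prev": "0" * 64, "messages": []}]

    def add_block(self, messages: list) -> dict:
        block = {
            "index": len(self.blocks),
            "prev": block_hash(self.blocks[-1]),  # commit to the previous block
            "messages": messages,
        }
        self.blocks.append(block)
        return block

    def verify(self) -> bool:
        # Recompute every link; a tampered block invalidates the chain.
        return all(
            self.blocks[i]["prev"] == block_hash(self.blocks[i - 1])
            for i in range(1, len(self.blocks))
        )

chain = EthicsChain()
chain.add_block(["car-0042: braked for pedestrian"])
chain.add_block(["vac-0007: declined to vacuum the cat"])
print(chain.verify())  # True
```

If someone edits an old block's messages, `verify()` returns False, because the stored `prev` hashes no longer match. That tamper-evidence is the whole appeal of putting ethics records on a chain.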



Once data is added to a blockchain, it stays there, unchanged, forever. Companies that sell machines with defective ethics will find this inconvenient. There will be no way for them to alter or hide the evidence. This is a better guarantee than we currently get with food or cars. Here is what I think is one of the greatest strengths of this system: It doesn’t just have to passively record ethical decisions. Not every ethical decision is on the level of driving on the wrong side of the road or setting fire to the house, and they do not always play out in fractions of a second. What if some robot faces the question of whether to turn in its owner for a possible crime, or to ask the owner about it, instead? Or to search for more evidence on its own? This is the kind of problem where the REB could be used as a sounding board. The robot could send several scenarios to the network and ask for advice on what to do.
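A sounding-board query could be as simple as broadcasting the candidate scenarios and tallying the responses. The sketch below is purely illustrative: the advisors are stand-in functions with fixed preferences, and a real REB would need signed responses and a much richer notion of advice than a majority vote.

```python
from collections import Counter

def ask_network(scenarios, advisors):
    """Broadcast candidate actions; return the one most advisors endorse."""
    votes = Counter(advisor(scenarios) for advisor in advisors)
    return votes.most_common(1)[0][0]

# Hypothetical stand-ins for other robots (or humans) on the network,
# each simply endorsing a fixed choice if it is among those offered.
def prefers(choice):
    return lambda scenarios: choice if choice in scenarios else scenarios[0]

scenarios = [
    "report the owner to the police",
    "ask the owner about it first",
    "quietly gather more evidence",
]
advisors = [
    prefers("ask the owner about it first"),
    prefers("ask the owner about it first"),
    prefers("report the owner to the police"),
]
print(ask_network(scenarios, advisors))  # "ask the owner about it first"
```

Even this toy version captures the shift the paragraph describes: the decision is no longer made by one mind in isolation, but by polling many independent ones.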



Advice from good people (and robots) is a great way of clarifying ethical dilemmas and working through them. This may be the best aspect of the system, because it breaks ethical considerations out of the solitary confines of the robot’s mind and puts them out where they can be addressed by multiple algorithms, thought processes and perspectives. It fights back against the otherwise sinister mystery of why robots do the things they (will someday) do. It may even help make them part of a community, assuming that along with intelligence and ethics, someone figures out how to give them a sense of community.



There are some issues to be worked out, of course, such as how to write ethical evaluations and decisions into something smaller than an encyclopedia. One of the weaknesses of most current blockchain technology is that a busy chain takes up a lot of space and never stops growing. The REB, being fed by possibly millions of robots, could become unmanageable in short order. There is also an important question of privacy. The person described before, whose robot was thinking of turning him in to the police, would not appreciate having all the details of his life spewed across a public blockchain. Yet we still want to be able to use this system to find malfunctioning units.
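One standard way to square immutability with privacy is a commitment scheme: put only a salted hash of each sensitive record on the chain and keep the record itself off-chain. The chain still proves the record existed and was never altered, but the details stay private unless someone holding the record and salt (say, an investigator) chooses to verify them. The sketch below is illustrative, not a full design; the record contents and identifiers are made up.

```python
import hashlib
import json
import os

def commit(record: dict, salt=None):
    """Return (digest, salt). Only the digest would go on the public chain."""
    salt = salt or os.urandom(16)  # random salt blocks guess-and-hash attacks
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(salt + payload).hexdigest(), salt

def verify_commitment(record: dict, salt: bytes, digest: str) -> bool:
    """Anyone holding the record and salt can check it against the on-chain digest."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(salt + payload).hexdigest() == digest

# Hypothetical sensitive record: stays with the robot and its owner, off-chain.
record = {"robot_id": "home-0099", "event": "considered reporting owner"}
digest, salt = commit(record)
print(verify_commitment(record, salt, digest))  # True
```

This also helps with the storage problem: a 64-character digest on-chain is far smaller than the encyclopedia-sized evaluation it stands for.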



I view these as implementation details: engineering problems rather than questions in need of some great scientific breakthrough. Let’s save the breakthroughs for making the leap from amazingly complicated machines to truly intelligent ones, with strong ethics helpful to humanity, continuously updating a Robot Ethics Blockchain for all to see.