It’s common knowledge that an incorporated business entity is effectively recognized as a person in much of Western law. But what you might not know is that the US state of Tennessee recently became the first state to rule that automated driving systems can be considered people. I think this is a smart decision by the Tennessee legislators: the first in what will become a blizzard of similar moves everywhere as our societies try not to be outrun by technology. The legislators, it seems, can sense a revolution coming. While the decision was plainly intended to clear the way for autonomous vehicles, the legislation also managed to create a little splash damage among existing technologies, such as those offered by Tesla and others. So if you keep your finger on the pulse of these things, you’re in for an interesting ride.

Fully autonomous vehicles are closer than you might think and might, in fact, already be appearing in your rear-view mirror. So it’s high time we took our collective heads out of the sand and gave the issue of robot ethics some serious airtime. What should an autonomous vehicle do when it has to choose between endangering its passenger and endangering a pedestrian? There are many variations on dilemmas like this. Called trolley problems by academics, they are no longer hypothetical, and they are so diverse and ingenious that MIT has collected many of them into an online game. If you’re wondering about this specific question, a popular line of reasoning begins by asking: well, how would a human driver judge and respond to the situation? Whoever or whatever is in control, it’s going to be a tough call, and many rational actors would agree that the best thing to do is take the path of minimum injury.

“Your Honor, I know the plaintiff suffered a broken leg, but my only available alternative action would have caused my passenger to have suffered two broken legs.”

Persuaded? I’m not sure I am, but it’s clear that ethical reasoning was involved. Unfortunately, in the real world, there are no hard and fast answers. ‘Minimizing injury’ may represent only one of several equally valid strategies. What about ‘maximizing good’? The two aren’t necessarily the same thing, for any given definition of ‘good’. Someday, we may be able to choose which kind of ethical framework we want installed in our (autonomous) cars, just as easily as we can choose their colors today.
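To make that idea concrete, here is a minimal sketch in Python (entirely illustrative: the `Outcome` fields, the injury scores, and both policy functions are my invention, not any vendor’s real system) of what a swappable, installed ethical framework might look like:

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """The predicted result of one candidate maneuver."""
    description: str
    injuries: int      # toy severity score, e.g. one point per broken leg
    lives_saved: int   # toy proxy for one definition of 'good'

def minimize_injury(outcomes):
    """Pick the maneuver whose predicted outcome hurts people least."""
    return min(outcomes, key=lambda o: o.injuries)

def maximize_good(outcomes):
    """Pick the maneuver that produces the most 'good', for this toy definition."""
    return max(outcomes, key=lambda o: o.lives_saved)

candidates = [
    Outcome("brake hard, endanger passenger", injuries=2, lives_saved=2),
    Outcome("swerve, endanger pedestrian", injuries=1, lives_saved=1),
]

# The 'ethical framework' is just a swappable parameter, chosen like a paint color.
print(minimize_injury(candidates).description)  # -> swerve, endanger pedestrian
print(maximize_good(candidates).description)    # -> brake hard, endanger passenger
```

Note that on the very same predicted facts, the two policies disagree, which is exactly why the choice of framework matters.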

When it comes to the practice of teaching ethics to robots, the experts divide into two camps. The top-downers strive to embody well-proven systems of human ethics, such as the Geneva Conventions, within a robot’s programming. The bottom-uppers prefer that a robot learn for itself, using deep-learning methods to self-train until a workable set of ethics emerges. There are dangers in both approaches: human ethical systems may not be self-consistent enough for robot use, while machine-learning methods are notorious for producing decisions that are often valid but inscrutable and hard to explain. Some roboticists even resent the use of the word ‘teaching’ in this context. To them, teaching implies a personal, perhaps sentient connection between teacher and student, far from the presentation, ingestion and repeated manipulation of data common today.
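As a rough caricature of the two camps (my own toy sketch, not a survey of real systems; the rule names and the frequency-table ‘learner’ standing in for deep learning are all assumptions), the contrast might look like this:

```python
from collections import Counter

# Top-down: the ethics are written by hand, directly into the program,
# in the spirit of codified human rules such as the Geneva Conventions.
FORBIDDEN = {"target_noncombatant", "use_banned_weapon"}

def top_down_permitted(action: str) -> bool:
    return action not in FORBIDDEN

# Bottom-up: the ethics are whatever pattern the learner extracts from
# labeled examples. (A frequency table is a toy stand-in for deep learning.)
class BottomUpEthics:
    def __init__(self):
        self.votes = {}  # action -> Counter of {"ok", "not_ok"} labels seen

    def train(self, action: str, label: str):
        self.votes.setdefault(action, Counter())[label] += 1

    def permitted(self, action: str) -> bool:
        seen = self.votes.get(action)
        # Inscrutable by construction: the 'rule' is whatever the data said most often.
        return bool(seen) and seen["ok"] >= seen["not_ok"]

learner = BottomUpEthics()
learner.train("swerve_to_avoid_pedestrian", "ok")
learner.train("accelerate_through_crowd", "not_ok")

print(top_down_permitted("target_noncombatant"))        # False, and you can cite the rule
print(learner.permitted("swerve_to_avoid_pedestrian"))  # True, but 'why' is only training data
```

The danger on each side is visible even at this tiny scale: the hand-written rule set is only as consistent and complete as its authors, while the learned one can point to nothing better than its data.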

It gets much more complicated. Autonomous vehicles are just the tip of the iceberg. What kind of ethics will need to apply to hybrid human-AI systems, where people work under the direction of an AI? What will happen when autonomous weapons are ready for deployment in the battlespace? And in all of this, is there room to even begin thinking of robot rights, rather than robot ethics?

If there’s something to actually worry about in all of this, it is, of course, jobs. Earlier generations of industrial change replaced old jobs with many more new jobs. This time, it seems that intelligent and autonomous systems are replacing old jobs with many fewer new jobs. As robots get smarter and more capable, fewer and fewer humans will be able to compete, and this raises tricky implications for the future of work and society. And they are ethical implications. The US is already struggling to generate the 150,000 new jobs it needs each month just to keep up with population growth.

What is the defining characteristic of an autonomous system, or robot, that mandates it operate within an ethical framework? It can’t be anything as simplistic as having mechanical interfaces with the world, as self-driving cars do. Informally, I’d suggest that any system with the potential to have an immediate and serious impact on one or more human lives should be considered a candidate. But systems like this are already among us. What do you think accepts or declines your application for a run-of-the-mill financial product? A business system that makes autonomous decisions based on a number of inputs, like your credit score. Is it behaving ethically when it offers you a loan at a higher rate to offset its calculated estimate of the risk that you won’t be able to service the debt? I’m no expert, but I know that systems like this have been with us for decades, and I’m pretty sure that in those far-off smoke-filled rooms, the last thing their designers considered was that their systems would one day assume some kind of moral agency.
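For illustration, here’s the sort of risk-based pricing arithmetic such a system might run (a hypothetical sketch: the linear score-to-risk mapping, the loading factor, and the function names are mine, not any lender’s actual model):

```python
def estimated_default_probability(credit_score: int) -> float:
    """Toy stand-in for the lender's risk model: maps a 300-850 score to 0-1."""
    score = min(max(credit_score, 300), 850)
    return 1.0 - (score - 300) / 550

def quote_rate(credit_score: int, base_rate: float = 0.05) -> float:
    """Price the loan: the riskier the applicant looks, the higher the rate."""
    risk_premium = 0.10 * estimated_default_probability(credit_score)  # arbitrary loading
    return base_rate + risk_premium

print(f"score 800 -> {quote_rate(800):.2%}")  # about 5.9%
print(f"score 550 -> {quote_rate(550):.2%}")  # about 10.5%
```

Nothing in those few lines looks like moral agency, and that is precisely the point: a decision with an immediate, serious impact on a human life can hide inside a scrap of pricing arithmetic.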

Wouldn’t it be nice if, in our efforts to prepare for the future, we also managed to identify and fix one or two inequities of the present?

“A revolution is not a bed of roses. A revolution is a struggle between the future and the past.”

Fidel Castro