Note: I am not against automation; I think AI should handle as many tasks as possible. This article is about the correct way self-driving cars should handle morally difficult situations, or we will face a dystopia and the destruction of our moral compass.

A well-known problem facing AI researchers in general, and self-driving cars in particular, is the choice the AI has to make when facing a moral dilemma. For example: suppose a self-driving car loses control of its brakes and finds itself speeding toward two pedestrians, one of them a pregnant woman and the other a man with a family. “Luckily,” the car doesn’t have to hit both of them; it can choose to hit one and save the other. The hard part is deciding which one to hit. Of course the car can’t decide by itself, so the engineers have to program it beforehand. So how should we program self-driving cars to act in these situations? Questions like this put our moral theories, our ability to practice them, and our sense of responsibility to the test. We must be very careful when choosing the answer, because the wrong answer can cause lasting damage to our ability to distinguish between what’s right and what’s wrong, and therefore negatively affect our laws and ethics.

When we try to decide what self-driving cars should do in situations like the example above, we must commit to some moral theory, and based on that theory we can determine what is moral and what is not. However, some AI researchers have decided not to concern themselves with any moral philosophy, and instead to put the decision of how self-driving cars should behave in the hands of the masses: they conduct surveys asking people to vote on what a self-driving car should do when faced with a moral dilemma. The researchers then count the votes, and the AI is supposed to pick the choice that the majority of voters selected. I think this is a big mistake for two reasons. First, knowledge is not acquired by voting; we know that the earth is not flat, but not because the majority of its population decided to believe so. You might think this is a lousy analogy, but I think moral judgment is a kind of knowledge. In any case, I am not going to dwell on this point, because I want to focus on the second reason why this approach is a bad idea, and that will be the main argument of this thesis.
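To make the survey approach concrete, here is a minimal sketch of how such a majority rule might be implemented. The scenario labels and votes are invented purely for illustration, not taken from any real study:

```python
from collections import Counter

# A minimal sketch of the survey-based approach described above.
# The scenario labels and vote counts are hypothetical illustrations.

def majority_choice(votes):
    """Return the option that received the most votes."""
    tally = Counter(votes)
    option, _count = tally.most_common(1)[0]
    return option

# Example: respondents vote on which party the car should spare.
survey_votes = ["spare_group", "spare_individual", "spare_group", "spare_group"]
print(majority_choice(survey_votes))  # -> "spare_group"
```

Whatever collects the most votes becomes the car's policy, which is exactly the move this article argues against.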

Back to moral theories: many philosophers subscribe to utilitarianism (maximize happiness and well-being for the majority of a population), and many AI researchers and engineers follow them, deciding that the correct way for self-driving cars to resolve any morally difficult situation is to choose the lesser of two evils. For example, the car should choose to hit one individual instead of a group of people, and to hit a criminal instead of a decent, law-abiding citizen.
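In code, this "lesser of two evils" rule amounts to picking the option with the smallest estimated harm. The sketch below is only illustrative; the harm scores are hypothetical, and inventing such scores at all is precisely what this article goes on to question:

```python
# A minimal sketch of the naive utilitarian decision rule described above.
# The harm scores are hypothetical; a real system would have to quantify
# harm somehow, which is the very assumption this article challenges.

def naive_utilitarian_choice(options):
    """Pick the option with the smallest total estimated harm."""
    return min(options, key=lambda option: option["estimated_harm"])

options = [
    {"action": "hit_group", "estimated_harm": 5},
    {"action": "hit_individual", "estimated_harm": 1},
]
print(naive_utilitarian_choice(options)["action"])  # -> "hit_individual"
```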

At first, this appears to make sense. Everyone agrees that suffering is bad and that we should reduce the amount of suffering in the world as much as we can. Think about it: you would choose to feel a moderate amount of physical pain, say a paper cut, over a severe pain like a broken bone.

But on the other hand, you may feel it is harsh and cold to sacrifice a few people to save many, and I think this intuitive sense of morality is correct. I am going to use the principles of utilitarianism themselves to show that it is morally wrong to apply this shallow and naïve definition of utilitarianism, as some AI engineers plan to do. Not only that, I think the law should forbid them from doing so.

When I say the naïve definition of utilitarianism, I am referring to putting the interests of the many over the interests of the few, like sacrificing one life to save two, without taking into account the long-run impact of that decision and its effect on the collective mind of society.

There is a famous but unsolved dilemma about a runaway trolley about to hit a group of five people, where you can pull a lever next to the track and divert the trolley so it hits one person instead (the trolley problem). A typical utilitarian choice is to pull the lever and hit the one person. After all, the suffering of one family is a lesser evil than the suffering of five families.

But I would argue that by sacrificing one person to save five, we would actually be sacrificing the entire society, because sacrificing a few people to save many sends a bad message and sets a bad example: that human sacrifice is acceptable. After all, killing one man to save many is basically sacrificing him, isn’t it?

I do think that utilitarianism should be about the big picture, meaning that when we try to choose the lesser of two evils, we should take into account how our choice will affect the entire world in the long run. A world where humans are expendable and treated like assets, where two humans are worth more than one, is a scary and cruel world; I don’t want to live in it, and neither should you.

My argument is based on sending messages and setting examples for society: whatever society agrees is good will keep growing, because people will start to see it as normal behavior; on the other hand, if society condemns a certain behavior, there is a chance it will shrink, or at least stop growing. For example, using physical violence to discipline children makes it more likely that they will commit violent crimes [1][2], because by beating them you implicitly communicate that violence is acceptable.

If we follow the principles of naïve utilitarianism, then we must conclude that when we have to choose between the lives of two people, and one of them happens to be more useful than the other, we should save the more useful one. For example, we must let the self-driving car hit an ordinary employee to save the CEO of a large corporation, because the number of people who will suffer from the CEO’s death is greater than the number of people who will suffer from the death of one employee. Now you may start to see the flaw in naïve utilitarianism: we have based the value of a human life on usefulness, because by definition the more useful you are, the more people will be affected by (or suffer from) your death.

This is indeed a very cruel moral theory, and it defeats itself: by sacrificing the less useful of two people, we set a very bad example that human life is not sacred but rather something you can quantify and measure, thus creating an unhealthy society where life is less valuable than it used to be. And therefore we will create much more suffering in the long run.

If we allow robots (or humans) to prefer the lives of some people over others based on their usefulness, then I am afraid the next step is to stop taking care of the elderly and the sick, because they consume more resources than they produce. To base the value of a human being on productivity or usefulness is a sort of social Darwinism: survival of the fittest, except in our case it is survival of the useful.

Besides, this kind of thinking is very selfish: you only care about someone’s life because he or she will be useful to you. (If they are useful to society, then they are useful to you, even in an indirect way, and that is probably the real reason you care about them.)

We don’t want to build a society that functions that way; we want a society that values human life and does not treat its members like cogs in a machine, where you can discard a less efficient cog in favor of a more efficient one. Think about it like this: if you had children, would you base their value on their productivity and usefulness? I don’t have kids, but I know that if I did, I would not love them based on their degree of success.

Besides all that, I don’t think you would be willing to sacrifice your life to save your boss because he or she is more useful than you. So why do you think it’s okay to sacrifice other people?

There is a well-known objection to naïve utilitarianism, and it goes as follows: if it is moral to sacrifice one person to save five, then can we kidnap a random person, take his organs, and give them to five patients who are waiting for organ donations? If we drop the naïve view, the answer becomes clear and intuitive: a society where people kidnap each other and harvest their organs is not a healthy society. Notice how we took into consideration the big picture of society as a whole.

This brings me back to the second reason why using voting as a moral philosophy is a big mistake. Like basing human value on usefulness, basing it on voting sends an even worse message: that might makes right, that strength lies in numbers. If we let the majority decide what’s wrong and what’s right, we practically let the strong decide what’s moral and what’s immoral. Let’s not mistake this for democracy. Democracy is people choosing their leaders, not deciding what’s moral and immoral; you can’t let voters decide, for example, whether slavery is acceptable. So let us not confuse democracy, where people rule themselves, with the principle of obeying the majority because they said so.

So what is the solution to the problem of self-driving car ethics? One solution I can think of is that the car should pick its “victims” randomly. When presented with two hard choices, like choosing between the life of one and the lives of many, or between a man and a woman, the least evil choice is to pick randomly, to avoid sending the negative message that the value of a human life can be quantified.
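As an illustration only, here is a minimal sketch of what such a rule might look like; the outcome labels are hypothetical, and a real system would obviously involve much more than this:

```python
import random

# A minimal sketch of the random-selection rule proposed above.
# The outcome labels are hypothetical illustrations; the point is that
# every unavoidable outcome gets equal weight, so no life is ranked
# above another.

def random_outcome(outcomes):
    """Pick one unavoidable outcome uniformly at random."""
    return random.choice(outcomes)

unavoidable_outcomes = ["hit_individual", "hit_group"]
print(random_outcome(unavoidable_outcomes))
```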

I must admit that there may be better solutions than randomly selecting the victim; however, whatever solution someone proposes, it must take into account its effect on society, and on the world as a whole, in the long run.

Osama Khader

______________________

[1] https://www.economist.com/charlemagne/2013/07/15/liebe-statt-hiebe

[2] https://www.apa.org/pi/prevent-violence/resources/violent-behavior