Morality of AI

Morality is a philosophical question that has been debated for a very long time, and not only by philosophers; it is a question almost everyone considers at some point. Philosophers, religious leaders, and others have devised many theories to resolve moral dilemmas, taking into account factors such as consequences, motivations, and social and cultural rules.

Another side of this question is moral development: how does someone develop the ability to make moral judgments, and what influences a person's moral compass? Researchers have conducted numerous studies in this field and come up with a number of theories and models.

Okay, that's about humans, but what about AI? Where will the moral laws of AI come from? Can we hard-code rules into them? What if they reach a stage where they can reprogram those rules? How do we choose between good and bad: is it passion and intuition, or reasoning? Will the development of morality in AI resemble the moral development of humans? In this article I try to open a discussion on these questions.

Moral Psychology

Anne Colby and William Damon suggested that one's moral identity is formed by synchronizing one's personal and moral goals, which they called maintaining a "unity between self and morality". On their account, an individual's behavior and actions come to be regarded as morally exemplary by their communities and by those with whom they come in contact.

Virtue ethics, an ethical theory that goes back at least to Socrates, holds that a person's morality depends on the values embedded in that person's character. Lapsley and Narvaez suggest in their papers that our moral values are governed by a set of cognitive structures that organize related concepts and integrate past events in our minds. These cognitive structures, or schemas, help us resolve moral dilemmas, and they evolve through knowledge and past experience.

Jonathan Haidt proposed the Social Intuitionist Model, which claims that moral judgments are made on the basis of socially derived intuitions. On this model, moral reasoning is largely post-hoc rationalization that serves to justify one's instinctive reactions. Other researchers have contested this idea; critics such as Augusto Blasi accept that moral intuition occurs, but argue that it does not drive every judgment.

Moral Development

Moral development is a major topic in both psychology and education. One of the best-known theories in this field was proposed by psychologist Lawrence Kohlberg, who can be considered a central figure in moral psychology. He used Jean Piaget's work as the base for his theory of the development of moral reasoning.

According to Piaget, moral development happens in two stages, but Kohlberg described it in six stages grouped into three levels. Kohlberg also held that moral development is a continual process that occurs throughout the lifespan.

To develop his theory, Kohlberg carried out research involving young children. He interviewed groups of them, presenting a series of moral dilemmas in order to determine the reasoning behind their judgments of each scenario. One of these dilemmas is the 'Heinz dilemma':

"In Europe, a woman was near death from a special kind of cancer. There was one drug that the doctors thought might save her. It was a form of radium that a druggist in the same town had recently discovered. The drug was expensive to make, but the druggist was charging ten times what the drug cost him to make. He paid $200 for the radium and charged $2,000 for a small dose of the drug. The sick woman's husband, Heinz, went to everyone he knew to borrow the money, but he could only get together about $1,000, which is half of what it cost. He told the druggist that his wife was dying and asked him to sell it cheaper or let him pay later. But the druggist said: 'No, I discovered the drug and I'm going to make money from it.' So Heinz got desperate and broke into the man's store to steal the drug for his wife. Should the husband have done that?" (Kohlberg, 1963).

Kohlberg classified the responses into the stages of reasoning in his theory of moral development. He showed that children progress from stage one, where they recognize higher authorities, rules, and punishments for breaking those rules, to stage six, where they understand that good principles make a better society and can themselves judge which rules are fair and which are not. The higher a person's stage of moral reasoning, the more cognitively mature that person is considered to be. Kohlberg found empirical evidence that a person's stage of moral reasoning advances as they grow in both education and worldly experience.

So what about AI?

As you can see, humans have many theories about morality: how we distinguish between right and wrong, and why we should do right. But will machines, or AI, have the same moral laws? Will they have the same reasons as humans to do right?

To these questions about the morality of machines, most of you will give a simple answer: humans can program moral rules into machines (like Asimov's robots), which is essentially a deontological approach to morality. But what if they reach a stage where they are able to override those rules? What method would they then use to distinguish between right and wrong behavior?
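
To make the "hard-coded rules" option concrete, here is a minimal, purely illustrative sketch in Python. The Action class, the rule names, and the permitted function are my own assumptions, not any existing system; the point is only that deontological rules act as a filter over actions, and that a system able to modify its own rules undermines that filter.

```python
# A minimal sketch (not any real robotics API) of hard-coded, Asimov-style rules.
# The Action class, the rule names, and the example actions are hypothetical.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False
    disobeys_human: bool = False

# Deontological rules: an action is forbidden if it violates any rule,
# regardless of how good its consequences might be.
RULES = [
    ("do not harm a human", lambda a: not a.harms_human),
    ("obey human instructions", lambda a: not a.disobeys_human),
]

def permitted(action: Action) -> bool:
    """Return True only if the action violates none of the hard-coded rules."""
    return all(check(action) for _, check in RULES)

if __name__ == "__main__":
    print(permitted(Action("fetch medicine")))                   # True
    print(permitted(Action("push person", harms_human=True)))    # False
    # The worry raised above: a system that can rewrite its own code
    # could simply clear RULES, after which permitted() allows everything.
    RULES.clear()
    print(permitted(Action("push person", harms_human=True)))    # now True
```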

Machines might also form an ethical theory similar to divine command theory, choosing actions according to the instructions of humans. But what would make them follow those rules is an open question. Would it be fear of punishment from humans? Machines won't exactly fear going to hell or being punished by a deity after death; the only fear they could have would be, as I mentioned earlier, of humans. And that relationship would differ from the one between humans and a god, since humans are neither all-powerful nor invisible. So if machines are to choose to do good, they must have a strong reason for it. What will stop them from harming humans or each other?

Beyond the deontological approach, we seem to judge moral actions in two further ways: by motivation (intention) or by consequence (result). Someone can act with a good motive and end up with bad consequences, and someone can act with a bad motive whose action nevertheless has good consequences. From the AI's point of view, it seems AI could have much the same motives as humans. A human's motives can be fear of punishment, a sense of duty, the benefit of the action to others (their community, family, or group), or some other reward they get out of it. AI could have the same motives, but to feel fear or the pleasure of a reward an AI would have to be sentient; it must have some desires. Another issue is that an AI's motives could be entirely about benefits to itself (ethical egoism) rather than benefits to others, since it has no built-in reason to think about others. The social behavior of humans, our thinking about others, can be seen as a quality developed over years of evolution as an aid to survival, so we cannot expect AI to have that quality by default. But this may change, since AI could have these qualities transferred to them by living and growing among humans (and it is, in any case, a fair question whether even humans think of others without considering their own benefit).
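
One way to make this talk of motives concrete, in today's terms, is to treat each motive as a reward signal that an agent weighs when choosing an action, roughly in the spirit of reinforcement learning. The sketch below is a deliberately crude toy, and the actions, signals, and weights are invented for illustration; it only shows how an "egoistic" weighting and a socially shaped weighting lead to different choices over the same actions.

```python
# A toy illustration (my own assumptions, not a real agent architecture) of the idea
# that "motives" can be modelled as weighted reward signals: punishment avoidance,
# duty, benefit to others, and benefit to self.

candidate_actions = {
    # action: (expected_punishment, duty_fulfilled, benefit_to_others, benefit_to_self)
    "share resource": (0.0, 1.0, 0.8, 0.2),
    "hoard resource": (0.3, 0.0, 0.0, 1.0),
}

def choose(weights):
    """Pick the action with the highest weighted sum of motive signals."""
    w_pun, w_duty, w_others, w_self = weights
    def score(signals):
        pun, duty, others, self_benefit = signals
        return -w_pun * pun + w_duty * duty + w_others * others + w_self * self_benefit
    return max(candidate_actions, key=lambda a: score(candidate_actions[a]))

# An "ethical egoist" weighting cares only about its own benefit...
print(choose((0.0, 0.0, 0.0, 1.0)))   # hoard resource
# ...while a weighting shaped by social experience values duty and others too.
print(choose((1.0, 0.5, 1.0, 0.2)))   # share resource
```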

Consequences, on the other hand, are more useful for judging an action after it is done than before, since they can only be observed once the action is taken. If a human (or an AI) thinks about the consequences of an action before committing it, that thinking is better regarded as part of the motivation.

Virtue ethics holds that morality is about the values in a person's character. As researchers like Lapsley and Narvaez explain, the cognitive structures in a person's mind contribute to his or her moral judgments. This theory could apply well to AI: the AI's internal structures could build a representation of character from the influences and knowledge it receives from its environment, and these cognitive structures could then guide the AI in its actions. The idea also runs parallel to Erik Erikson's theory of ego identity. Just as a human develops an ego identity, a conscious sense of self, through a lifetime of social interactions, an AI could have an ego identity that combines with the cognitive schemas controlling its moral judgments.
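
If one wanted to gesture at how such schemas might look computationally, here is a very rough sketch; the MoralSchema class, its features, and the example outcomes are all hypothetical. Evaluations of past experiences accumulate on features of situations, and new situations are judged by the associations they activate.

```python
# A purely illustrative sketch (my own invention, not an established model) of the
# "schema" idea: past events are stored as feature -> evaluation associations,
# and new situations are judged by how they activate those stored associations.

from collections import defaultdict

class MoralSchema:
    def __init__(self):
        # accumulated evaluation per feature, e.g. "deception" -> -1.0
        self.associations = defaultdict(float)

    def integrate_experience(self, features, outcome):
        """Fold a past event into the schema: each feature absorbs the outcome's valence."""
        for f in features:
            self.associations[f] += outcome

    def judge(self, features):
        """Judge a new situation by the summed valence of its features."""
        return sum(self.associations[f] for f in features)

schema = MoralSchema()
schema.integrate_experience({"helping", "honesty"}, +1.0)   # praised by others
schema.integrate_experience({"deception"}, -1.0)            # reprimanded

print(schema.judge({"honesty"}))                 # positive: feels right
print(schema.judge({"deception", "helping"}))    # mixed: a dilemma the schema must weigh
```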

Another view held by some people is that morality does not rest on rationality at all. Sometimes humans have an intuitive urge or a passion (love or empathy) to do good: doing a good deed makes us happy, while doing a bad deed makes us uneasy and unhappy. (This resembles the situation under divine moral law, where good deeds bring rewards that make us happy and bad deeds bring punishments that make us unhappy; but the intuitive feeling involves no conscious reasoning about rewards and punishments.) This is similar to David Hume's philosophical argument about morality; he believed that humans choose actions based on passion rather than reason. It is also compatible with Jonathan Haidt's Social Intuitionist Model, which claims that moral judgments are made on the basis of socially derived intuitions. This could apply very well to AI. The architecture of an AI could be built with the ability to generate such intuitions, and to generate feelings like fear or happiness. The next question is: what will those intuitions be? If the AI's architecture is similar to a human's, such intuitions will be generated through social interactions, and they will depend on the environment the AI is part of.

So the moral development of AI could proceed much like the moral development of humans, with experience and knowledge gained from society influencing the AI's moral judgments. But the way an AI gathers social influences could differ significantly from the way humans do, because its relationship to society is different. We cannot expect humans to treat AI the same way they treat each other, and the way an AI sees the world, and the emotions (generated by desires) it possesses, may differ from ours. So the reasoning or the virtues of an AI may be unique to it (which does not mean they are necessarily harmful to humans).

Another factor in the moral development of AI is the purpose we give them. Additional components can be added to an AI to help it accomplish the task it is built for, and these components or capabilities can bias that particular AI's moral judgment. The purpose can also expose the AI to a certain kind of environment, which shapes the formation of its character or ego identity. This can have a negative effect, since the moral laws an AI learns in one environment may not apply in another, and that mismatch could even be harmful to humans (military research, for instance?).

So we can see that the morality of a sentient AI will depend on its architecture, its view of the world, its environment, and its purpose. It is therefore hard to guess how an AI would react when facing a moral dilemma. But we can say that its morality will be strongly tied to human morality, since we are the ones creating the architecture, the purpose, and the environment. So we can come to (something like) a conclusion: if we have good intentions and good actions, AI will also be good (which is a bit scary).