Summary: Moral reasoning is just a post hoc search for reasons to justify the judgments people have already made. When asked why they find certain actions morally wrong, people often say they cannot think of a reason, yet they still insist the actions are wrong; numerous studies have verified this. Moral reasoning evolved as a skill to further social cohesiveness and to advance our social agendas. Even across different cultures, groups at matching socioeconomic levels show similar moral reasoning. Morality cannot be entirely constructed by children from their own understanding of harm; cultural learning must therefore play a bigger role than the rationalists had given it. Animals with larger, more complex brains also show more cognitive sophistication in making choices and judgments, which supports my view that larger brains underlie the capacity to make sound choices as well as moral judgments.

The evolution of morality is a much-debated subject in the field of evolutionary psychology. Is it, as the nativists say, innate? Or is it, as the empiricists say, learned? Empiricists, better known as blank slatists, believe that we are born with a ‘blank slate’ and thus acquire our behaviors through culture and experience. In 1987, when Jonathan Haidt began studying moral psychology, the field was focused on a third answer: rationalism. Rationalism holds that children construct moral knowledge for themselves as their reasoning abilities mature, through experiences such as disputes with other children over right and wrong.

Developmental psychologist Jean Piaget focused on the kinds of mistakes children make when water is moved between glasses of different shapes. He would, for example, pour water into two identical glasses and ask children which one held more water. They all said the glasses held the same amount. He then poured the water from one glass into a taller glass and asked the children again which glass held more water. Children younger than about 6 or 7 say the taller glass now holds more water, simply because the water level is higher. They don’t understand that moving water into a taller glass doesn’t change the amount of water. Even when parents try to explain why the amount stays the same, the children don’t grasp it, because they are not cognitively ready. Only when they reach the right age and cognitive stage do they come to understand that the amount doesn’t change, and they get there just by playing around with cups of water themselves.

Basically, the understanding of the conservation of volume isn’t innate, nor is it learned from parents. Children figure it out for themselves, but only when their minds are cognitively ready and they are given the right kinds of experience.

Piaget then applied the insight from the water experiments to the development of children’s morality. He played a marble game with children, breaking the rules and playing dumb. The children responded to his mistakes by correcting him, showing that they had the ability to settle disputes, and to respect and change rules. This knowledge grew as the children’s cognitive abilities matured.

Thus, Piaget argued that children’s understanding of morality develops like their understanding of water conservation: it is self-constructed. You can’t teach 3-year-old children the concept of fairness any more than you can teach them water conservation, no matter how hard you try. They will figure it out on their own, through disputes and by doing things for themselves, better than any parent could teach them, Piaget argued.

Piaget’s insights were then expanded by Lawrence Kohlberg, who revolutionized the field of moral psychology with two innovations: he developed a set of moral dilemmas that could be presented to children of various ages, and he scored the reasoning children gave rather than their verdicts. In one example, a man breaks into a drug store to steal medication for his ill wife. Is that a morally wrong act? Kohlberg wasn’t interested in whether the children said yes or no, but rather in the reasoning they gave when explaining their answers.

Kohlberg found a six-stage progression in children’s reasoning about the social world that matched what Piaget had observed in children’s reasoning about the physical world. Young children, for instance, judged right and wrong by whether a child was punished for an action: if an adult punished it, it must have been wrong. Kohlberg called the first two stages the “pre-conventional level of moral judgment”, which corresponds to the Piagetian stage at which children judge the physical world by superficial features.

During elementary school, most children move beyond the pre-conventional level and come to understand, and even manipulate, rules and social conventions. Kids at this stage care a great deal about social conformity and hardly ever question authority.

Kohlberg then discovered that after puberty, which is right when Piaget found that children become capable of abstract thought, some children begin to think for themselves about the nature of authority, the meaning of justice, and the reasoning behind rules and laws. Kohlberg considered these children “‘moral philosophers’ who are trying to work out coherent ethical systems for themselves”, which was the rationalist account of morality at the time. Kohlberg’s most influential finding was that the most morally advanced children were frequently those who had had more opportunities for role-taking: putting themselves in another person’s shoes and attempting to feel what the other feels from their perspective.

We can see how Kohlberg’s and Piaget’s work can be used to support an egalitarian, leftist, individualistic worldview.

Kohlberg’s student, Elliot Turiel, then came along. He developed a technique for testing moral reasoning that doesn’t require much verbal skill: telling children stories about other children who break rules, and then asking a series of yes-or-no questions. Turiel discovered that children as young as five usually say the child in the story was wrong to break the rule, but that it would be fine if a teacher gave the child permission, or if the act occurred in another school with no such rule.

But when children were asked about actions that harmed people, they gave a different set of responses. Asked whether it is OK for a girl to push a boy off a swing because she wants to use it, nearly all of the children said it was wrong, even when told that a teacher had said it was fine, and even if it occurred in a school with no rule against it. Thus, Turiel concluded, children recognize that rules preventing harm are moral rules, ones related to “justice, rights, and welfare pertaining to how people ought to relate to one another” (Haidt, 2012, pg. 11). Although young children can’t talk like moral philosophers, they were busy sorting social information in a sophisticated way. Turiel argued that this recognition of harm was the foundation of all moral development.

Yet there are many rules and social conventions that have no harm-based reasoning behind them: the numerous Old Testament laws about eating or touching the swarming insects of the earth; the many Christians and Jews who believe that cleanliness is next to godliness; the Westerners who attach moral significance to food and sex. If Turiel is right, then why do so many Westerners moralize actions that don’t harm anyone?

Because of this, it is argued that there must be more to moral development than children constructing moral knowledge as they take the perspectives of others and feel their pain. There MUST be something beyond rationalism (Haidt, 2012, pg. 16).

Richard Shweder then came along and offered the idea that all societies must resolve a small set of questions about how to order society, the most important being how to balance the needs of the individual against those of the group (Haidt, 2012, pg. 17).

Most societies choose a sociocentric, or collectivist, model, placing the needs of groups and institutions first; Western societies choose a more individualistic model, placing individuals at the center. There is a direct relationship between a society’s consanguinity rates, IQ, and genetic similarity and whether it is collectivist or individualistic.

Shweder thought that the concepts developed by Kohlberg and Turiel were made by and for people from individualistic societies. He doubted that the same results would be found in Orissa, where morality was sociocentric and there was no clear line separating moral rules from social conventions. Shweder and two collaborators came up with 39 short stories in which someone does something that would violate a commonly held rule either in the US or in Orissa. In Chicago they interviewed 180 children ranging from age 5 to 13 and 60 adults, along with a matched sample of Brahmin children and adults from Orissa and 120 people from lower Indian castes (Haidt, 2012, pg. 17).

In Chicago, Shweder found very little evidence of socially conventional thinking. In plenty of the stories, no harm or injustice occurred, and the Americans said those actions were fine. Basically, if a rule doesn’t protect an individual from harm, then it isn’t morally justified, which makes it just a social convention.

Turiel, however, wrote a long rebuttal essay pointing out that most of the stories Shweder and his two collaborators put to their samples were trick questions. For instance, in India fish is believed to stimulate a person’s sexual appetite; if a widow eats fish or other ‘hot’ foods, she will be more likely to have sex, which would anger the spirit of her dead husband and prevent her from reincarnating on a higher plane. Turiel argued that if you take into account such ‘informational assumptions’ about the way the world works, most of Shweder’s stories really were moral violations to the Indians, harming people in ways that Americans couldn’t see (Haidt, 2012, pg. 20).

Jonathan Haidt then traveled to Brazil to test which force was stronger: gut feelings about important cultural norms, or reasoning about harmlessness. Haidt and one of his colleagues worked for two weeks to translate his short stories, which he called ‘harmless taboo violations’, into Portuguese.

Haidt then returned to Philadelphia, trained his own team of interviewers, and supervised the data collection for the four groups of subjects in Philadelphia. The full study used three cities, two levels of social class (high and low), and, within each social class, two age groups: children aged 10 to 12 and adults aged 18 to 28.
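The factorial structure of that design (3 cities × 2 social classes × 2 age groups = 12 groups) can be sketched as follows. Note that the text above names only Philadelphia; Porto Alegre and Recife are the two Brazilian cities used in the original study (Haidt, Koller, & Dias, 1993), and the group labels here are purely illustrative:

```python
from itertools import product

# Factorial design: 3 cities x 2 social classes x 2 age groups = 12 groups.
cities = ["Porto Alegre", "Recife", "Philadelphia"]
social_classes = ["high", "low"]
age_groups = ["children (10-12)", "adults (18-28)"]

groups = list(product(cities, social_classes, age_groups))
assert len(groups) == 12  # 3 * 2 * 2 interview groups in total

for city, cls, age in groups:
    print(f"{city:13s} | {cls:4s} class | {age}")
```

The four Philadelphia groups mentioned above are simply the 2 × 2 cross of social class and age group within that one city.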

Haidt found that the responses to the harmless taboo stories could not be attributed to the way he posed the questions or trained his interviewers, since he used two questions taken directly from Turiel’s research and reached the same conclusions Turiel had. Upper-class Brazilians looked like Americans on these stories (I would assume because upper-class Brazilians have more European ancestry). In one example about breaking a school dress code by wearing normal clothes, however, most middle-class children thought it was morally wrong to do so. The pattern supported Shweder: the size of the moral-conventional distinction varied across cultural groups (Haidt, 2012, pg. 25).

The second thing Haidt found was that people responded to the harmless taboo stories just as Shweder predicted: upper-class Philadelphians judged them to be violations of social conventions, while lower-class Brazilians judged them to be moral violations. Basically, well-educated people in all the areas Haidt tested were more similar to one another in their responses to the harmless taboo stories than they were to their lower-class neighbors.

Haidt’s third finding was that all of these differences remained even when controlling for perceptions of harm. He had included a probe question at the end of each story: “Do you think anyone was harmed by what [the person in the story] did?” If the cultural differences were caused by perceptions of hidden victims, as Turiel had proposed, then they should have disappeared when Haidt removed the subjects who answered yes to that question. But when he filtered out those who said yes, the cultural differences got BIGGER, not smaller. This was very strong evidence for Shweder’s claim that morality goes beyond harm: most of Haidt’s subjects said that the harmless taboo violations were universally wrong, even though they harmed nobody.
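The logic of that control step can be sketched with toy records. Everything below is hypothetical and illustrative only (the real data are in the original study); the point is just the procedure: drop every subject who answered yes to the harm probe, then see whether the groups still differ in how often they condemn the act:

```python
# Hypothetical toy records; field names and values are illustrative only.
responses = [
    {"group": "upper-class", "judged_wrong": False, "perceived_harm": False},
    {"group": "upper-class", "judged_wrong": True,  "perceived_harm": True},
    {"group": "lower-class", "judged_wrong": True,  "perceived_harm": False},
    {"group": "lower-class", "judged_wrong": True,  "perceived_harm": True},
]

def condemnation_rate(records, group):
    """Share of subjects in `group` who judged the act wrong, counting only
    those who said nobody was harmed (i.e. after the harm-probe filter)."""
    no_harm = [r for r in records
               if r["group"] == group and not r["perceived_harm"]]
    if not no_harm:
        return 0.0
    return sum(r["judged_wrong"] for r in no_harm) / len(no_harm)

# If the remaining (no-harm) subjects still differ by group, the moral
# judgment cannot be explained by perceived harm alone.
print(condemnation_rate(responses, "upper-class"))  # 0.0 on this toy data
print(condemnation_rate(responses, "lower-class"))  # 1.0 on this toy data
```

On this toy data the gap between groups survives the filter, which is the shape of the result Haidt reports: the differences did not shrink when harm-perceivers were removed.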

Shweder had won the debate. Haidt had replicated Turiel’s findings using Turiel’s methods, showing that those methods worked on people like Turiel himself: educated Westerners who grew up in an individualistic culture. But Haidt had also shown that morality varied across cultures and that, for most people, morality extends beyond issues of harm and fairness.

It was hard, Haidt argued, for a rationalist to explain these findings. How could children self-construct moral knowledge about disgust and disrespect from their private analyses of harmfulness (Haidt, 2012, pg. 26)? There must be other sources of moral knowledge, such as cultural learning, or innate moral intuitions about disgust and disrespect, for which Haidt argued years later.

Yet there were surprises in the data. Haidt had written the stories carefully to remove all conceivable harm to other people. Nevertheless, in 38 percent of the 1,620 times that subjects heard a harmless-offensive story, they claimed that somebody was harmed.

It was obvious in Haidt’s sample of Philadelphians that many subjects had invented these victims as post hoc fabrications. People usually condemned the action very quickly and didn’t need much time to decide what they thought, but they often took a long time to come up with a victim for the story.

Haidt had also trained his interviewers to correct people when they made claims that contradicted the story. Even when subjects realized that the victim they had constructed in their heads was not real, they still refused to say the act was fine; instead, they kept searching for other victims. They simply could not think of a reason why the act was wrong, even though they intuitively knew that it was (Haidt, 2012, pg. 29).

The subjects were reasoning, but they weren’t reasoning in search of moral truth; they were reasoning in support of their emotional reactions. Haidt had found evidence for the philosopher David Hume’s claim that moral reasoning is often a servant of the moral emotions. Hume wrote in 1739: “Reason is, and ought only to be the slave of the passions, and can never pretend to any other office than to serve and obey them.”

Judgment and justification are separate processes. Moral reasoning is just a post hoc search for reasons to justify the judgments people have already made.

The two most common answers to where morality comes from are that it is innate (the nativist position) or that it comes from childhood learning (the empiricist position, also known as “social learning theory”). The empiricist position, however, is incorrect.

The moral domain varies by culture. It is unusually narrow in Western, educated, and individualistic cultures. Sociocentric cultures broaden the moral domain to encompass and regulate more aspects of life.

People sometimes have gut feelings – particularly about disgust – that can drive their reasoning. Moral reasoning is sometimes a post hoc fabrication.

Morality can’t be entirely self-constructed by children based on their understanding of harm. Cultural learning or guidance (social learning theory; Rushton, 1981) must play a larger role than the rationalists had given it.

(Haidt, 2012, pg. 30 to 31)

If morality doesn’t come primarily from reasoning, then that leaves a combination of innateness and social learning. Basically, intuitions come first, strategic reasoning second.

If you think that moral reasoning is something we do to figure out the truth, you’ll be constantly frustrated by how foolish, biased, and illogical people become when they disagree with you. But if you think about moral reasoning as a skill we humans evolved to further our social agendas – to justify our own actions and to defend the teams we belong to – then things will make a lot more sense. Keep your eye on the intuitions, and don’t take people’s moral arguments at face value. They’re mostly post hoc constructions made up on the fly, crafted to advance one or more strategic objectives (Haidt, 2012, pg. xx to xxi).

Haidt also writes on page 50:

As brains get larger and more complex, animals begin to show more cognitive sophistication – choices (such as where to forage today, or when to fly south) and judgments (such as whether a subordinate chimpanzee showed proper deferential behavior). But in all cases, the basic psychology is pattern matching. … It’s the sort of rapid, automatic, and effortless processing that drives our perceptions in the Müller-Lyer illusion. You can’t choose whether or not to see the illusion; you’re just “seeing that” one line is longer than the other. Margolis also called this kind of thinking “intuitive”.

This shows that moral reasoning arose with bigger brains, and that the choices and judgments we make evolved because they better ensured our fitness, not because they tracked ethical truth.

Moral reasoning evolved to increase our fitness on this earth. The field of ethics justifies whatever benefits group and kin selection with minimal harm to the individual. That is, the explanations people produce through moral reasoning are just post hoc searches for reasons to justify gut feelings whose origins they cannot articulate.

Source: The Righteous Mind: Why Good People Are Divided by Politics and Religion