The 2016 US election was a powerful reminder that beliefs tend to come in packages: socialized medicine is bad, gun ownership is a fundamental right, and climate change is a myth — or the other way around.

Stances that may seem unrelated can cluster because they have become powerful symbols of membership of a group, says Dan Kahan, who teaches law and psychology at Yale Law School in New Haven, Connecticut. And the need to keep believing can further distort people’s perceptions and their evaluation of evidence.

Here, Kahan tells Nature about the real-world consequences of group affinity and cognitive bias, and about research that may point to remedies. This interview has been edited for length and clarity.

Why do people join a given group?

One measure is how individualistic or communitarian people are, and how egalitarian or hierarchical.

Hierarchical and individualistic people tend to have confidence in markets and industry: those represent human ingenuity and power. People who are egalitarian and communitarian are suspicious of markets and industry. They see them as responsible for social disparity.

It’s natural to see things you consider honourable as good for society, and things you consider base as bad. Such associations will motivate people’s assessment of evidence.

Can you give an example?

In a study, we showed people data from gun-control experiments and varied the results [1]. People who were high in numeracy always saw when a study supported their view. If it didn’t support their view, they didn’t notice — or argued their way out of it.

We’ve done studies where we show people a video of a protest, and when we say it’s protesters outside an abortion clinic, they’ll say they see people pushing each other, blocking an entrance [2]. The same people would say it was a non-violent protest if we said it was a military recruitment centre where people were protesting against the exclusion of gays and lesbians in the military, when that was an issue.

How does one break through these biases?

It’s not always possible. The key is to find occasion for engaging these issues that’s removed from the normal, political one. Southeast Florida is a good example. People there are polarized about climate change, but there is active engagement — in the Southeast Florida Regional Climate Change Compact — to adapt and protect the region.

They work hard to shut out the style of debate that is characteristic of the national climate-change issue. And whether people believe in climate change or not, when they’re involved in this local process — are they going to move historic route A1A inland, are they going to build a sea wall? — no one feels in this state of group competition.

Are there other promising examples?

One of my collaborators, John Gastil [at Pennsylvania State University in University Park], has done studies on deliberation during electoral contests — say, over referenda on marijuana legalization or GM foods. He asks a small, diverse group to come up with the information people need to sensibly make up their minds. The group ends up forming an identity of its own, distinct from the ones its members hold outside it.

Gastil also shows that when people are apprised of how a group reasoned, the effect can be leveraged — people feel represented by these panels.

We would need something much bigger to attack issues like climate change. But it shows that people can move into some other role than just being a member of these affinity groups. Other studies show that an effective way to overcome group conflict is to have people on both sides engage, but they must first identify the strongest arguments from the other side [3].

People tend to believe that their group’s commitment to a position is deeper and more uniform than it actually is. If you give people licence to make the counter-argument, others in their group will observe that it’s not as hazardous as they thought to express it. The other side can see ‘OK, they actually know what I’m talking about’. Those kinds of interactions can lead to convergence, to people starting to reason outside of the mindset of ‘What team am I on?’.

We’ve also found that people who are science-curious, a measure distinct from science literacy, are less likely to become polarized [4]. Maybe those people could be leveraged to be carriers of ideas within their cultural groups.

What about changing the framing of arguments to fit someone’s world views?

We did a study to see if framing would work. We had people read a climate-change study, and people polarized on it. And when we had people first read about a call for greater carbon-emission limits, opinions polarized more. But if they first read about geoengineering, opinions were less polarized [5]. People with positive associations with commerce and industry could see a more congenial climate-change narrative: it wasn’t ‘Game over’. It was ‘Yes, we can’.

There probably are things you can do in the world akin to that. But the idea that you can just change words and make a difference is probably too optimistic.

You’ve got to show people that individuals who they respect, who they believe know what they’re talking about, are evincing confidence in the science through actions and words.

Scientists will have to testify to the US Congress and President Trump’s team. How can they get science across?

I don’t think that members of Congress don’t understand the science. The problem is that having a position is associated with being on a team.

Members of Congress engage with issues as professional politicians. What you can tell them is that their constituents have a different understanding of this than they used to.