In online shaming, Jordan saw a clue. “I started thinking about friends I knew who were involved in social justice,” she says. “There was a lot of moralistic speech that seemed like it was focused on communicating one’s own position.” In other words, maybe third-party punishment is primarily a signal that tells onlookers that you are trustworthy, in the same way that a peacock’s tail or a stag’s antlers signal their owner’s genetic quality. It says: If I’m willing to punish selfishness, you know I’m not going to act selfishly toward you.

This only works if punishing is an honest signal of trustworthiness, if those who do it are actually more trustworthy than those who don’t. Jordan argues that this is the case because the same factors that incentivize people to actually be trustworthy also incentivize them to punish others who behave badly. For example, you might be more likely to treat peers well if you interact with them repeatedly (contrast a permanent colleague with a summer intern) or if you belong to an institution that enforces codes of conduct (like the military or religious institutions). In these situations, you also gain more benefits from punishing (because you’re signaling your stance to a large group of long-term peers) and pay fewer costs (since more people have your back).

Together with David Rand, a psychologist who studies cooperation, Jordan tested these ideas by recruiting hundreds of volunteers through Amazon’s Mechanical Turk and having them play a trust game in two stages. In phase one, a Helper decides whether to share money with a Recipient; if the Helper is selfish, a Punisher can decide to penalize them. A Chooser watches all of this. In phase two, the Chooser gets a pot of money and can invest part of it with the Punisher. That investment is tripled, and the Punisher decides how much of it to return to the Chooser. So the Chooser must evaluate how much they trust the Punisher, based on what the Punisher did in the first phase.
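The phase-two payoffs can be sketched in a few lines of code. Only the tripling of the investment comes from the text; the endowment size and return fractions below are illustrative assumptions, not the study’s actual parameters.

```python
def chooser_payoff(endowment: float, investment: float, return_fraction: float) -> float:
    """Payoff to the Chooser in phase two of the trust game.

    The Chooser keeps whatever they don't invest; the invested amount is
    tripled, and the Punisher sends back `return_fraction` of that tripled pot.
    """
    assert 0 <= investment <= endowment
    tripled = 3 * investment
    returned = return_fraction * tripled
    return (endowment - investment) + returned

# With a hypothetical $10 endowment: investing everything with a Punisher
# who returns half the tripled pot beats not investing at all, but investing
# with one who returns nothing loses the entire stake.
print(chooser_payoff(10, 10, 0.5))  # trustworthy partner:   15.0
print(chooser_payoff(10, 0, 0.5))   # no investment:         10.0
print(chooser_payoff(10, 10, 0.0))  # untrustworthy partner:  0.0
```

This is why the Chooser’s decision hinges entirely on how trustworthy they judge the Punisher to be: investing can pay off handsomely or wipe them out.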

Jordan found that the Choosers sent more money to the Punishers if they actually punished the selfish Helpers. “They treated punishment as a sign that you’re likely to be nice,” she says. And they were right to do so because the Punishers who punished ended up returning more money to the Choosers. They were, indeed, more trustworthy.

Jordan then replayed the experiments with a twist. This time, in phase two, the Choosers played with either the Helpers or the Punishers from phase one. In this set-up, punishing is no longer the only signal of trustworthiness; helping can convey the same information. “We predicted that people should be less inclined to punish if they have the opportunity to look good in another way,” says Rand. And they were right: This time, the Choosers were no longer swayed by punishment, and the Punishers were less likely to dole it out.