Estimated reading time: 10 minutes.

Most pro-life people have at least one major goal in common: we want to see abortion become illegal. So why do we disagree so much about how we should get there? Primarily, it’s that we have different beliefs about what is effective. If we all agreed that a given strategy had the highest probability of succeeding in the shortest amount of time, then we could probably all get on board with that strategy.[1]

There are many reasons why we disagree so much about what is likely to work. People have different experiences, different personalities, different strengths, different mentors, confirmation bias, misunderstandings, sin, and good old-fashioned stubbornness. It’s pretty difficult to do much about these things. But there’s another factor that contributes to our disagreements that we can actually do something about, because it is a very correctable error in reasoning: We have an ingrained impulse to be results-oriented. If we all learned to recognize that impulse, we would actually have fewer disagreements.

Being results-oriented means believing explicitly or implicitly that a given action is praiseworthy or blameworthy on the basis of the results of the action. In other words, when someone is trying to evaluate the effectiveness of an action, an argument, a method, or anything else, they look at the results. If it had a positive result, they declare the action/argument/method to be effective. If it didn’t have a positive result, they declare it to be ineffective.

Results-Oriented Reasoning Illustrated By Poker

I first started to think about results-oriented reasoning in the context of card games such as poker, Hearthstone, and Magic: The Gathering. A YouTuber named Sean Plott (also known as Day9) explained it really well in the context of poker. (Here’s a link to the video. Language warning.) He said that when you’re playing poker, you need to focus on your inputs, not your outputs. In other words, you need to focus on making correct decisions, not on whether or not you win the hand.

Imagine you’re playing Texas Hold ‘Em. You know what your opponent has and you know what you have. If the next card off the top of the deck (the river) is the Ace of Spades, you will win, but if it’s any of the other forty cards, you will lose. That gives you a one-in-forty-one chance of winning the hand, but you have to go “all in” to see the last card. There is actually a correct decision here, regardless of whether the top card is the Ace of Spades. Assuming this isn’t your last chance to get back into the game, and the pot isn’t offering you better than forty-to-one on your money, it is objectively, mathematically foolish to call. If you don’t call and the next card happens to be the Ace of Spades, you still made the right decision. If you do call and the next card happens to be the Ace of Spades and you win as a result, you still made the wrong decision.
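The judgment that calling is foolish can be made precise with an expected-value calculation. Here is a minimal sketch in Python; the pot and call sizes are hypothetical numbers chosen purely for illustration, and the model ignores real Hold ‘Em details like side pots.

```python
# Expected value of calling an all-in with one card to come:
# one winning card (the Ace of Spades) against forty losing cards.

def call_ev(p_win, pot, call):
    """EV of calling: win the pot with probability p_win,
    lose the call amount with probability 1 - p_win."""
    return p_win * pot - (1 - p_win) * call

p_win = 1 / 41   # one way to win, forty ways to lose
pot = 100        # chips already in the pot (hypothetical)
call = 100       # chips you must risk to see the last card (hypothetical)

ev = call_ev(p_win, pot, call)
print(f"EV of calling: {ev:.2f} chips")  # negative, so folding is correct
```

Unless the pot offered more than forty times the call amount, the expected value stays negative, which is why folding is the right decision no matter which card actually comes.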

You can’t just look at whether or not you won the hand to determine whether or not you played correctly, because you aren’t entirely in control of the results (what Day9 calls your outputs). If you play correctly, you can maximize your odds of success, but your activity is not the only relevant causal force. One of the problems with results-oriented reasoning is that there are multiple causal forces happening in most situations, and we have no control over most of them. We might not even be aware of some of them!

If you want to determine who the best chess player is between two people, you can pretty safely trust the results. Chess isn’t a game of chance; it is pure strategy, so the better strategist should win. Someone could have an off-day, but if one player consistently beats the other, you can safely rely on those results to tell you who the better player is.

Results-Oriented Reasoning Applied to Pro-Life Outreach

But results can only tell you so much in a complicated interpersonal dynamic such as a conversation about abortion. Let’s imagine my colleague Rachel Crawford and I both have ten conversations with pro-choice people on a given outreach day on a college campus. Let’s suppose I see five people change their minds and Rachel sees zero. If we’re focused only on our outputs, I might conclude that I am a much more effective pro-life advocate than Rachel, or that I was using superior arguments. But there are many causal forces or variables in this evaluation besides the inputs from Rachel and me, and there are plenty of possible explanations for why I changed more minds than she did. For instance, she could have happened to talk to people who were more closed-minded than the people I talked to, or she could have been catching students who were on their way to class and couldn’t talk as long. It might even be that I was grumpy and off-putting that day, but happened to talk to some unusually open-minded people.

We cannot look at such results as proof. There are simply too many variables we don’t know about. Sometimes foolish actions are successful, and sometimes wise actions are unsuccessful. Sometimes we get lucky. Sometimes we get unlucky. We shouldn’t imitate foolish actions just because we have seen them work.
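To see how much luck alone can separate two people’s results, here is a toy simulation, not data from any real outreach. It assumes, purely hypothetically, that two equally skilled advocates each have ten conversations and that every conversation has the same 20% chance of changing a mind.

```python
import random

random.seed(0)  # fixed seed so the illustration is reproducible

P_CHANGE = 0.20        # hypothetical per-conversation persuasion rate
CONVERSATIONS = 10     # conversations per advocate per outreach day
TRIALS = 100_000       # simulated outreach days

def minds_changed():
    """Minds changed by one advocate in one outreach day."""
    return sum(random.random() < P_CHANGE for _ in range(CONVERSATIONS))

# Count days where two identical advocates differ by three or more minds.
big_gaps = sum(abs(minds_changed() - minds_changed()) >= 3
               for _ in range(TRIALS))
print(f"Days with a gap of 3+ minds between equally skilled advocates: "
      f"{big_gaps / TRIALS:.1%}")
```

Even with identical skill, chance alone produces a gap of three or more changed minds on a sizable fraction of days (around one in six under these assumptions), so a single day’s scoreboard says very little about who argued better.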

Here’s What I’m Not Saying…

I’m not saying we should completely ignore results. I’m saying we should be humble and cautious before coming to conclusions based on them. We need to actively look for alternative ways to explain our results.

The problem with results-oriented reasoning has everything to do with time. Whether a decision was wise at the moment it was made depends on what we knew at the time. This is why, if you fold in the poker example without knowing that the Ace of Spades was the next card, you made the right decision. Poker is much easier to evaluate than most decisions, because poker is much simpler, but the principle still applies. I can look back at a conversation with a pro-choice person and say, “Yeah, even though the conversation went fine, I really should have been slower to speak and quicker to listen at this point, and I should have known that at the time.”

Let’s imagine Rachel wants to dress up in a dinosaur outfit for our college outreach event in November. I think that the cons of being weird and off-putting outweigh the pros of grabbing people’s attention or making them laugh. Rachel disagrees. My evaluation before the November outreach is that it’s unwise to wear the costume, but I’m open to experiments, so I tell her she can give it a try. Let’s imagine that it is somewhat successful. It goes better than I expected, but it still creates some problems. Rachel and I need to have a different evaluation when we discuss whether she should do it at a December outreach. Let’s suppose I think it is still better not to wear it, but I’m a little less confident because I’m thinking about the results of the November outreach.

This is critical: I’m not being results-oriented by considering the results of the November outreach. Those results aren’t affecting my evaluation of whether or not the decision was wise based on what we knew at the time, but they do affect my evaluation of whether we should do it again in December now that we know more. It is not results-oriented to use the results of the November outreach to contribute to our analysis of what we should do at the December outreach. But it would be results-oriented to use the results of the November outreach to make a claim like, “Aha! See? The dinosaur method was a wise decision because look, it got good results!” If Rachel said this after the November outreach, she would be using her new knowledge to justify a decision that she made before she had the new information. Since she didn’t have access to those results when she first put on the costume, she cannot claim it was a wise decision on that basis.

Why Is This Problem So Common?

I have a theory for why anecdotes are so rhetorically effective: it’s because human beings are inherently prone to engage in confirmation bias (the tendency to search for, interpret, or prioritize information in a way that confirms one’s beliefs). Changing your mind about something is hard. It means admitting you were wrong. Sometimes it means changing your life. A personal anecdote is a convenient excuse to believe something you want to believe. It feels like it confirms your view about an issue. I’ll give a few hypothetical examples, just to illustrate what this process might look like in specific cases.

Suppose a person who was already suspicious of vaccines heard of a case of someone who was vaccinated and later became autistic. It would be natural (though illogical) for that person to become even more suspicious of vaccines. In this case, anecdotal bias teams up with the post hoc fallacy (“after, therefore because of”) to make a person more confident of her view without good reason.

Suppose a Christian suspected that Islam was really a vast conspiracy to take over the world and that every single Muslim adult was in on it. Every case of terrorism from a radical Muslim might seem like confirmation of his bias against Muslims. After all, the act of terrorism is consistent with the theory that all Muslim adults are pro-terrorism (if that sentence offends you, reread it more carefully). But notice the reasoning problem: there are other theories that can explain it just as well! It could be that some Muslim adults are kind, peaceful, and anti-terrorism, and that some Muslims are pro-terrorism. That theory explains a single act of radical Muslim terrorism just as well as the more suspicious one, so that act doesn’t actually lend any evidence to the more suspicious theory.

Suppose an atheist believed that all Christians were obnoxious, self-righteous hypocrites. He has a conversation with his Christian grandmother, who fears for her grandson’s soul and well-meaningly but imperfectly tries to share the gospel with him. The atheist knows his grandmother has a drinking problem, and concludes that he was right: of course she’s a hypocrite, because all Christians are.

In all of these cases, the person becomes more confident of their belief by focusing on the fact that the belief explains a specific case, without considering alternative theories. I’m not saying that most people go through careful, conscious reasoning processes like these. I’m saying most people go through the same type of reasoning process subconsciously, and it feels a lot more reasonable at the time. If you strive for intellectual honesty and personal virtue, you have to be on the lookout for this mistake.

“So, I’m just not supposed to be confident about anything?”

You may find this argument to be frustrating and unsatisfying. I think that frustration is part of why people rely too heavily on anecdotes. We have a natural desire to be confident about the things we believe. But some of the things we believe are not the kinds of things we should be that confident about. If you’re only really confident about things for which you have really good justification, then you can’t be really confident as often. I’m really confident that abortion is wrong, because the arguments against abortion are clearly stronger than the arguments for it. I’m not nearly as confident about which methods of pro-life work are best, because it is much harder to get the kind of data you need in order to be justifiably confident about a method.

Do the best that you can, seek out the strongest arguments against your view (reject confirmation bias), try to reason honestly and go where truth leads, and be comfortable with uncertainty. It is better to hold your views loosely than to be overconfident.

Endnotes

[1] With an obvious exception for a strategy that would require doing immoral things, such as attacking abortion practitioners. Even if we believed we could stop abortion by doing that, we wouldn’t.

Question: Are you convinced of the problem? If so, do you have any examples of ways you’ve struggled with it? If you aren’t convinced, why not?







The post “Personal Experiences Don’t Prove Anything” originally appeared at the Equal Rights Institute blog.

The preceding post is the property of Timothy Brahm (apart from quotations, which are the property of their respective owners, and works of art as credited; images are often freely available to the public) and should not be reproduced in part or in whole without the expressed consent of the author. All content on this site is the property of Equal Rights Institute unless the post was written by a co-blogger or guest, and the content is made available for individual and personal usage. If you cite from these documents, whether for personal or professional purposes, please give appropriate citation with both the name of the author (Timothy Brahm) and a link to the original URL. If you’d like to repost a post, you may do so, provided you show only the first three paragraphs on your own site and link to the original post for the rest. You must also appropriately cite the post as noted above. This blog is protected by Creative Commons licensing. By viewing any part of this site, you are agreeing to this usage policy.