“It is now the scientific consensus that our risk-avoidance mechanism is not mediated by the cognitive modules of our brain, but rather by the emotional ones. This may have made us fit for the Pleistocene era. Our risk machinery is designed to run away from tigers; it is not designed for the information-laden modern world.” – Nassim Nicholas Taleb

If you haven’t read Daniel Kahneman’s “Thinking, Fast and Slow” (2011), you should. If you are going to be a policy writer, doctor, judge, consultant, or hold any other position that gives you influence over others’ lives, you must. He explains the irrational decisions you make in your life, your business, and your policies. Think you’re entirely logical and immune to these errors? Take the quiz we have compiled, record your responses, and compare them below. For your own benefit, consider noting next to each question how certain you feel about your answer. (All questions are taken directly from, or adapted from, “Thinking, Fast and Slow”.)

If you have already studied his ideas extensively, or know that you have logical flaws, we challenge you to take the quiz anyway (remembering to record your responses and your level of certainty).

We’ll publish the results next month, and more responses mean better data. The quiz is only ten questions, with an estimated completion time of about five minutes. We don’t expect you to do any onerous calculations. We’ll wait.

…

We’ll come back to the answers shortly, but let us begin with a discussion of where this work fits into the modern world. The world is changing, and people expect more from those with authority. Economists are no longer expected merely to describe events that have already occurred; they are meant to build models that predict future events (which they are notoriously bad at doing). Politicians are no longer expected simply to provide opportunities; they are mandated to steer the populace towards good outcomes. This can be seen in the Danish “Fat Tax”, Australian “Plain Packaging”, and a host of other Pigovian tax measures. When more information isn’t convincing people to make ‘better’ choices, politicians need to be able to ‘nudge’ them towards doing so.

“Thinking, Fast and Slow” describes many of the cognitive errors people make when they believe they are making rational decisions, and in some cases gives you tools to become less prone to those same errors. But as Kahneman himself states in the introduction, by nature you don’t know when you’re using these shortcuts. The most useful outcome of reading his work may be catching these errors in the thinking of your colleagues and improving your team’s performance. Below we discuss a selection of the errors from the book, chosen for how easily they could be implemented as an online test. Once again, we strongly recommend giving the book a read to ensure you achieve the best outcomes for yourself and those relying on you.

To the answers:

1. Linda

Question three and question eight are about Linda. Linda is an artsy type, with activist tendencies and a stronger likelihood of being a teacher than an insurance salesperson. These questions test your understanding of overall probability. Below we have a pictorial representation of 1,000 universes of Linda (the area inside the box), a blue circle representing the universes where Linda is a bank teller, and a yellow circle representing the universes where she is a feminist.

Maybe the relationship even looks like this:

Many people rank “feminist bank teller” above “bank teller” in both questions, but it cannot be more probable for Linda to be a feminist bank teller than for her to be a bank teller: by definition, if she is a feminist bank teller she is a bank teller, but the converse is not true.
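The subset relationship can also be checked with a small simulation, mirroring the 1,000-universe picture above. (The probabilities below are hypothetical, chosen purely for illustration; the conclusion holds regardless of what they are.)

```python
import random

random.seed(1)

# Simulate 1,000 "universes" of Linda, as in the diagram above.
# The probabilities here are made up purely for illustration.
universes = []
for _ in range(1000):
    bank_teller = random.random() < 0.05  # Linda is a bank teller
    feminist = random.random() < 0.80     # Linda is a feminist
    universes.append((bank_teller, feminist))

tellers = sum(1 for t, f in universes if t)
feminist_tellers = sum(1 for t, f in universes if t and f)

# Every feminist bank teller is also a bank teller, so this count
# can never exceed the plain bank-teller count.
print(feminist_tellers <= tellers)  # True in every possible simulation
```

However the circles are drawn, the yellow-and-blue overlap can never be larger than the blue circle itself.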

2. A New Disease: Framing

Questions four and seven are about how the framing of a question impacts your decisions. In a population of 600 people, 400 dying or 200 being saved are identical outcomes. Program A and program X are the same, as are program B and program Y. Despite identical outcomes, there is a significant chance that the wording changed your policy choices. Now that you know they’re the same options, which do you choose?

Program J: 400 people will definitely die and 200 people will definitely be saved.

Program K: A one third probability that nobody will die and everybody will be saved, and a two thirds probability that everybody will die and nobody will be saved. How confident are you in this decision compared to your previous answers?
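The equivalence of the two framings is easy to check arithmetically. A quick sketch with the figures from the question (a population of 600; exact fractions are used to avoid rounding issues):

```python
from fractions import Fraction

POPULATION = 600

# Framing 1 (lives saved): Program A saves 200 for sure;
# Program B saves everyone with probability 1/3, nobody with probability 2/3.
saved_A = 200
expected_saved_B = Fraction(1, 3) * POPULATION

# Framing 2 (lives lost): Program X loses 400 for sure;
# Program Y loses everyone with probability 2/3, nobody with probability 1/3.
died_X = 400
expected_died_Y = Fraction(2, 3) * POPULATION

print(saved_A == POPULATION - died_X)                    # True: A and X are the same outcome
print(expected_saved_B == POPULATION - expected_died_Y)  # True: B and Y match in expectation
```

Only the wording differs: “200 saved” and “400 dead” are the same sentence about the same 600 people.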

3. Predicting Future Performance

There is no correct answer to question five, but there is a wrong answer:

$55,000, $25,300, $114,400, $53,900, $26,400

The data set provides a single year of figures and no other context for the variability. In the real world, a substantial portion of the performance of anything is due to luck, and the further you are from the average, the less likely you are to continue at your current level of performance (regression towards the mean); it is unlikely that store three will perform so extremely well twice in a row. You can look for reasons that store three dominated store two, but be wary of this post-hoc analysis; back-explaining results is not a great tool for predicting future performance. Taking luck and variability into account, a more reasonable answer would look like:

$55,000, $40,150, $84,700, $54,450, $40,700.

(Calculated assuming that half of a store’s performance was due to luck.)
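The adjusted figures above can be reproduced by regressing each observation partway back towards the group mean. A sketch of that calculation, under the stated assumption that half of a store’s performance is luck:

```python
# Shrink each store's observed result halfway back towards the group mean,
# reflecting the assumption that 50% of performance is due to luck.
observed = [55_000, 25_300, 114_400, 53_900, 26_400]
mean = sum(observed) / len(observed)  # 55,000

luck_share = 0.5  # assumed fraction of performance due to luck
predicted = [round(mean + (1 - luck_share) * (x - mean)) for x in observed]

print(predicted)  # [55000, 40150, 84700, 54450, 40700]
```

The more extreme the result, the larger the correction: store three loses nearly $30,000 of its apparent advantage, while store one (sitting exactly on the mean) is unchanged.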

4. Medical Tests of 99% Accuracy

Gut feelings about probability are notoriously unreliable, even for professional statisticians. The correct answer is roughly 0.9%, calculated with Bayes’ rule. Informally, the probability of something after new information is:

New probability = old probability × likelihood.

Where the likelihood describes how the new information changes the probability. In the medical case, new probability = 1/11,000 × likelihood. Even though a test with an accuracy of 99% has a very high likelihood, it is still being multiplied by 1/11,000; the end result is a low probability.

Similarly with Linda: her description fits “feminist bank teller” better than “bank teller” (a higher likelihood), but there are far fewer feminist bank tellers in the world than bank tellers, so the “old probability” is much lower to begin with. When we make decisions we aren’t very good at taking this base-rate information about the world into account.

Below is a table going into more depth on the medical test question.

We’ve used a population of 1,100,000 people to give nice round numbers. As stated, the test is 99% accurate (it correctly identifies 99% of the people who have the disease and 99% of the people who do not). Looking at the “tests positive” row, only 99 of the 11,098 people who test positive in this population actually have the disease, or 0.89%. If this happens to you, find a new doctor.
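The counts in that breakdown can be reproduced directly from the stated figures (1,100,000 people, a 1-in-11,000 disease, a 99% accurate test):

```python
population = 1_100_000
prevalence = 1 / 11_000
accuracy = 0.99  # both sensitivity and specificity, as stated

diseased = population * prevalence          # 100 people have the disease
healthy = population - diseased             # 1,099,900 people do not

true_positives = diseased * accuracy        # 99 sick people test positive
false_positives = healthy * (1 - accuracy)  # ~10,999 healthy people test positive

total_positives = true_positives + false_positives  # ~11,098 positives in total
p_disease_given_positive = true_positives / total_positives

print(round(true_positives))                      # 99
print(round(total_positives))                     # 11098
print(round(100 * p_disease_given_positive, 2))   # 0.89 (percent)
```

The false positives from the huge healthy population swamp the true positives from the tiny sick one, which is why a “99% accurate” positive result still leaves you overwhelmingly likely to be healthy.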

5. Appetite for Risk

The final questions (9 and 10) ask you to make two decisions in tandem, which means there are four possible combined outcomes.

The interesting portion of these questions lies in comparing B and C. When looking at one option at a time, individuals’ risk aversion means they’re likely to select option B; but when looking at the combined outcomes, option C dominates option B: both the likely bad outcome and the unlikely good outcome are superior.

The takeaway here is that although your life may be a series of one-off decisions, those decisions compound with the other decisions you have made, and will make in the future. Your appetite for risk may be leading you to worse outcomes.
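To illustrate how combined outcomes can dominate, here is a sketch using the classic figures from the book’s version of this problem (a sure $240 gain vs. a 25% chance of $1,000, paired with a sure $750 loss vs. a 75% chance of losing $1,000); the quiz’s labels and numbers may differ:

```python
# Decision 1: A = sure gain of $240; B = 25% chance to gain $1,000, else $0.
# Decision 2: C = sure loss of $750; D = 75% chance to lose $1,000, else $0.
# (Classic values from "Thinking, Fast and Slow"; the quiz wording may differ.)

# Combining the popular picks (A and D), by probability of the gamble:
a_and_d = {0.25: 240 + 0, 0.75: 240 - 1000}  # 25%: +$240, 75%: -$760

# Combining the unpopular picks (B and C):
b_and_c = {0.25: 1000 - 750, 0.75: 0 - 750}  # 25%: +$250, 75%: -$750

# B&C dominates A&D: it is better in both the good and the bad case.
print(b_and_c[0.25] > a_and_d[0.25])  # True: +$250 beats +$240
print(b_and_c[0.75] > a_and_d[0.75])  # True: -$750 beats -$760
```

Choosing each decision in isolation (risk-averse for gains, risk-seeking for losses) produces a combined gamble that is strictly worse in every outcome.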

“Thinking, Fast and Slow” is not a perfect book, and it has its share of critics; however, even if you got nearly every question above right, odds are the book will still have something to teach you. There is much more to learn from it than the five examples above; it is well worth a read (consider buying it; the Nobel Prize doesn’t pay that much anymore), and then consider reading the book’s spiritual rival: “Risk Savvy” by Gerd Gigerenzer (2014).