Daniel Kahneman, a Nobel Prize-winning psychologist and the author of the new book “Thinking, Fast and Slow,” changed the way people think about thinking by asking them questions. They weren’t trick questions, either. Instead, Kahneman relied almost exclusively on straightforward surveys, in which he described various scenarios. Here’s a sample:

The U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. If program A is adopted, 200 people will be saved. If program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved. Which of the two programs would you favor?

When Kahneman put this question to a few hundred physicians, seventy-two per cent chose option A, the safe-and-sure strategy. Most doctors would rather save a guaranteed number of people than risk the possibility that everyone might die.

Now consider this scenario:

The U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. If program C is adopted, 400 people will die. If program D is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die. Which of the two programs would you favor?

The two different hypotheticals, of course, examine identical dilemmas: saving one-third of the population is the same as losing two-thirds. And yet, doctors reacted very differently depending on how the question was framed. When the possible outcomes were stated in terms of deaths (and not survivors), physicians were suddenly eager to take chances: seventy-eight per cent chose option D.
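The equivalence is easy to verify with a little expected-value arithmetic. A minimal sketch (the program labels match the scenarios above; the point is only that all four options leave the same expected number of survivors):

```python
# Expected number of survivors (out of 600) under each program.
TOTAL = 600

# Framed as lives saved:
prog_a = 200                               # 200 saved for certain
prog_b = (1/3) * 600 + (2/3) * 0           # gamble on saving everyone

# Framed as deaths:
prog_c = TOTAL - 400                       # "400 will die" means 200 saved
prog_d = (1/3) * (TOTAL - 0) + (2/3) * (TOTAL - 600)  # gamble on nobody dying

print(prog_a, prog_b, prog_c, prog_d)  # all four work out to 200 survivors
```

Only the framing differs; the arithmetic is identical in every case.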

Why are doctors so inconsistent? Kahneman and his longtime collaborator, Amos Tversky, explained these contradictory responses in terms of loss aversion, or the fact that losses hurt more than gains feel good. In fact, people hate losses so much that merely framing a choice in terms of a potential loss can shift their preferences. Like those physicians, people are suddenly willing to risk losing everything if there’s a chance they might lose nothing.
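The asymmetry Kahneman and Tversky described can be made concrete with the value function from their later work on prospect theory, which weights losses roughly twice as heavily as equivalent gains (the parameters below are the commonly cited 1992 estimates, used here purely for illustration):

```python
# Prospect-theory value function: losses loom larger than gains.
# ALPHA bends the curve; LAMBDA is the loss-aversion coefficient
# (Tversky & Kahneman's 1992 median estimates, assumed here).
ALPHA, LAMBDA = 0.88, 2.25

def value(x):
    """Subjective value of a gain (x >= 0) or loss (x < 0)."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

print(value(100))   # felt value of a $100 gain
print(value(-100))  # felt value of a $100 loss: more than twice as painful
```

On these numbers, a hundred-dollar loss hurts more than twice as much as a hundred-dollar gain pleases, which is why merely reframing an outcome as a loss can flip a preference.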

Although our dislike of losses might seem obvious—“You need to have studied economics for many years before you’d be surprised by my research; it didn’t shock my mother at all,” Kahneman says—the discovery of loss aversion proved to be an important refutation of human rationality. Kahneman and Tversky demonstrated that real people, unlike homo economicus, that imaginary species featured in economics textbooks, don’t deal with uncertainty by carefully evaluating all of the relevant information. They stink at statistics and rarely maximize utility. Instead, their choices depend on a long list of mental short cuts and intemperate emotions, which often lead them to pick the wrong options.

Since the Israeli psychologists began studying loss aversion in the early nineteen-seventies, it has been used to explain a stunning variety of irrational behaviors, from the misguided decisions of investors—they refuse to sell losing stocks—to the stickiness of condo prices in the aftermath of a housing bubble. It’s been used to justify our fondness for the status quo—the present may stink, but we still don’t want to lose it—and the cowardice of N.F.L. coaches, who are far too afraid to go for it on fourth down. Loss aversion even excuses our social habits: studies have shown that it generally takes at least five kind comments to compensate for a single criticism. (The ratios are even worse for criminals: a person convicted of murder must perform at least twenty-five acts of “life-saving heroism” before he is forgiven.) This is an impressive amount of explanatory firepower for a theory rooted in hypotheticals.

It’s impossible to overstate the influence of Kahneman and Tversky. Like Darwin, they helped to dismantle a longstanding myth of human exceptionalism. Although we’d always seen ourselves as rational creatures—this was our Promethean gift—it turns out that human reason is rather feeble, easily overwhelmed by ancient instincts and lazy biases. The mind is a deeply flawed machine.

Nevertheless, there is a subtle optimism lurking in all of Kahneman’s work: it is the hope that self-awareness is a form of salvation, that if we know about our mental mistakes, we can avoid them. One day, we will learn to weigh losses and gains equally; science can help us escape from the cycle of human error. As Kahneman and Tversky noted in the final sentence of their classic 1974 paper, “A better understanding of these heuristics and of the biases to which they lead could improve judgments and decisions in situations of uncertainty.” Unfortunately, such hopes appear to be unfounded. Self-knowledge isn’t a cure for irrationality; even when we know why we stumble, we still find a way to fall.

Consider the story of Harry Markowitz, a Nobel Prize-winning economist who largely invented the field of investment-portfolio theory. By relying on a set of complicated equations, Markowitz was able to calculate the optimal mix of financial assets. (Because of loss aversion, most investors hold too many low-risk bonds, but Markowitz’s work helped minimize the effect of the bias by mathematizing the decision.) Markowitz, however, was incapable of using his own research, at least when setting up his personal retirement fund. “I should have computed the historical co-variances of the asset classes and drawn an efficient frontier,” Markowitz later confessed. “Instead, I visualized my grief if the stock market … went way down and I was completely in it. My intention was to minimize my future regret. So I split my contributions 50/50 between bonds and equities.”
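The calculation Markowitz says he skipped is standard two-asset mean-variance arithmetic. A minimal sketch, using made-up return and risk figures rather than historical estimates, shows the tradeoff his 50/50 split ignores:

```python
# Mean-variance arithmetic for a two-asset stocks/bonds portfolio --
# the computation Markowitz describes. The return, volatility, and
# correlation figures are illustrative assumptions, not historical data.
mu_stocks, mu_bonds = 0.08, 0.03    # assumed expected annual returns
sd_stocks, sd_bonds = 0.18, 0.06    # assumed annual volatilities
corr = 0.2                          # assumed stock/bond correlation
cov = corr * sd_stocks * sd_bonds   # covariance between the two assets

def portfolio(w_stocks):
    """Expected return and volatility for a given weight in stocks."""
    w_b = 1.0 - w_stocks
    mean = w_stocks * mu_stocks + w_b * mu_bonds
    var = (w_stocks**2 * sd_stocks**2 + w_b**2 * sd_bonds**2
           + 2 * w_stocks * w_b * cov)
    return mean, var ** 0.5

# Sweeping the weight traces out the efficient frontier Markowitz
# mentions; his regret-minimizing 50/50 split is just one point on it.
for w in (0.0, 0.5, 1.0):
    mean, sd = portfolio(w)
    print(f"stocks={w:.0%}  return={mean:.1%}  risk={sd:.1%}")
```

The irony, of course, is that the man who invented this calculation chose his weights by imagining future regret instead of running it.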