One of the great pleasures of studying human behaviour is seeing that what we find in our experiments, and what we theorize in our papers and textbooks – however unlikely and counterintuitive it appears to be – actually predicts what happens in so-called real life. Take, for instance, the current build-up of a stock-market bubble in the UK, which is happening even more dramatically in the US. In the UK, the FTSE 100 is on its way to surpassing the record set during the high times of the dotcom bubble and has already surpassed the levels reached during the 2008 financial bubble; in the US, the Dow Jones has already reached new record highs. Despite having recently experienced the devastating consequences of a stock-market bubble bursting, banks and investors return a few years later to the same hyperbolic forecasts and predictions, and start to build up another bubble. It is as if the past did not exist. Compare this behaviour with the following anecdote, which most business school students probably know.

In 1976, a small team of experts in Israel was developing a new high-school curriculum for the Ministry of Education. After a year of work they met to determine how much time they would require to finish the project. Each member wrote on a piece of paper the number of months they thought was needed. The predictions ranged from 18 to 30 months. One team member then asked a colleague, a distinguished expert in curriculum development, to recall other teams just like theirs at a similar stage. How long had it taken those groups to develop their curriculum? After a long pause, the expert told the group that 40% of similar teams had given up on the project altogether; the remaining 60% had taken no less than seven years to complete it. The members wanted to know whether the expert believed their team was exceptionally skilled and thus likely to finish sooner. The answer was no – the expert judged the abilities of the members to be slightly below average. Despite this sober assessment, the team remained highly optimistic that they would finish the project in less than three years. In the end, it took them eight.

Nobel laureate Daniel Kahneman recounted this story when he and Amos Tversky introduced the planning fallacy (Kahneman & Tversky, 1979). The anecdote contains all the elements that constitute the planning fallacy. First, a team makes overly optimistic predictions about how long it will take to complete a task. Second, the team learns the history of comparable tasks, which is rather pessimistic. Third – and this is the most interesting, counterintuitive part – the team ignores the past and holds on to its overly optimistic outlook. A vast literature documents projects that failed or never got started because of overly optimistic forecasts, from minor software projects to major public works such as airports and train stations. The recurring “news” that the host country of the Olympics or the World Cup is far behind schedule is a testament to the power of the planning fallacy.

But why do people ignore the past when they try to forecast the future? The answer is that people convince themselves that the past is not relevant this time around. For instance, when software engineers were asked whether they use past experience to inform their plans, they responded: “No… because it’s a unique working environment and I’ve never worked on anything like it,” or “No, not relevant. It’s not the same kind of project at all” (Buehler et al., 2010). Unsurprisingly, 75% of software projects are completed after the predicted date. When looking into the future, people construct a best-case scenario in which one step inevitably leads to the next, so that the improbable appears all but certain.

During the dotcom bubble, people in the financial industry were convinced that the “internet changes everything,” even generating predictions that the stock market would rise from now until forever. In the late 2000s, the development of new financial innovations (labelled with acronyms that would make the acronym-crazy world of science blush) supposedly changed everything once again: the market was now always priced right, and risk was perfectly balanced thanks to the labours of financial geniuses. In both cases, investors argued that this time was different – the past was not relevant.

While none of this might be a particularly new story, it has some interesting ethical implications. Of course, once we lose tremendous amounts of money – or the little we had to begin with – we want to blame someone. Professionals should be aware of all these problems, and they deserve a fair amount of criticism (and legal action). But when we look at the experiments, one might start to wonder how likely it is that people in the financial industry, or in other bubble-prone parts of the economy (housing market, I am looking at you), can overcome their biases. Even when participants in the lab are confronted with the past directly before making predictions about the future, they fail to incorporate this information into their forecasts. Moreover, the power of the planning fallacy increases when people really want the future to be great. If participants in the lab become far more optimistic when there is a chance of, say, £10 more, imagine what happens when there is a chance of £100 million more. Such optimistic forecasts drive not only the behaviour of people betting with other people’s money, but also that of investors betting with their own. Since human forecasts drive the stock market, it seems almost inevitable that bubbles build. The moral accountability of people in stock markets appears to be limited by a hard-wired bias to see the future through rose-coloured glasses.

For most people, being human also means being almost incapable of foreseeing bad things. In our everyday lives, this is more often than not a blessing, sheltering us from stress and giving us the joy of anticipation. The ancient Greeks blamed – or praised – Prometheus for this: he gave us not only fire (thanks for that) but also took away our ability to foresee future doom. In their view, humans are more or less incapable of anticipating bad things. So whenever behaviour is morally judged, we should keep in mind how much control the judged actually had over it, and whether or not they could have known better.