To some, exposure to economics ends at Nudge, Thinking, Fast and Slow, or watching economists being crucified on television for yet again failing to predict something correctly¹. These people will immediately tell you that economics is flawed and will point to the root of that flaw: economists assume everyone is rational. Behavioural economics and common sense tell us that this is not the case. The discipline, therefore, is built on faulty foundations and can be pushed to the side, and all those economists can go do something practical, like dentistry, instead of this whole “Masters of the Universe” nonsense.

The truth resists simplicity: one should be careful of reducing over a century’s worth of work to a single objection and assuming one has had a eureka moment that escaped the notice of thousands of academics.

First, here is a link to a list of counter-intuitive results from game theory. Keep in mind that all of these are derived from models of perfectly rational agents, and all of them make predictions that are, to some extent or another, consistent with real-world behaviour and incentives. In other words, rational agents can behave in ways that appear, at first glance, to be irrational: modelling agents as rational has the power to identify these situations, while modelling them as irrational does not. This means that, if rational agents reach the same suboptimal outcome as humans do, then the fault lies not in the humans but in the incentives they face.
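The classic illustration of that last point is the one-shot Prisoner’s Dilemma, a minimal sketch of which follows. The payoff numbers are the standard textbook ordering, chosen here for illustration, not taken from the post:

```python
# One-shot Prisoner's Dilemma: two perfectly rational agents reach an
# outcome that looks "irrational". payoffs[(my_move, their_move)] -> my payoff.
PAYOFFS = {
    ("cooperate", "cooperate"): 3,
    ("cooperate", "defect"):    0,
    ("defect",    "cooperate"): 5,
    ("defect",    "defect"):    1,
}

def best_response(their_move: str) -> str:
    """A rational agent's best reply to a fixed opponent move."""
    return max(["cooperate", "defect"],
               key=lambda my: PAYOFFS[(my, their_move)])

# Defecting is the best reply to either move (a dominant strategy)...
print(best_response("cooperate"), best_response("defect"))  # defect defect
# ...so two rational players end up at payoffs (1, 1) instead of (3, 3):
# the fault lies in the incentive structure, not in the players' reasoning.
```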

Now, when you model economic behaviour, if you do start from the assumption that people are irrational, you have two options. You could explicitly and systematically model the way in which you believe they are irrational, and have a chance at making predictions or explaining real-world behaviour. Alternatively, you could assume this irrationality is un-modellable or unpredictable, in which case you have no model, so you have no insight; you just throw your hands in the air and give up.

The latter is obviously not a good option. The former is, in my opinion, the same as modelling rational agents: if a person is irrational in a systematic way, explainable by a model, it is simply a different form of rationality. And that is exactly what economists have been doing for decades now. Altruism, wilful ignorance, racism and other biases, anchoring, habit formation, the endowment effect, and so on: these are all things that could never be explained by the caricature of rationality I assume you hold in your mind.

However, all these phenomena can be explained with more specific, more detailed, more realistic rational agents. Fashion provides us with an example:

“The physics of a belt — it pushes in, and hopes that it creates enough friction to have your pants not fall down. Well, that didn’t make sense. Here I was talking to my students about physics, and what direction gravity was pulling and moving things, and here I was wearing a belt. And I thought about it a little bit, and I was like, well, wait a minute. I need something to pull up, if gravity is pulling down. Well, that’s suspenders” ²

Belts, simply put, are bad at their job: suspenders are far better. A purely practical rational agent would wear suspenders. However, if we modelled our agents realistically and imbued in them a sense of fashion (because even the editors of this blog know suspenders are a social faux pas!), they would wear a belt. Calling the belt-wearer irrational only works if you assume we dress purely for practicality.
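A minimal sketch of what “imbuing agents with fashion” means in practice: add a fashion term to the utility function and let the agent maximise as usual. The weights below are illustrative assumptions, not estimates:

```python
# Hypothetical utility model: the same maximising agent picks different
# clothing depending on whether fashion enters the utility function.
PRACTICALITY = {"belt": 0.3, "suspenders": 0.9}   # how well it holds pants up
FASHION      = {"belt": 0.8, "suspenders": 0.1}   # social acceptability

def utility(item: str, fashion_weight: float) -> float:
    return PRACTICALITY[item] + fashion_weight * FASHION[item]

def best_choice(fashion_weight: float) -> str:
    """The rational choice: maximise utility over available items."""
    return max(PRACTICALITY, key=lambda item: utility(item, fashion_weight))

print(best_choice(0.0))   # cares only about practicality -> "suspenders"
print(best_choice(1.0))   # also cares about fashion -> "belt"
```

The belt-wearer is not irrational; the practicality-only model was just missing a term.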

In terms of more serious examples: the experimental results of the Dictator and Ultimatum games cannot be explained by purely greedy rational agents (as neoclassical economics supposedly assumes we all are), but they can be explained by rational agents who care about altruism and/or fairness. The emergence of cooperation in finitely repeated Prisoner’s Dilemmas can be explained the same way: it appears when players care about fairness, have a taste for cooperation, or when the game is repeated an uncertain number of times.
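To make the Ultimatum Game point concrete, here is a sketch of a rational responder with a simplified inequity-aversion utility (in the spirit of Fehr and Schmidt). The parameter names and values (`alpha`, `pie`) are illustrative assumptions, not from the post:

```python
# A rational Ultimatum Game responder who dislikes disadvantageous
# inequality. Rejecting an offer gives both players a payoff of zero.
def responder_utility(offer: float, pie: float = 10.0, alpha: float = 0.9) -> float:
    """Utility = own payoff minus a penalty for falling behind the proposer."""
    own, other = offer, pie - offer
    envy = max(other - own, 0.0)        # how far behind the proposer we are
    return own - alpha * envy

def accepts(offer: float, pie: float = 10.0, alpha: float = 0.9) -> bool:
    # Accept iff utility from accepting beats the zero utility of rejecting.
    return responder_utility(offer, pie, alpha) >= 0

# A purely selfish agent (alpha = 0) accepts any positive offer...
print(accepts(1.0, alpha=0.0))   # True
# ...while an inequity-averse agent rationally rejects a lowball offer,
# as humans do in experiments: utility = 1 - 0.9 * 8 = -6.2.
print(accepts(1.0, alpha=0.9))   # False
print(accepts(5.0, alpha=0.9))   # fair split -> True
```

Both agents are fully rational maximisers; only the utility function differs.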

For a more technical example, take the Revelation Principle. It is used very often in models with asymmetric information (where one party in a transaction has more relevant information than the other, as a car salesman knows more about his cars than any of his customers). For example, when you apply for a job, the firm owner would like to offer you a labour contract, but she does not know whether you are, say, a very efficient worker or a relatively inefficient one (though it is worthwhile to employ you in either case, let’s say). If you are very efficient, you may have an incentive to conceal that fact in order to be given a low target and then earn a bonus by overshooting it, so you would be paid more than if the target had been set correctly. If she designs the contract cleverly, however, you will not be able to hide your ability, and she can pay you the wage you are worth. When we solve these types of models, we use the Revelation Principle, which says that this outcome is the same as one where you simply tell her how good you are. For you to do this, there needs to be an incentive in the contract for truth-telling.

This is what we call a “direct revelation mechanism”: you simply announce your type, while the firm owner makes sure you will actually want to do so truthfully. It is probably not a very realistic description of how the world works, but it is extremely powerful because it is a very simple way of understanding a potentially complicated process.
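A toy sketch of the screening idea behind this: the owner posts a menu of (output target, wage) contracts designed so that each worker type prefers the contract intended for them, which is equivalent to truthfully announcing their type. All numbers here are made up for illustration:

```python
# Cost of effort per unit of output for each worker type (assumed).
COST = {"efficient": 1.0, "inefficient": 2.0}

# Menu: announced type -> (required output, wage). Chosen so the efficient
# type earns an informational rent and has no reason to lie.
MENU = {"efficient": (10.0, 15.5), "inefficient": (5.0, 10.0)}

def payoff(true_type: str, announced: str) -> float:
    """Wage minus effort cost when a worker of true_type takes the
    contract meant for the announced type."""
    output, wage = MENU[announced]
    return wage - COST[true_type] * output

def truthful(true_type: str) -> bool:
    """Incentive compatibility: truth-telling beats every misreport."""
    return all(payoff(true_type, true_type) >= payoff(true_type, a)
               for a in MENU)

# Both types prefer announcing their real type:
print(truthful("efficient"))     # 15.5 - 10 = 5.5  vs  10 - 5 = 5.0  -> True
print(truthful("inefficient"))   # 10 - 10 = 0.0  vs  15.5 - 20 = -4.5 -> True
```

The 5.5 the efficient worker keeps is the “incentive in the contract” the text mentions: truth-telling must pay.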

So, while many models may use rational agents (of varying degrees), this is not done out of some ignorance of the real world outside an ivory tower. Instead, these models are designed to mimic human behaviour as closely as possible, or to highlight the ways in which we are predictably irrational.

Notes

¹J. K. Galbraith once described economic forecasting as “designed to make astrology look respectable”, and that was decades before the financial crisis.

²Quote from the Freakonomics Podcast

Based on a post by u/jazzninja88

Edited, rewritten and new material added by Matthew Bradbury