JOHN NASH arrived at Princeton University in 1948 to start his PhD with a one-sentence recommendation: “He is a mathematical genius”. He did not disappoint. Aged 19 and with just one undergraduate economics course to his name, in his first 14 months as a graduate student he produced the work that would end up, in 1994, winning him a Nobel prize in economics for his contribution to game theory.

On November 16th 1949, Nash sent a note barely longer than a page to the Proceedings of the National Academy of Sciences, in which he laid out the concept that has since become known as the “Nash equilibrium”. This concept describes a stable outcome that results from people or institutions making rational choices based on what they think others will do. In a Nash equilibrium, no one is able to improve their own situation by changing strategy: each person is doing as well as they possibly can, even if that does not mean the optimal outcome for society. With a flourish of elegant mathematics, Nash showed that every “game” with a finite number of players, each with a finite number of options to choose from, would have at least one such equilibrium.
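The “no one can improve” condition can be checked directly for a small two-player game laid out as payoff tables. The brute-force checker below is a minimal sketch (not from the article), and it covers pure strategies only; Nash’s theorem guarantees an equilibrium once players may also randomise over their options (mixed strategies):

```python
from itertools import product

def pure_nash_equilibria(payoffs_a, payoffs_b):
    """Return cells (i, j) where neither player can gain by unilaterally
    deviating: A cannot do better within column j, nor B within row i.

    payoffs_a[i][j] and payoffs_b[i][j] are the payoffs to players A and B
    when A picks row i and B picks column j.
    """
    rows, cols = range(len(payoffs_a)), range(len(payoffs_a[0]))
    return [
        (i, j) for i, j in product(rows, cols)
        if all(payoffs_a[k][j] <= payoffs_a[i][j] for k in rows)
        and all(payoffs_b[i][l] <= payoffs_b[i][j] for l in cols)
    ]
```

Some finite games (matching pennies, say) have no equilibrium in pure strategies at all, which is exactly why Nash’s proof allows players to randomise.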

His insights expanded the scope of economics. In perfectly competitive markets, where there are no barriers to entry and everyone’s products are identical, no individual buyer or seller can influence the market: none need pay close attention to what the others are up to. But most markets are not like this: the decisions of rivals and customers matter. From auctions to labour markets, the Nash equilibrium gave the dismal science a way to make real-world predictions based on information about each person’s incentives.

One example in particular has come to symbolise the equilibrium: the prisoner’s dilemma. Nash used algebra and numbers to set out this situation in an expanded paper published in 1951, but the version familiar to economics students is altogether more gripping. (Nash’s thesis adviser, Albert Tucker, came up with it for a talk he gave to a group of psychologists.)

It involves two mobsters sweating in separate prison cells, each contemplating the same deal offered by the district attorney. If they both confess to a bloody murder, they each face ten years in jail. If one stays quiet while the other snitches, then the snitch will get a reward, while the other will face a lifetime in jail. And if both hold their tongue, then they each face a minor charge, and only a year in the clink (see diagram).

There is only one Nash-equilibrium solution to the prisoner’s dilemma: both confess. Each is a best response to the other’s strategy; since the other might have spilled the beans, snitching avoids a lifetime in jail. The tragedy is that if only they could work out some way of co-ordinating, they could both make themselves better off. The example illustrates that crowds can be foolish as well as wise; what is best for the individual can be disastrous for the group. This tragic outcome is all too common in the real world. Left freely to plunder the sea, individuals will fish more than is best for the group, depleting fish stocks. Employees competing to impress their boss by staying longest in the office will encourage workforce exhaustion. Banks have an incentive to lend more rather than sit things out when house prices shoot up.
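The story translates into a small payoff table, with jail terms written as negative numbers (taking “a lifetime” to be 20 years, an arbitrary stand-in), and the equilibrium check confirms that mutual confession is the only stable pair:

```python
STRATEGIES = ("confess", "quiet")

# Years in jail from the story, as negative payoffs (fewer years is better).
# "Lifetime" is represented here by 20 years, an arbitrary stand-in.
JAIL = {
    ("confess", "confess"): (-10, -10),
    ("confess", "quiet"):   (0, -20),
    ("quiet",   "confess"): (-20, 0),
    ("quiet",   "quiet"):   (-1, -1),
}

def is_nash(a, b):
    """True if neither prisoner gains by unilaterally switching strategy."""
    pa, pb = JAIL[(a, b)]
    return (all(JAIL[(a2, b)][0] <= pa for a2 in STRATEGIES)
            and all(JAIL[(a, b2)][1] <= pb for b2 in STRATEGIES))

equilibria = [(a, b) for a in STRATEGIES for b in STRATEGIES if is_nash(a, b)]
# Only ("confess", "confess") survives, even though ("quiet", "quiet")
# would leave both prisoners better off.
```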

Crowd trouble

The Nash equilibrium helped economists to understand how self-improving individuals could lead to self-harming crowds. Better still, it helped them to tackle the problem: they just had to make sure that every individual faced the best incentives possible. If things still went wrong—parents failing to vaccinate their children against measles, say—then it must be because people were not acting in their own self-interest. In such cases, the public-policy challenge would be one of information.

Nash’s idea had antecedents. In 1838 August Cournot, a French economist, theorised that in a market with only two competing companies, each would see the disadvantages of pursuing market share by boosting output, in the form of lower prices and thinner profit margins. Unwittingly, Cournot had stumbled across an example of a Nash equilibrium. It made sense for each firm to set production levels based on the strategy of its competitor; consumers, however, would end up with less stuff and higher prices than if full-blooded competition had prevailed.
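Cournot’s duopoly can be sketched numerically (with illustrative demand and cost numbers, not from the article): under linear demand P = a − b(q1 + q2) and marginal cost c, each firm’s profit-maximising response to its rival’s output q is (a − c − bq)/(2b), and repeated best-responding converges to the equilibrium where each firm produces (a − c)/(3b):

```python
def cournot_best_response(q_rival, a=120.0, b=1.0, c=30.0):
    """Profit-maximising output against a rival producing q_rival, given
    linear demand P = a - b*(q1 + q2) and constant marginal cost c.
    (Illustrative parameter values, not from the article.)"""
    return (a - c - b * q_rival) / (2 * b)

# Iterated best responses converge to the Nash (Cournot) equilibrium,
# q1 = q2 = (a - c) / (3b) = 30 with these numbers.
q1 = q2 = 0.0
for _ in range(60):
    q1 = cournot_best_response(q2)
    q2 = cournot_best_response(q1)

total_output = q1 + q2        # approximately 60
price = 120.0 - total_output  # approximately 60, well above cost of 30
# Under full-blooded competition price would fall to cost (30) and total
# output rise to 90: the duopoly leaves consumers with less stuff at
# higher prices, as Cournot saw.
```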

Another pioneer was John von Neumann, a Hungarian mathematician. In 1928, the year Nash was born, von Neumann outlined a first formal theory of games, showing that in two-person, zero-sum games, there would always be an equilibrium. When Nash shared his finding with von Neumann, by then an intellectual demigod, the latter dismissed the result as “trivial”, seeing it as little more than an extension of his own, earlier proof.

In fact, von Neumann’s focus on two-person, zero-sum games left only a very narrow set of applications for his theory. Most of these settings were military in nature. One such was the idea of mutually assured destruction, in which equilibrium is reached by arming adversaries with nuclear weapons (some have suggested that the film character of Dr Strangelove was based on von Neumann). None of this was particularly useful for thinking about situations—including most types of market—in which one party’s victory does not automatically imply the other’s defeat.

Even so, the economics profession initially shared von Neumann’s assessment, and largely overlooked Nash’s discovery. He threw himself into other mathematical pursuits, but his huge promise was undermined when in 1959 he started suffering from delusions and paranoia. His wife had him hospitalised; upon his release, he became a familiar figure around the Princeton campus, talking to himself and scribbling on blackboards. As he struggled with ill health, however, his equilibrium became more and more central to the discipline. The share of economics papers citing the Nash equilibrium has risen sevenfold since 1980, and the concept has been used to solve a host of real-world policy problems.

One famous example was the American hospital system, which in the 1940s was in a bad Nash equilibrium. Each individual hospital wanted to snag the brightest medical students. With such students particularly scarce because of the war, hospitals were forced into a race whereby they sent out offers to promising candidates earlier and earlier. What was best for the individual hospital was terrible for the collective: hospitals had to hire before students had passed all of their exams. Students hated it, too, as they had no chance to consider competing offers.

Despite letters and resolutions from all manner of medical associations, as well as the students themselves, the problem was only properly solved after decades of tweaks, and ultimately a 1990s design by Elliott Peranson and Alvin Roth (who later won a Nobel economics prize of his own). Today, students submit their preferences and are assigned to hospitals based on an algorithm that ensures no student can change their stated preferences and be sent to a more desirable hospital that would also be happy to take them, and no hospital can go outside the system and nab a better employee. The system harnesses the Nash equilibrium to be self-reinforcing: everyone is doing the best they can based on what everyone else is doing.
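The actual Roth-Peranson design copes with couples, multi-slot hospitals and other complications; the deferred-acceptance idea underneath it can be sketched as below (a minimal version with hypothetical names, one slot per hospital and complete preference lists on both sides):

```python
def deferred_acceptance(student_prefs, hospital_prefs):
    """Student-proposing deferred acceptance (Gale-Shapley), one slot per
    hospital. Returns a stable match: no student and hospital both prefer
    each other to the partners they end up with."""
    rank = {h: {s: i for i, s in enumerate(prefs)}
            for h, prefs in hospital_prefs.items()}
    next_try = {s: 0 for s in student_prefs}  # index of next hospital to ask
    held = {}                                 # hospital -> tentatively held student
    free = list(student_prefs)
    while free:
        s = free.pop()
        h = student_prefs[s][next_try[s]]
        next_try[s] += 1
        if h not in held:
            held[h] = s                       # hospital tentatively accepts
        elif rank[h][s] < rank[h][held[h]]:
            free.append(held[h])              # hospital trades up; old student re-enters
            held[h] = s
        else:
            free.append(s)                    # rejected; s asks its next choice
    return {s: h for h, s in held.items()}
```

Whatever order students propose in, the student-proposing version yields the same stable matching, which is part of what makes the system self-reinforcing.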

Other policy applications include the British government’s auction of 3G mobile-telecoms operating licences in 2000. It called in game theorists to help design the auction using some of the insights of the Nash equilibrium, and ended up raising a cool £22.5 billion ($35.4 billion)—though some of the bidders’ shareholders were less pleased with the outcome. Nash’s insights also help to explain why adding a road to a transport network can make journey times longer on average. Self-interested drivers opting for the quickest route do not take into account the delays they impose on everyone else, and so can gum up a new shortcut. A study published in 2008 found seven road links in London and 12 in New York where closure could boost traffic flows.
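The road paradox is usually told with the textbook Braess network, which the arithmetic below sketches (illustrative numbers, not those of the 2008 study):

```python
N = 4000  # drivers travelling from A to B (illustrative number)

# Two routes: a congestible link taking n/100 minutes when n drivers use
# it, plus a fixed 45-minute road.
def route_times(n_route1):
    """Travel times on each route when n_route1 drivers take route 1."""
    return n_route1 / 100 + 45, (N - n_route1) / 100 + 45

# Equilibrium without the shortcut: drivers split evenly, since neither
# route may be strictly faster than the other.
t1, t2 = route_times(N // 2)     # 65 minutes on each route

# Add a free shortcut joining the two congestible links. Chaining both
# congestible links costs at most N/100 = 40 minutes per link, so it
# always beats a 45-minute fixed road: every self-interested driver takes
# the shortcut, and every journey lengthens to 80 minutes.
t_shortcut = N / 100 + N / 100
```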

Game on

The Nash equilibrium would not have attained its current status without some refinements on the original idea. First, in plenty of situations, there is more than one possible Nash equilibrium. Drivers choose which side of the road to drive on as a best response to the behaviour of other drivers—with very different outcomes, depending on where they live; they stick to the left-hand side of the road in Britain, but to the right in America. Much to the disappointment of algebra-toting economists, understanding strategy requires knowledge of social norms and habits. Nash’s theorem alone was not enough.
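The driving game makes the multiplicity concrete. In this sketch (illustrative payoffs, not from the article) both drivers score 1 if they pick the same side of the road and 0 if they crash, and enumeration finds two equilibria that the mathematics alone cannot choose between:

```python
SIDES = ("left", "right")

def payoff(mine, theirs):
    """Both drivers score 1 if they pick the same side, 0 if they crash."""
    return 1 if mine == theirs else 0

equilibria = [
    (a, b) for a in SIDES for b in SIDES
    # neither driver gains by unilaterally switching sides
    if all(payoff(a2, b) <= payoff(a, b) for a2 in SIDES)
    and all(payoff(b2, a) <= payoff(b, a) for b2 in SIDES)
]
# Two equilibria, ("left", "left") and ("right", "right"); which one a
# country settles on is a matter of norms and habit, not algebra.
```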

A second refinement involved accounting properly for non-credible threats. If a teenager threatens to run away from home if his mother separates him from his mobile phone, then there is a Nash equilibrium where she gives him the phone to retain peace of mind. But Reinhard Selten, a German economist who shared the 1994 Nobel prize with Nash and John Harsanyi, argued that this is not a plausible outcome. The mother should know that her child’s threat is empty—no matter how tragic the loss of a phone would be, a night out on the streets would be worse. She should just confiscate the phone, forcing her son to focus on his homework. Mr Selten’s work let economists whittle down the number of possible Nash equilibria.

Harsanyi addressed the fact that in many real-life games, people are unsure of what their opponent wants. Economists would struggle to analyse the best strategies for two lovebirds trying to pick a mutually acceptable location for a date with no idea of what the other prefers. By embedding each person’s beliefs into the game (for example that they correctly think the other likes pizza just as much as sushi), Harsanyi made the problem solvable.

A different problem continued to lurk. The predictive power of the Nash equilibrium relies on rational behaviour. Yet humans often fall short of this ideal. In experiments replicating the set-up of the prisoner’s dilemma, only around half of people chose to confess. For the economists who had been busy embedding rationality (and Nash) into their models, this was problematic. What is the use of setting up good incentives, if people do not follow their own best interests? All was not lost. The experiments also showed that experience made players wiser; by the tenth round only around 10% of players were refusing to confess. That taught economists to be more cautious about applying Nash’s equilibrium. With complicated games, or ones where they do not have a chance to learn from mistakes, his insights may not work as well.

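Selten’s refinement, working backwards from the end of the game, can be sketched on the teenager story (with hypothetical payoffs chosen to match it):

```python
# Illustrative payoffs (mother's, son's), not from the article.
GIVE_BACK = (0, 2)                    # peace of mind; son keeps the phone
CONFISCATE = {"stay": (2, 1),         # homework gets done
              "run":  (-2, -2)}       # a night on the streets hurts them both

# Backward induction: first solve the son's choice at his own node...
son_choice = max(CONFISCATE, key=lambda s: CONFISCATE[s][1])   # "stay"
# ...then let the mother decide, knowing the threat will not be carried out.
mother_choice = ("confiscate"
                 if CONFISCATE[son_choice][0] > GIVE_BACK[0]
                 else "give back")
```

Solving from the last move backwards discards the Nash equilibrium in which the empty threat works, leaving only the plausible one.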
The Nash equilibrium nonetheless boasts a central role in modern microeconomics. Nash died in a car crash in 2015; by then his mental health had recovered, he had resumed teaching at Princeton and he had received that joint Nobel—recognition that the interactions of a group add up to more than any individual’s contribution.

LAST IN THIS SERIES:

• The Mundell-Fleming trilemma