Nash was part of the clique of mathematicians and graduate students advancing the nascent discipline of game theory under Tucker in the purest mathematical sense, i.e. largely uninterested in relating their research to real-world applications. According to the economist Martin Shubik, a personal friend of Nash at the time:

"The graduate students and faculty in the mathematics department interested in game theory were both blissfully unaware of the attitude in the economics department, and even if they had known it, they would not have cared.. The contrast of attitudes between the economics department and the mathematics department was stamped on my mind soon after arriving at Princeton. The former projected an atmosphere of dull business-as-usual conservatism of a middle league conventional Ph.D. factory; there were some stars but no sense of excitement or challenge. The latter was electric with ideas and the sheer joy of the hunt. Psychologically they dwelt on different planets. If a stray ten-year-old with bare feed, no tie, torn blue jeans and an interesting theorem had walked into Fine Hall at tea time, someone would have listened. When von Neumann gave his seminar on his growth model, with few exceptions, the serried ranks of Princeton Economics cold scare forbear to yawn." - Martin Shubik, 1992 - Excerpt, "Finding Equilibrium" by Düppe and Weintraub (2014 p. 94)

The head of the group, Tucker, would go on to supervise virtually all of the future top game theorists at Princeton, including David Gale and 2012 Nobel laureate Lloyd Shapley, in addition to, of course, Nash.

Nash the Game (Hex)

The story of Nash’s most famous result begins, as is often the case, not with an interest in applications or existing streams of theoretical research, but as the consequence of experimentation.

By the late 1940s, the favorite pastime of faculty and graduate students in the common room of Fine Hall was board games, including the famous Go and the less famous Kriegspiel. During this time, Nash invented a game later to become known as “Hex”, because it had been independently invented a few years earlier by a Danish mathematician named Piet Hein and was marketed under that name by Parker Brothers. At the time, however, everyone at Princeton simply called the game “Nash”. Nash/Hex is played on a rhombus-shaped grid of n × n hexagonal spaces (typically 11×11, though Nash reportedly favored 14×14) using black and white Go stones. One pair of opposite edges is colored black, the other white. Two players alternate placing pieces inside the hexagonal spaces. Once a piece is played, it can never be moved. The goal of each player is to construct a connected path of stones joining his two opposite edges of the board.
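The win condition, a connected chain of one color joining that player's two edges, can be checked mechanically with a flood fill over the board's hexagonal adjacencies. A minimal sketch in Python (the board representation and the edge conventions here are our own illustration, not from any historical source):

```python
from collections import deque

def has_won(board, player):
    """Check whether `player` has a connected chain across the board.

    `board` is an n-by-n list of lists holding 'B', 'W' or None.
    By convention here, 'B' connects the top and bottom edges and
    'W' connects the left and right edges.
    """
    n = len(board)
    # The six neighbours of a cell on a hexagonal rhombus board.
    deltas = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, 1), (1, -1)]
    if player == 'B':
        starts = [(0, c) for c in range(n) if board[0][c] == 'B']
        reached_goal = lambda r, c: r == n - 1
    else:
        starts = [(r, 0) for r in range(n) if board[r][0] == 'W']
        reached_goal = lambda r, c: c == n - 1
    seen, queue = set(starts), deque(starts)
    while queue:
        r, c = queue.popleft()
        if reached_goal(r, c):
            return True
        for dr, dc in deltas:
            nr, nc = r + dr, c + dc
            if (0 <= nr < n and 0 <= nc < n
                    and (nr, nc) not in seen and board[nr][nc] == player):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False
```

The two “diagonal” offsets (−1, 1) and (1, −1) are what distinguish the hexagonal grid from an ordinary square one: they give each interior cell six neighbors rather than four.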

11×11 Hex game board showing a winning configuration

Nash was the first to prove that Hex cannot end in a draw. This non-trivial result is called the “Hex theorem”. He didn’t publish the proof, but included it in a 1952 RAND technical report entitled “Some Games and Machines for Playing Them”. The result has since been shown to be equivalent to the well-known Brouwer fixed-point theorem:

The Brouwer fixed-point theorem (Brouwer, 1910)

Let K be a topological space homeomorphic to a compact, convex subset of Rⁿ and let f ∈ C(K, K). Then f has at least one fixed point.

A fixed point of f is a point x for which f(x) = x. We can illustrate the meaning of the theorem informally for the case of the plane, n = 2, by imagining two pieces of grid paper of equal size with coordinate systems on them: Lay one of the pieces of paper flat on a table. Lay the other on top and crumple it, without tearing or ripping it, so that it does not reach outside the flat one. There will then be at least one coordinate point of the crumpled paper that lies directly above its corresponding point on the flat one. An analogous statement is that if you take an ordinary map of a country and lay it out on a table inside that country, there will always be a point on the map (a “You Are Here” pin) lying exactly over the point in the country it represents. Brouwer’s fixed-point theorem is one among many theorems concerning fixed points, but is particularly well known due to its use in many fields of mathematics, including in John von Neumann’s famous early treatise on general equilibrium theory (Von Neumann, 1937).
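In one dimension, Brouwer's theorem reduces to the intermediate value theorem: a continuous f mapping [0, 1] into itself satisfies f(0) ≥ 0 and f(1) ≤ 1, so g(x) = f(x) − x changes sign and must vanish somewhere. A small numerical sketch (the choice of f = cos and the tolerance are our own, purely for illustration):

```python
import math

def fixed_point(f, lo=0.0, hi=1.0, tol=1e-10):
    """Locate a fixed point of a continuous f mapping [lo, hi] into itself,
    by bisecting g(x) = f(x) - x, which is >= 0 at lo and <= 0 at hi."""
    a, b = lo, hi
    while b - a > tol:
        m = (a + b) / 2
        if f(m) - m > 0:
            a = m   # f still sits above the diagonal; move right
        else:
            b = m   # f sits below the diagonal; move left
    return (a + b) / 2

# cos maps [0, 1] into [cos(1), 1], a subset of [0, 1],
# so Brouwer guarantees a point x with cos(x) = x.
x = fixed_point(math.cos)
```

The bisection converges to x ≈ 0.7391, the unique solution of cos(x) = x. In higher dimensions no such elementary argument is available, which is what makes Brouwer's theorem substantial.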

The theorem is worth mentioning here because fixed-point arguments underpin Nash’s proofs of the existence of the Nash equilibrium, for which he was eventually awarded the 1994 Nobel Memorial Prize in Economic Sciences. His first published proof (1950) relied on the Kakutani fixed-point theorem, a generalization of Brouwer’s result, while his later, more elegant proof (1951) applied Brouwer’s theorem directly. In the interest of completeness, the Kakutani fixed-point theorem states:

The Kakutani fixed-point theorem (Kakutani, 1941)

Let S be a non-empty, compact and convex subset of some Euclidean space Rⁿ. Let φ: S → 2ˢ be a set-valued function on S with a closed graph and the property that φ(x) is non-empty and convex for all x ∈ S. Then φ has a fixed point, i.e. a point x ∈ S such that x ∈ φ(x).

Any point where the line f(x) = x intersects the graph of the function (gray area) is a fixed point. x = 0.72 (blue line) is a fixed point, since 0.72 ∈ [0.64, 0.82].

To understand its content, consider for instance a set-valued function f defined on the closed interval [0, 1] that maps a point x to the closed interval [1 − x/2, 1 − x/4]. The conditions of the theorem are satisfied, so f must have a fixed point. In the diagram above, any point where the red line intersects the graph of the function (in grey) is a fixed point, and so there are infinitely many fixed points in this particular case: exactly those x in [2/3, 4/5].
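The fixed-point condition x ∈ f(x) for this example can be verified directly; a tiny sketch (the function names are ours):

```python
def phi(x):
    """The set-valued map from the example: x -> [1 - x/2, 1 - x/4]."""
    return (1 - x / 2, 1 - x / 4)

def is_fixed_point(x):
    """x is a fixed point of phi exactly when x lies inside the interval phi(x)."""
    lo, hi = phi(x)
    return lo <= x <= hi

# Solving x >= 1 - x/2 and x <= 1 - x/4 shows the fixed points
# are precisely the x in [2/3, 4/5], infinitely many of them.
```

For instance, is_fixed_point(0.72) holds, matching the diagram’s 0.72 ∈ [0.64, 0.82], while points outside [2/3, 4/5] fail the test.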

In addition to his fixed-point proof that Hex cannot end in a draw, Nash also provided a reductio ad absurdum existence proof (1949) that the first player in Hex on a board of any size has a winning strategy. The proof however gives no indication of a correct strategy for play. Nasar writes the following about the discovery:

"One morning in late winter 1949, Nash literally ran into the much shorter, wiry Gale on the quadrangle inside the Graduate College. “Gale! I have an example of a game with perfect information,” he blurted out. “There’s no luck, just pure strategy. I can prove that the first player always wins, but I have no idea what his strategy will be. If the first player loses at this game, it’s because he’s made a mistake, but nobody knows what the perfect strategy is.” - Excerpt, "A Beautiful Mind" by Sylvia Nasar (1998)

The proof is common to a number of other games, and has come to be called the strategy-stealing argument. Here is a highly condensed informal version of Nash’s proof for Hex:

Hex first player winning strategy existence proof (Nash, ~1949)

1. Either the first or the second player must win; therefore there must be a winning strategy for one of the two.
2. Let us assume that the second player has a winning strategy.
3. The first player can now adopt the following defense: he makes an arbitrary move, and thereafter plays the winning second-player strategy assumed above. If, in playing this strategy, he is required to play on the cell where his arbitrary move was made, he makes another arbitrary move. In this way he plays the winning strategy with one extra piece always on the board.
4. This extra piece cannot interfere with his imitation of the winning strategy, for an extra piece is always an asset and never a handicap. Therefore the first player can win.
5. Because we have now contradicted our assumption that there is a winning strategy for the second player, we are forced to drop this assumption.
6. Consequently, there must be a winning strategy for the first player.

Nash’s Research (1948–58)

In the taxonomy of mathematicians, there are problem solvers and theoreticians, and, by temperament, Nash belonged to the first group. — Nasar (1998)

As a consequence of the mental illness that would later consume him, Nash’s prime research career was remarkably short, essentially spanning only the decade from his arrival at Princeton in 1948 to the onset of his illness in 1958. Raoul Bott at Carnegie said this about Nash’s interests as a graduate student:

“Nash liked very general problems. He wasn’t all that good at solving cute little puzzles. He was a much more dreamy person. He’d think a long time. Sometimes you could see him thinking. Others would be sitting there with their nose in a book.”

Of his own graduate research, Nash stated in his Nobel autobiography: “As a graduate student I studied mathematics fairly broadly and was fortunate enough, besides developing the idea which led to ‘Non-Cooperative Games’, also to make a nice discovery relating to manifolds and real algebraic varieties. So I was prepared actually for the possibility that the game theory work would not be regarded as acceptable as a thesis in the mathematics department and then that I could realize the objective of a Ph.D. thesis with the other results.”

Nash at his doctoral graduation in 1950 (Photo: Princeton University Archives)

Nash ultimately didn’t need to ‘realize the objective of a Ph.D. thesis with other results’. He earned his Ph.D. in mathematics from Princeton in 1950 at the age of 22. His 28-page dissertation, written under the supervision of Tucker, was entitled Non-cooperative Games (1950); its main result was the derivation, definition and description of the properties of the Nash equilibrium. For those interested, Nash’s work in game theory was beautifully collected and organized in an article prepared for the Nobel Committee by his friend Harold Kuhn in 1994. Fields Medalist John Milnor also wrote a 1998 notice for the American Mathematical Society listing Nash’s 21 publications in total.

The Bargaining problem (1949)

Nash’s first journal paper (written prior to his work on the Nash equilibrium) — also in game theory — concerned the classic economic problem of bargaining. The problem had previously been investigated by a number of scholars (Cournot, Bowley and Fellner, among others) for various purposes, including investigations of bilateral monopoly (Nash, 1950a).

Nash (1950a). “The Bargaining Problem”. Econometrica 18 (2): pp. 155–62.

Nash’s paper describes a bargaining situation where two individuals have the opportunity for mutual benefit, but no action taken by one of the individuals unilaterally (without consent) can affect the well-being of the other. Think of the classic “divide and choose protocol” of two people trying to divide a cake evenly, where one carves and the other chooses which piece he or she wants, providing a so-called envy-free cake-cutting procedure.

Nash’s paper offers a theoretical discussion of such bargaining situations, and provides a definitive “solution” (that is, a determination of the amount of satisfaction each individual should expect to obtain) under certain conditions and other “idealizations”. Such idealizations include the assumptions that the two individuals are rational, can accurately compare their preferences for various things, have equal bargaining skill, and possess complete information about each other’s preferences.

Nash’s treatment employs the concept of utility as developed in von Neumann and Morgenstern’s Theory of Games and Economic Behavior (1944). It also employs the concept of expectation in determining what various players’ payoffs will be given various strategies. In his paper, Nash uses as an illustration a man named Mr. Smith who knows he will be given a Buick tomorrow, so that he may be said to have a “Buick expectation”. Similarly, he may have a “Cadillac expectation”. If he knew that tomorrow a fair coin would be tossed to decide whether he would get a Buick or a Cadillac, we can say he had 50% Buick and 50% Cadillac expectation.

Nash provides sufficient assumptions for the development of a utility theory for a single individual in such scenarios and proceeds to differentiate his treatment from that presented in Theory of Games and Economic Behavior (1944). In his view, the theory there falls short in that it does not attempt to find values for each person’s valuation of the opportunity to engage in a game, unless that game is zero-sum. Nash then goes on to derive values for the anticipations of players in such two-person non-zero-sum games:

We may define a two-person anticipation as a combination of two one-person anticipations. Thus we have two individuals, each with a certain expectation of his future environment. We may regard the one-person utility functions as applicable to the two-person anticipations, each giving the result it would give if applied to the corresponding one-person anticipation which is a component of the two-person anticipation. A probability combination of two two-person anticipations is defined by making the corresponding combinations for their components. Thus if [A, B] and [C, D] are two-person anticipations and 0 ≤ p ≤ 1, then p[A, B] + (1 − p)[C, D] will be defined as [pA + (1 − p)C, pB + (1 − p)D]

Nash defines the utility functions u₁, u₂ of the two individuals and c(S) as the solution point in a set S which is compact, convex and includes the origin. He puts forth the necessary assumptions and shows that these conditions require the solution to be the point of the set in the first quadrant where u₁u₂ is maximized. The compactness of the set guarantees the existence of such a point, and its convexity guarantees its uniqueness.

Figure 1 from Nash’s paper “The Bargaining problem”, illustrating the unique, optimal point of the set S, which maximizes the utilities of player 1 and 2. Photo: Econometrica, 18 (2): p. 160.

Example of a bargaining problem (Nash, 1950a)

Let us suppose that two intelligent individuals, Bill and Jack, are in a position where they may barter goods but have no money with which to facilitate exchange. Further, let us assume for simplicity that the utility to either individual of a portion of the total number of goods involved is the sum of the utilities to him of the individual goods in that portion. We give below a table of the goods possessed by each individual, with the utility of each good to each individual. The utility functions used for the two individuals are, of course, to be regarded as arbitrary.

Bill's goods (utility to Bill, utility to Jack): book (2, 4), whip (2, 2), ball (2, 1), bat (2, 2), box (4, 1)

Jack's goods (utility to Bill, utility to Jack): pen (10, 1), toy (4, 1), knife (6, 2), hat (2, 2)

The graph for the bargaining situation turns out to be a convex polygon in which the point where the product of the utility gains is maximized is at a vertex, and there is but one corresponding anticipation, which is:

Bill gives Jack: book, whip, ball and bat

Jack gives Bill: pen, toy and knife
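Nash's solution for this example can be checked by brute force: enumerate every possible trade, discard those that leave either bargainer worse off, and keep the one maximizing the product of the two utility gains. A sketch in Python (the data encoding and function names are our own):

```python
from itertools import chain, combinations

# (good, utility to Bill, utility to Jack), as tabulated in Nash's example
bills_goods = [("book", 2, 4), ("whip", 2, 2), ("ball", 2, 1),
               ("bat", 2, 2), ("box", 4, 1)]
jacks_goods = [("pen", 10, 1), ("toy", 4, 1), ("knife", 6, 2),
               ("hat", 2, 2)]

def subsets(items):
    """All subsets of a list of goods, including the empty trade."""
    return chain.from_iterable(combinations(items, r)
                               for r in range(len(items) + 1))

def solve():
    best, best_trade = -1.0, None
    for give_b in subsets(bills_goods):        # what Bill hands over
        for give_j in subsets(jacks_goods):    # what Jack hands over
            bill_gain = (sum(ub for _, ub, _ in give_j)
                         - sum(ub for _, ub, _ in give_b))
            jack_gain = (sum(uj for _, _, uj in give_b)
                         - sum(uj for _, _, uj in give_j))
            # Neither bargainer accepts a trade that leaves him worse off;
            # among the rest, Nash's solution maximizes the product of gains.
            if bill_gain >= 0 and jack_gain >= 0 and bill_gain * jack_gain > best:
                best = bill_gain * jack_gain
                best_trade = (give_b, give_j)
    return best, best_trade
```

Running solve() reproduces the trade above: Bill hands over the book, whip, ball and bat, Jack the pen, toy and knife, for utility gains of 12 and 5 and a maximal product of 60.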

The graph depicting the bargain from Nash’s paper is as follows:

Example: The solution point is on a rectangular hyperbola lying in the first quadrant and touching the set of alternatives at but one point

How Nash arrived at the result remains unclear. Nash’s close friend Harold Kuhn, co-editor of the 2002 collection The Essential John Nash, recalls about the paper: “It is my recollection that it had been sent to von Neumann during Nash’s first year as a graduate student and that Nash made an appointment to remind von Neumann of its existence. In this scenario, it had been written at Carnegie Tech as a term paper in the only course in economics that Nash ever took.”, adding however that “Nash’s current memory differs from mine; in a luncheon with Roger Myerson in 1995, he expressed the opinion that he had written the paper after his arrival at Princeton.”

“Whatever the true history of the paper, the examples suggest that it was written by a teenager; they involve bats, balls, and penknives. What is certain is that Nash had never read the works of Cournot, Bowley, Tintner, and Fellner cited in the paper’s Introduction.” — Harold Kuhn

Meeting John von Neumann

Although somewhat in opposition to von Neumann and Morgenstern’s work on cooperative game theory, Nash’s results establishing a foundation for non-cooperative game theory clearly had their origins in the former pair’s work (indeed, illustrative of this, in 1978 Nash was awarded the John von Neumann Theory Prize for his discovery of the Nash equilibrium).

Only one documented account of communication between Nash and von Neumann can now be found, although there were surely many more, now lost to time. According to Nasar, Nash went to talk to von Neumann a few days after he passed his general examination at Princeton in 1949, prior to his definition of the Nash equilibrium. As she writes:

“He wanted, he had told the secretary cockily, to discuss an idea that might be of interest to Professor von Neumann. It was a rather audacious thing for a graduate student to do. [...] But it was typical of Nash, who had gone to see Einstein the year before with the germ of an idea. [...] He listened carefully, with his head cocked slightly to one side and his fingers tapping. Nash started to describe the proof he had in mind for an equilibrium in games of more than two players. But before he had gotten out more than a few disjointed sentences, von Neumann interrupted, jumped ahead to the yet unstated conclusion of Nash’s argument, and said abruptly, “That’s trivial, you know. That’s just a fixed point theorem.” - Excerpt, "A Beautiful Mind" by Sylvia Nasar (1998)

Von Neumann, in other words, did not see the value in Nash’s result. Nash himself, however, would later defend the great man’s reaction in a letter to Robert Leonard, stating, characteristically analytically, “I was playing a non-cooperative game in relation to von Neumann rather than simply seeking to join his coalition. And of course, it was psychologically natural for him not to be entirely pleased by a rival theoretical approach”. Both von Neumann and Morgenstern ultimately did, however, provide Nash with valuable guidance, and in the published version Nash makes sure to acknowledge the role of both, writing “The author wishes to acknowledge the assistance of Professors von Neumann and Morgenstern who read the original form of the paper and gave helpful advice as to the presentation.”

The Nash Equilibrium (1950)

A few days after his meeting with von Neumann, Nash reportedly again “accosted” David Gale on campus:

“I think I’ve found a way to generalize von Neumann’s min-max theorem,” he blurted out. “The fundamental idea is that in a two-person zero-sum solution, the best strategy for both is … The whole theory is built on it. And it works with any number of people and doesn’t have to be a zero-sum game!”

Characteristically, as Nasar writes, Gale was less enchanted by the possible applications of the work than by the mathematics, stating in 1995 that “The mathematics was so beautiful. It was so right mathematically.”

"Gale realized that Nash’s idea applied to a far broader class of real-world situations than von Neumann’s notion of zero-sum games. ‘He had a concept that generalized to disarmament.’" - Excerpt, "A Beautiful Mind" by Sylvia Nasar (1998)

Gale also helped Nash claim credit for the result as soon as possible by drafting a note to the National Academy of Sciences. Lefschetz submitted the note on their behalf, and the result appeared as a note of less than a single page, entitled Equilibrium points in N-person games, in the 36th volume of the Proceedings of the National Academy of Sciences in January 1950.