Thank you to everyone who took the survey. It was fun to see how it all panned out. If you haven’t taken the survey, please do (link). I changed some things to make it better. It’s short, most of the questions are games, and I think this write-up will be more interesting if you take the survey before you read it. For me, of course, more data is better.

As you read this, please try to think of other questions I could have asked, or better ways I could have answered mine. If you want to take a crack at answering some questions for yourself, the data is here. Let me know what you find!

Background

I presented survey-takers with 5 games that, to win, required different degrees of mathematical ability, ability to guess other people’s answers, and luck. The first question asked respondents to pick a number between 0 and 100 (inclusive) that they believed would be closest to 3/4ths the average of all numbers guessed for this question. This is a classic intro-to-game-theory question. Some people might anticipate that others will guess numbers randomly, making the average come out to 50, and 3/4ths of that average 37 or 38. Other people will take it a step further and guess that most people will pick 37 or 38, and therefore guess 3/4ths of that, which is 28. It goes on and on like this, such that someone who is thinking n steps ahead will guess 50*0.75^n — until you hit a number that is basically 0. If you were to play this game with only one other person, and you picked 0, you could not lose. (As we go along, I’ll call this the “Nash number,” even though it is not technically a pure strategy Nash equilibrium.) When playing with lots of people, as in this survey, however, a better strategy might be guessing what proportion of people are “n-step players.” If everyone were a 2-step player, for instance, they would all pick 28, and you could win by picking 21.
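For concreteness, the n-step ladder is easy to compute in a couple of lines. This is just a sketch; the cutoff for “basically 0” (below 1 here) is my own choice:

```python
# Guess of an n-step thinker in the 3/4-of-the-average game:
# step 0 assumes a uniform average of 50, and each further
# step of reasoning multiplies by 3/4.
def n_step_guess(n):
    return 50 * 0.75 ** n

# Print the ladder until the guess is "basically 0" (below 1 here).
n = 0
while n_step_guess(n) >= 1:
    print(n, round(n_step_guess(n)))  # 0 50, 1 38, 2 28, 3 21, ...
    n += 1
```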

The fifth game, which asks respondents to pick a number 1 greater than the average of all the numbers picked, follows similar principles, with n-step players choosing 10 + n. I asked both questions in part because I was curious whether people played consistently at a particular step.

The second game asks players to choose the number that they thought would be furthest from the average of all numbers picked for that question. This game interested me because, regardless of how you thought other people would play, there are only two possible winning answers: 0 and 100. (The number furthest from any given number on a segment of the number line will always be at one of the end points of that segment.) After realizing that the answer is going to be 0 or 100, picking the winning response is basically a coin flip.
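This endpoint logic is easy to verify in code. A minimal sketch (the function name is mine, and I break the tie at 50 toward 0 arbitrarily):

```python
# For any average m on [0, 100], the number furthest from m is
# whichever endpoint is on the opposite side of m: compare the
# distance to each end and take the larger.
def furthest_from(m, lo=0, hi=100):
    return lo if m - lo >= hi - m else hi

# Whatever the average turns out to be, the winning pick is 0 or 100.
assert all(furthest_from(m) in (0, 100) for m in range(101))
```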

The third and fourth questions asked players to pick the most and least frequently occurring responses respectively. Winning these games is all about anticipating the behavior of others.

I posted this survey on both /r/SampleSize and /r/GameTheory, from which I received 192 and 33 responses, respectively. As a screening question, I asked respondents to calculate a simple average. While this was intended to identify both bots and people who didn’t understand the games, the majority of incorrect responses had clues that they came from bots: advertising Bitcoin, or filling the comments box by typing “Bush did 9/11” repeatedly. (It’s the magic of the Internet that I can’t be sure whether some human did this.) Instead of cherry-picking, I dropped everyone who answered the average question incorrectly. This left me with 168 and 30 respondents, respectively. (I also didn’t get anyone who answered on Sunday, sorry guys.)

At the end of the survey, I asked players to provide a player name. I will be referencing these throughout.

The Scores

Congratulations to Adenine, CatWithHands, ejoran, Jim Burton, Polot38 and Sydanta! You all won with a score of 3 points. Adenine, Flea, KillerKiss and Jyan won after restricting the competition to just /r/SampleSize. If we look only at /r/GameTheory, then Polot38 and ejoran are still the victors, but are joined by Danskerfisken, who gains a point. Quarter_Twenty wrote that they were “twice as smart as half of me” and got 1 point. I can live with that.

As stated on the survey, one point was awarded for each correctly-answered question. A complete list of points by player name is here. The distribution of scores was as follows.

Distribution of Points Earned

The First and Fifth Games

Here is how the guesses broke down:

Distribution of Guesses in the First Game

The dashed vertical line is 3/4th of the mean. The bars in gold represent the approximate values of the expression 50*0.75^n for all values of n.

The winners for the first game chose 30, which, whether they employed this strategy or not, is roughly the answer that a 2-step thinker would give. When I saw this distribution, I thought the question might have been confusing because of the large spikes at 50 (the average) and 75 (3/4th of the maximum). As many people guessed 69 as guessed 0 (the “Nash number”).

Players from /r/GameTheory guessed about 10 numbers lower on average, but they also had a higher number of people (presumably trolling) who guessed near 100.

Distribution of Guesses in the Fifth Game

The dashed vertical line is at the mean plus one. The bars in gold represent the approximate values of the expression 10+n for all values of n.

As you can see, people were much more likely to guess the Nash number for this game. Perhaps I asked the question better, or it was easier to figure out what adding one to the mean would do. Maybe it’s because 69 wasn’t an option.

People from /r/GameTheory again guessed values closer to the Nash number. Their average guess was 15, whereas /r/SampleSize’s was 13. This two-point difference on a twenty-point scale is proportionally about the same as the ten-point difference on the hundred-point scale in the first game.

As I mentioned in the Background section, I was interested in seeing whether respondents employed consistent strategies in these games. My first cut at this question was using the n-step framework, but I immediately ran into trouble because of the difficulty of the math in the first game (50 times 3/4? Eh, I’ll just put 40). I tried to get around this by assuming guesses within (50*0.75^n) ± 3 were n-step guesses. Then what to do about the guesses that don’t fall cleanly on a step? Total mess. I’ll share this quickly just in case smarter people than me get anything out of the finer detail, and then we’ll move on.
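In case it helps anyone replicate the mess, here is a sketch of the window classification I described. The window width of 3 is from the text; how to handle the overlapping windows at high n (where adjacent steps sit closer than 6 apart) is my own arbitrary choice:

```python
# Classify a guess as n-step if it falls within 3 of 50 * 0.75**n.
# Windows overlap for large n because the steps bunch together near
# zero, which is part of why this approach fell apart; here the
# smallest matching n simply wins.
def classify_step(guess, max_n=15, window=3):
    for n in range(max_n + 1):
        if abs(guess - 50 * 0.75 ** n) <= window:
            return n
    return None  # doesn't fall cleanly on any step

print(classify_step(50))  # 0
print(classify_step(28))  # 2
print(classify_step(33))  # None: falls between steps 1 and 2
```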

Strategic Behavior (Take One)

Each cell in the primary table should be interpreted as the percentage of respondents who chose r in Game 1 who then chose c in Game 5. The separated row and column show the percentage of respondents who chose a strategy in each game. I won’t get into the categories, but they are mutually exclusive.

In my next attempt I binned the data into larger categories based on particular strategies that I imagined people might have. If you want to skip this block of text, the mapping of my strategy bins to the range of possible numbers in each game is summarized in the table below. “Unlikely” strategies (as in, unlikely to succeed) are, for Game 1, all guesses above what the average would be for uniformly distributed guesses (i.e., above 50). The “Step 0” strategy for Game 1 is simply this average, reflecting the belief that guesses will be uniformly distributed. The “Nash” strategy is picking the Nash number. The “Nash Fudge” strategy is picking numbers near the Nash number, reflecting the belief that most people will guess at or near the Nash number, but a handful will not. The “Plausible” strategy lies between the Step 0 and Nash Fudge strategies, perhaps reflecting the belief that strategic behavior will pull people towards the Nash number, but not all that close. (The winning pick fell in what, a priori, I called the “Plausible” range for both games.)

Strategy Bin Definitions

Ranges are inclusive
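To make the binning concrete, here is a hypothetical classifier for Game 1. The boundary values in this sketch are placeholders of my own, not the actual cutoffs from the table above:

```python
# Illustrative Game 1 bin classifier. The fudge_top boundary is a
# made-up placeholder, NOT the cutoff from the actual bin table.
def game1_bin(guess, nash=0, step0=50, fudge_top=5):
    if guess > step0:
        return "Unlikely"    # above the uniform-guess average
    if guess == step0:
        return "Step 0"      # exactly the uniform-guess average
    if guess == nash:
        return "Nash"
    if guess <= fudge_top:
        return "Nash Fudge"  # near, but not at, the Nash number
    return "Plausible"       # between Step 0 and Nash Fudge
```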

Now we get a table that looks like this:

Strategic Behavior (Take Two)

Each cell in the primary table should be interpreted as the percentage of respondents who chose r in Game 1 who then chose c in Game 5. The separated row and column show the percentage of respondents who chose a strategy in each game. I won’t get into the categories, but they are mutually exclusive.

Did respondents use a consistent strategy from Game 1 to Game 5? The short answer seems to be: “not most of them.” As we saw in the figures above, respondents generally guessed more Nash-like answers in Game 5. From this table we can infer that, if someone guessed the Nash number in Game 1, they had a 50% chance of guessing that number the second time around.

Is that a lot? Part of the issue with this table is that it gives no indication of whether we are seeing actual strategy or random variation. To get at that, I have a follow-up table that shows the percent difference between the percentages in the table above and the percentages one would expect if the numbers in each game were chosen completely independently of one another.

Deliberateness Test

See preceding text
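For anyone who wants to reproduce this kind of comparison, here is a sketch of the calculation with a made-up 2×2 table of counts (the real table has more bins, and these numbers are purely illustrative):

```python
# Toy joint-count table: rows are Game 1 bins, columns are Game 5
# bins. The counts are invented for illustration.
counts = [[20, 5],
          [10, 15]]

total = sum(sum(row) for row in counts)
observed = [[c / total for c in row] for row in counts]
row_marg = [sum(row) for row in observed]        # Game 1 marginals
col_marg = [sum(col) for col in zip(*observed)]  # Game 5 marginals

# Percent difference between each observed cell and what independence
# of the two games would predict (the product of the marginals).
# Positive values suggest a deliberate pairing of strategies.
pct_diff = [[100 * (observed[i][j] - row_marg[i] * col_marg[j])
             / (row_marg[i] * col_marg[j])
             for j in range(len(col_marg))]
            for i in range(len(row_marg))]

print([[round(x, 1) for x in row] for row in pct_diff])
```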

This table is the most interesting one to me. It shows, for instance, that people really liked picking 50 in Game 1 for some reason. And while this is no formal significance test, it doesn’t seem impossible that the people guessing between 0 and 50 for Game 1 were just picking a number below 50 at random. People who picked the Nash number both times really seemed to have a game plan. I won’t read off the whole table for you; you get the idea.

Of course, this method only allows me to look for the presence of a very particular set of strategies. Ideally, I would have liked to do something where the data told me what trends were there. Unfortunately, I know next to nothing about data analysis. If some kind, informed reader would like to suggest something in the comments, I’d be interested to learn.

The Second Game

Whew, I didn’t expect things to get this long. I just started typing — but now, here we are at the second game, where only 0 and 100 could win. This is how things worked out:

Distribution of Guesses in the Second Game

The dashed vertical line shows the mean

Many people, roughly 44% of respondents from /r/SampleSize and 72% from /r/GameTheory, realized that only 0 or 100 could win. I suspect that if I were casually taking a survey on Reddit, I would not have caught on. The surprising thing to me, however, is the number of people who picked numbers close to, but not at, the ends. They seem to have figured out that being towards the ends was better without realizing that the ends were best. Perhaps people thought that their cumulative score would be calculated based on how far away they were from each correct answer. (Sorry about the confusion.) That might also explain the people who said 50. The histogram above shows the full spread of responses.

Self-assessed math skill was not a particularly good indicator of who would choose one of the poles, although it was not uncorrelated.

Percentage of “Correct” Answers by Self-Assessed Math Skill

Respondents were asked to rate themselves on their mathematical ability with 1 meaning “Terrible” and 6 meaning “Excellent.” The size of the dots represents the percentage of people who placed themselves in each category. A “correct” answer is defined as one that has a mathematical chance of winning.

The take-away from this figure is that roughly 30% of people who believe themselves to be excellent at math did not choose one of the two mathematically possible answers. Is it bad luck? Hubris? Or are they overthinking it? (E.g., considering a notion of distance in which 0 and 100 are adjacent.) Shout out to the one respondent who said they were terrible at math, but chose 0 for this question. You stay humble, Ramanujan!

The Third Game

In attempting to choose the most commonly selected number, Supersalsa wrote “Reddit better not let me down with its 69 immaturity.” Surprisingly for me — but I guess not for a plurality of respondents — it did.

As the histogram below shows, 50 was the most commonly selected number, representing about 27% of all entries. Sixty-nine came in second with 20% of the vote, and 42 came in third at about 8.6%. Although 42 is the answer to life, the universe, and everything, it gets you no points on this question.

Distribution of Guesses in the Third Game

Getting the correct answer on this question seems to be uncorrelated with self-assessed skill at predicting the behavior of others (which I’ll loosely call “People Skills”). Another take-away from the figure below is that Reddit is much less confident in its people skills than its math skills.

Percentage of Correct Answers by Self-Assessed People Skills

Respondents were asked to rate themselves on their ability to predict the behavior of others with 1 meaning “Terrible” and 6 meaning “Excellent.” The size of the dots represents the percentage of people who placed themselves in each category.

The Fourth Game

Anyone who has ever had to pick a number to decide who gets the last slice of pizza has their own theories about how people pick “random” numbers. If respondents were impossibly good at this game, then the distribution of numbers picked would approximate a uniform distribution. With 198 respondents, this would have resulted in only 2 winners.
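A quick simulation shows how far genuinely random play lands from that ideal. I assume a 0–100 range here, matching Game 1 (the post doesn’t restate Game 4’s range):

```python
import random
from collections import Counter

random.seed(0)  # for reproducibility

# 198 respondents each pick a "random" number on an assumed
# 0-100 range.
picks = [random.randint(0, 100) for _ in range(198)]

# Winners are everyone who picked a number tied for the lowest
# (nonzero) count among the numbers actually picked.
counts = Counter(picks)
fewest = min(counts.values())
winners = sum(1 for p in picks if counts[p] == fewest)
print(winners)  # under random play, usually a few dozen winners
```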

In reality, there were 33 winners. Lots of people picked 73 and 1. Again (although it is not shown), self-assessed people skills were uncorrelated with the ability to select an infrequently occurring number.

Distribution of Guesses in the Fourth Game