A while ago, the popular data journalism site 538 posted a challenging probability puzzle:

On the table in front of you are two coins. They look and feel identical, but you know one of them has been doctored. The fair coin comes up heads half the time while the doctored coin comes up heads 60 percent of the time. How many flips — you must flip both coins at once, one with each hand — would you need to give yourself a 95 percent chance of correctly identifying the doctored coin?

The question proved so difficult, in fact, that 538’s talented puzzle master Oliver Roeder gave an incorrect answer. Someone smarter than me noticed this, and then we worked together to verify with r/statistics and notify those who may have cared (but didn’t), including the authors of a paper Oliver cited. My question to Reddit contains R code that traces exactly where they went wrong.

This kind of puzzle is a classic in statistics textbooks because it uses a trivial setup - flipping coins - to stand in for more meaningful questions. The solutions are usually found by comparing hypothetical distributions.
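That comparison of distributions can be made concrete. Here is a short Python sketch (the original analysis used R, which I won't reproduce here) that computes the exact probability of naming the doctored coin after n paired flips, under the simplest decision rule: pick whichever coin showed more heads, and guess on a tie. The 0.5 and 0.6 head probabilities come from the puzzle statement; the decision rule is one common reading of the problem, not necessarily the one 538 or the paper's authors had in mind.

```python
from math import comb

def pmf(n, p):
    """Binomial probability mass function: P(k heads in n flips) for k = 0..n."""
    return [comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def prob_correct(n, p_fair=0.5, p_biased=0.6):
    """Exact probability of correctly naming the doctored coin after n
    paired flips, using the rule: pick the coin with more heads,
    and flip a mental coin on a tie."""
    f = pmf(n, p_fair)
    b = pmf(n, p_biased)
    # Doctored coin strictly ahead: sum over its head count i of the
    # chance the fair coin showed fewer than i heads.
    win = sum(b[i] * sum(f[:i]) for i in range(n + 1))
    # Both coins show the same number of heads: a guess is right half the time.
    tie = sum(b[i] * f[i] for i in range(n + 1))
    return win + 0.5 * tie

# Smallest n that clears the 95 percent bar under this rule:
n = 1
while prob_correct(n) < 0.95:
    n += 1
print(n, prob_correct(n))
```

With one paired flip, for example, the rule works out to a 0.55 chance of being right: 0.3 from the doctored coin landing heads while the fair coin lands tails, plus half of the 0.5 tie probability. The loop at the end just walks n upward until the exact probability crosses 0.95.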

But I wanted to think about this less abstractly: what if you really were sitting in front of two coins, knowing that one has a slight bias? What would be the most efficient way to find out, and how long would it take?