Let’s start with a note on naming: I frankly don’t like the name selfish mining. All blockchain mining is selfish and is done to generate revenue. A more accurate, self-explanatory name for what was termed ‘selfish mining’ would be ‘stealth mining’.

I believe Aviv Zohar drew attention to the fact that stealth mining can be used to mount a double-spending attack. Basically, you keep mining blocks on a side chain, apart from the main chain. Since the rule in blockchain is that the longest valid chain is regarded as the ledger, if your chain happens to become longer than the public consensus chain, its transactions will be accepted instead of the ones on the global chain. Say you have 25% of the mining power in the network. Within roughly 4096 10-minute epochs, about 28 days, you will get such an opportunity: you will be 6 blocks ahead of the main chain. In that case, you could arrange a transfer of a large amount of bitcoin in exchange for USD, then wait the customary 6 epochs for the transaction to be considered accepted. Then you pull the rabbit out of the hat: you publish your stealth chain, override the main chain, and obsolete the bitcoin transfer you made before. You run away with your USD and spend the rest of your life in Guatemala, for undisclosed reasons.
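As a quick sanity check on these timescales, here is the epoch-to-days arithmetic (a minimal sketch; the epoch count itself is the estimate from the text, not something this snippet derives):

```python
# One bitcoin mining epoch is roughly one block, i.e. ~10 minutes.
def epochs_to_days(epochs, minutes_per_epoch=10):
    return epochs * minutes_per_epoch / (60 * 24)

print(epochs_to_days(4096))  # ~28.4 days
```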

But how long will it take you to pull off this get-rich-quick scheme?

For this, I created a calculator for your use, which can be found on GitHub.

The interesting part is explaining the math behind this calculation. What you actually have here is a sort of random walk. With probability p, p being your share of the total mining power, you mine a block before the rest of the world does. With probability 1-p, they mine one before you. If you reach 6, that is, you manage to get 6 blocks ahead of the rest of the world, you win. This is similar to the gambler’s ruin problem: how long will it take a gambler with finite resources X to exhaust them gambling against the casino? But there are a few differences. First and foremost, whenever the rest of the world gains an advantage over you, you can reset to their chain. No harm done, you start again. So the random walk can never go into the negative side of the axis. Given that, you can’t use the standard random walk formulas in a straightforward way.

So, you build a recursive formula for F(n,k), the probability of being k steps away from the origin after n epochs of the random walk. We limit ourselves to k <= 6. The formulas look like this:

F(n,k) = pF(n-1,k-1) + (1-p)F(n-1,k+1)

F(n,0)= (1-p)F(n-1,0) + (1-p)F(n-1,1)

There are other edge cases. For example, F(1,2) is obviously 0: you can’t get to a distance of 2 from the origin in a single step.
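The recurrence and its edge cases can be written down directly (a minimal sketch for a fixed numeric p, not the symbolic-in-p computation described later; memoization keeps the naive recursion from blowing up):

```python
from functools import lru_cache

P = 0.25  # your share of the total mining power

@lru_cache(maxsize=None)
def F(n, k):
    """Probability of being exactly k blocks ahead of the main chain
    after n epochs of the reset-at-zero random walk."""
    if k < 0 or k > n:   # e.g. F(1,2) = 0: can't be k ahead in fewer than k epochs
        return 0.0
    if n == 0:
        return 1.0 if k == 0 else 0.0
    if k == 0:           # stay at 0 (reset) or fall back from a lead of 1
        return (1 - P) * (F(n - 1, 0) + F(n - 1, 1))
    return P * F(n - 1, k - 1) + (1 - P) * F(n - 1, k + 1)
```

Since no probability mass is lost in this uncapped version, the values at any n sum to 1, which is a handy check on the recurrence.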

Our ultimate goal would be to represent F(n,k) not as a recursive function depending on other values of F but as an explicit formula in terms of p.

Sadly enough, this is hard math.

But there’s a more straightforward solution using dynamic programming. Instead of executing the formula recursively, which results in a computation of exponential complexity, you can look at the sieve created by F and fill it in from the lower (n,k) pairs upwards. It then becomes an operation of polynomial complexity.

By way of dynamic programming, you can calculate sieve values from lower (n,k) pairs upwards
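The bottom-up sieve looks like this for a fixed numeric p (a minimal sketch, not the repository’s actual code; k is capped at 6 using the boundary approximation discussed further down):

```python
def sieve(p, n_max, k_max=6):
    """Bottom-up sieve: row n holds F(n,k), the probability of being
    k blocks ahead after n epochs, computed from row n-1 alone."""
    prev = [0.0] * (k_max + 1)
    prev[0] = 1.0                          # F(0,0) = 1
    table = [prev]
    for _ in range(n_max):
        cur = [0.0] * (k_max + 1)
        cur[0] = (1 - p) * (prev[0] + prev[1])
        for k in range(1, k_max):
            cur[k] = p * prev[k - 1] + (1 - p) * prev[k + 1]
        # capped boundary: F(n,6) = p*F(n-1,5) + p*F(n-1,6)
        cur[k_max] = p * prev[k_max - 1] + p * prev[k_max]
        table.append(cur)
        prev = cur
    return table

table = sieve(0.25, 6)
print(table[6][6])   # 0.25**6: you must win all six epochs to be 6 ahead after 6
```

Each row only reads the row before it, so the whole table costs O(n_max * k_max) operations.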

I implemented this sieve computation in Python to assess, for different values of p, the time needed to mount the attack, in bitcoin mining epochs. Basically, I keep a NumPy array holding the coefficients of the powers of p for every (n,k) combination, and run inductively to calculate F for growing values of n. For memory reasons and simplicity, I cap k at 6, so the boundary approximation is F(n,6) = pF(n-1,5) + pF(n-1,6) instead of extending into higher values of k. This approximation actually reduces the attack success probability, so you should expect a slightly higher epoch estimate. I left the code open to extension to higher values of k, say 12.
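Here is what the coefficient representation can look like (a minimal sketch, not the repository’s actual code: each F(n,k) is stored as an array of coefficients by increasing power of p, so multiplying by p is just a one-position shift of the array):

```python
import numpy as np

K_MAX = 6    # cap on the lead, as in the post
N_MAX = 50   # F(n,k) has degree at most n in p, so 50 epochs fit here

def mul_p(c):
    """Multiply a polynomial in p (coefficients by increasing power) by p."""
    out = np.zeros_like(c)
    out[1:] = c[:-1]
    return out

def step(prev):
    """One epoch: prev[k] holds the coefficient array of F(n-1,k)."""
    cur = np.zeros_like(prev)
    s = prev[0] + prev[1]
    cur[0] = s - mul_p(s)                  # (1-p) * (F(n-1,0) + F(n-1,1))
    for k in range(1, K_MAX):
        cur[k] = mul_p(prev[k - 1]) + prev[k + 1] - mul_p(prev[k + 1])
    # capped boundary: F(n,6) = p*F(n-1,5) + p*F(n-1,6)
    cur[K_MAX] = mul_p(prev[K_MAX - 1]) + mul_p(prev[K_MAX])
    return cur

coeffs = np.zeros((K_MAX + 1, N_MAX + 1), dtype=np.longdouble)
coeffs[0, 0] = 1.0                         # F(0,0) = 1
for _ in range(6):
    coeffs = step(coeffs)

# Reaching a 6-block lead in exactly 6 epochs means winning every epoch,
# so the polynomial F(6,6) is just p**6:
p = np.longdouble(0.25)
powers = p ** np.arange(N_MAX + 1)
print(float(coeffs[K_MAX] @ powers))       # 0.000244140625 == 0.25**6
```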

There are a few issues you run into pretty quickly when you implement it:

Once you have the sieve, how do you find the point at which the probability is good enough? For this I used a nice generic binary search excerpt I found here.
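A generic binary search over a monotone predicate looks roughly like this (a minimal sketch in the spirit of that excerpt, not the excerpt itself; it assumes the success probability is non-decreasing in n):

```python
def first_true(lo, hi, pred):
    """Smallest n in [lo, hi) with pred(n) True, assuming pred is
    monotone (False ... False True ... True); returns hi if none."""
    while lo < hi:
        mid = (lo + hi) // 2
        if pred(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

# e.g. the first epoch count for which the attack probability passes a
# threshold, given some hypothetical sieve lookup F(n, 6):
#     first_true(0, n_max, lambda n: F(n, 6) >= 0.5)
```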

Memory size. F’s coefficient arrays grow linearly with n, so F(10000,k) already has 10K coefficients, each of the largest float size available, 16 bytes. That’s 160KB for every point in the sieve, and given the sieve is about 60K points at that size, this could add up to roughly 1GB of RAM … That’s a lot. To tackle it, I use a sliding window over the sieve. I calculate the first 200 n-values and check whether the probability is good enough. If it is, I do a binary search on the current window; if not, I generate the next window of 200 n-values. The nice thing about the recursive formula is that the calculation for n depends only on the values for n-1, so you don’t actually need to hold everything in memory at once.
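The only-the-previous-row idea can be sketched as a generator that yields F(n, 6) one epoch at a time (a minimal sketch with a fixed numeric p; the actual code slides windows of 200 rows of coefficient arrays, but the memory argument is the same):

```python
def success_prob_stream(p, k_max=6):
    """Yield F(n, k_max) for n = 1, 2, ..., keeping only the previous
    row of the sieve in memory: row n depends only on row n-1."""
    prev = [0.0] * (k_max + 1)
    prev[0] = 1.0                          # F(0,0) = 1
    while True:
        cur = [0.0] * (k_max + 1)
        cur[0] = (1 - p) * (prev[0] + prev[1])
        for k in range(1, k_max):
            cur[k] = p * prev[k - 1] + (1 - p) * prev[k + 1]
        cur[k_max] = p * prev[k_max - 1] + p * prev[k_max]
        prev = cur
        yield cur[k_max]

stream = success_prob_stream(0.25)
probs = [next(stream) for _ in range(6)]
print(probs[5])   # F(6,6) = 0.25**6
```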

Floating point issues. You hit these pretty quickly too. I just used NumPy’s longdouble, which is supposedly the most accurate float available, rather than implementing my own float type, which would be cumbersome.
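Note that what `np.longdouble` actually gives you is platform-dependent: on most Linux x86-64 builds it is an 80-bit extended-precision float, but on some platforms it is just float64. A minimal check:

```python
import numpy as np

# Machine epsilon tells you the relative precision of each type.
print(np.finfo(np.longdouble).eps)   # platform-dependent
print(np.finfo(np.float64).eps)      # 2.220446049250313e-16
# longdouble is never less precise than float64:
assert np.finfo(np.longdouble).eps <= np.finfo(np.float64).eps
```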

Enjoy the simulator!