I want to talk about two important things that programmers sometimes overlook, algorithms and specifications, and I will use the DAA implementation as a real example.

For four years now I have maintained a multi-currency node (BCH, BTC and LTC) called Knuth (a.k.a. Bitprim).

In November 2017, Bitcoin Cash made its first protocol change since its birth in August of the same year. My job at that time was to update the code of our node to support the protocol changes.

The most important change of the Nov 2017 HF was the new Difficulty Adjustment Algorithm, from now on DAA.

Here is the description of the algorithm.

I do not want to go into detail about the concept of difficulty, nor to evaluate whether the DAA is the best algorithm for adjusting difficulty; I want to review how it was specified and implemented.

My main interest is in points 2 and 3 of the DAA description:

2. Let B_last be chosen[2] from [B_n-2, B_n-1, B_n].
3. Let B_first be chosen[2] from [B_n-146, B_n-145, B_n-144].

Both point to footnote [2]:

2. A block is chosen via the following mechanism:
   Given a list: S = [B_n-2, B_n-1, B_n]
   a. If timestamp(S[0]) greater than timestamp(S[2]) then swap S[0] and S[2].
   b. If timestamp(S[0]) greater than timestamp(S[1]) then swap S[0] and S[1].
   c. If timestamp(S[1]) greater than timestamp(S[2]) then swap S[1] and S[2].
   d. Return S[1].
   See GetSuitableBlock

The specification of the algorithm points to its implementation, a function called GetSuitableBlock. Here is the code:

/**
 * To reduce the impact of timestamp manipulation, we select the block we are
 * basing our computation on via a median of 3.
 */
static const CBlockIndex *GetSuitableBlock(const CBlockIndex *pindex) {
    assert(pindex->nHeight >= 3);

    /**
     * In order to avoid a block with a very skewed timestamp having too much
     * influence, we select the median of the 3 top most blocks as a starting
     * point.
     */
    const CBlockIndex *blocks[3];
    blocks[2] = pindex;
    blocks[1] = pindex->pprev;
    blocks[0] = blocks[1]->pprev;

    // Sorting network.
    if (blocks[0]->nTime > blocks[2]->nTime) {
        std::swap(blocks[0], blocks[2]);
    }

    if (blocks[0]->nTime > blocks[1]->nTime) {
        std::swap(blocks[0], blocks[1]);
    }

    if (blocks[1]->nTime > blocks[2]->nTime) {
        std::swap(blocks[1], blocks[2]);
    }

    // We should have our candidate in the middle now.
    return blocks[1];
}

The algorithm basically creates a sequence, sorts it, and returns the middle element.

The complexity in time of this algorithm is:

Best case: 0 swaps, 3 comparisons

Worst case: 2 swaps, 3 comparisons

Average case: 7/6 swaps, 3 comparisons; assuming a uniform distribution of the input data. (*)

Now, look again at the algorithm. An array is created (from the input data), then it is sorted and the middle element is returned. This is a well-known algorithm called the median, in this case the median of 3 elements.

The median is a selection algorithm. Unlike (in-place) sorting algorithms, selection algorithms are expected not to modify the input data; they just return one of the elements of the input sequence.

Here is a sketch of how the median-of-3 algorithm can be correctly implemented in C++:

template <TotallyOrdered T>
auto median_3_ab(T const& a, T const& b, T const& c) {
    // precondition: a <= b
    return ! (c < b) ? b          // a, b, c are sorted
                     : max(a, c); // b is not the median
}

template <TotallyOrdered T>
auto median_3(T const& a, T const& b, T const& c) {
    return b < a ? median_3_ab(b, a, c)
                 : median_3_ab(a, b, c);
}

(I use PascalCase for naming C++ concepts, like TotallyOrdered, the same convention as in EoP.)

I leave the analysis of the code to the reader. For the lazy: the algorithm simply selects the middle element among a, b and c, as if the three were sorted in ascending order, and it does this without mutating or reordering the input data.

The time complexity of median_3 is:

Best case: 0 swaps, 2 comparisons

Worst case: 0 swaps, 3 comparisons

Average case: 0 swaps, 8/3 comparisons; assuming a uniform distribution of the input data.

Now, we could use our new algorithm in the original GetSuitableBlock function:

static CBlockIndex const* GetSuitableBlockNewVersion(CBlockIndex const* pindex) {
    assert(pindex->nHeight >= 3);
    return &median_3(*pindex->pprev->pprev, *pindex->pprev, *pindex);
}

Before continuing, we have to fix something:

We do not know how the natural ordering of the CBlockIndex class is defined, and we need to order by the timestamp of the block (the nTime attribute).

We need a version of median_3 that accepts a comparison specified by the user: the function has to accept a strict weak ordering relation (for more information see here). So:

template <Regular T, StrictWeakOrdering R>
auto median_3_ab(T const& a, T const& b, T const& c, R r) {
    // precondition: a <= b
    return ! r(c, b) ? b             // a, b, c are sorted
                     : max(a, c, r); // b is not the median
}

template <Regular T, StrictWeakOrdering R>
auto median_3(T const& a, T const& b, T const& c, R r) {
    return r(b, a) ? median_3_ab(b, a, c, r)
                   : median_3_ab(a, b, c, r);
}

Now we can correctly implement GetSuitableBlockNewVersion, comparing by nTime:

static CBlockIndex const* GetSuitableBlockNewVersion(CBlockIndex const* pindex) {
    assert(pindex->nHeight >= 3);
    return &median_3(*pindex->pprev->pprev, *pindex->pprev, *pindex,
                     [](auto const& a, auto const& b) {
                         return a.nTime < b.nTime;
                     });
}

Well, I think we are ready! hmm... wait...

Let's check one more thing:

struct CBlockIndex {
    size_t nHeight;
    size_t nTime;
    CBlockIndex* pprev;
};

int main() {
    CBlockIndex ba {1, 1558731501, nullptr};
    CBlockIndex bb {2, 1558731501, &ba};    // note: same nTime as the previous one
    CBlockIndex bc {3, 1558731500, &bb};

    auto r = GetSuitableBlockNewVersion(&bc);
    cout << "GetSuitableBlockNewVersion: " << r->nHeight << endl;

    r = GetSuitableBlock(&bc);
    cout << "GetSuitableBlock: " << r->nHeight << endl;
}

The code above prints:

GetSuitableBlockNewVersion: 1
GetSuitableBlock: 2

With the previous example, I am examining the stability of both algorithms. Our median_3 algorithm is stable, which means that the relative order of equivalent elements is preserved (for more information see here).

Let's look at it another way. Given the following sequence, called s:

s = [{1, 1558731501}, {2, 1558731501}, {3, 1558731500}]

Where the first element of each pair is nHeight and the second is nTime. Note, again, that the nTime of the first two elements is the same.

If we sort s by nTime using a stable sorting algorithm, such as merge sort, we get the following result:

s = [{3, 1558731500}, {1, 1558731501}, {2, 1558731501}]

Note that the middle element is the one with nHeight = 1. This indicates that our implementation, GetSuitableBlockNewVersion, behaves in a stable manner, while the algorithm included in the DAA description, GetSuitableBlock, is unstable.

When I was implementing the DAA in my node, I used a stable median-of-3 algorithm, similar to the median_3 above. Since I had not thoroughly examined the GetSuitableBlock code, I mistakenly assumed it was a stable algorithm.

So, our node stopped being compatible with Bitcoin ABC's consensus rules. Luckily, I was able to detect the divergence between the two algorithms, and I then had to change my algorithm to make it unstable, matching the Bitcoin ABC implementation.

Actually, if I remember correctly, the first version of the DAA description (which I relied on to implement it in my node, and which was released just a few days before the HF) did not mention the GetSuitableBlock code; it simply said that the median of 3 elements was calculated. Since Bitcoin ABC's implementation of the median algorithm was "incorrect", they had to adapt the "specification" to be consistent with the code.

So, the author of the DAA code could have chosen a known and "standard" algorithm, but he did not. And perhaps worst of all, they had to fix the DAA specification by pointing it to the code.

The code must never be the specification; the code must be written from a specification. If a specification refers to the code, it cannot be considered a specification at all.

To conclude, a comparison of both algorithms, GetSuitableBlock vs. median_3:

median_3 does not perform any swaps; GetSuitableBlock performs between 0 and 2 swaps (7/6 on average), unnecessarily. (Efficiency)

GetSuitableBlock creates an array, unnecessarily. (Efficiency)

median_3 performs 2 or 3 comparisons (8/3 on average), while GetSuitableBlock always performs 3 comparisons. (Efficiency)

median_3 is stable, GetSuitableBlock is not.

median_3 is what anyone expects from an algorithm that calculates the median of 3 elements. (Correctness)

So guys, programming gods do not exist.

Bye!

Just a clarification:

When I compare both algorithms, I am not trying to optimize prematurely; I am trying to avoid ugly code when much more beautiful code can be used, and also to avoid premature pessimization.

Avoiding premature optimization does not imply gratuitously hurting efficiency or code beauty.

In the following articles we will:

- Analyze how the other nodes have implemented the DAA.

- (*) Count the number of comparisons and swaps using real data (the BCH blockchain).

A.A. Stepanov and P. McJones. Elements of Programming. Addison-Wesley Professional, 2009.

D.E. Knuth. The Art of Computer Programming Volume 3: Sorting and Searching, 1998.