Towards Massive On-Chain Scaling: Presenting Our Block Propagation Results With Xthin

Part 1 of 5: Methodology

By Andrew Clifford, Peter R. Rizun, Andrea Suisani (@sickpig), Andrew Stone and Peter Tschipper. With special thanks to Jihan Wu from AntPool for the block source and to @cypherdoc and our other generous donors for the funds to pay for our nodes in Mainland China.

The fracturing of Bitcoin development into several competing implementations bore fruit on 16 March 2016 with the release of Bitcoin Unlimited 0.12. Contained within this release was a new technology called Xtreme Thinblocks, or Xthin for short. Xthin fixed a longstanding inefficiency inherited from Bitcoin Core whereby transactions were often received twice by each node. Nodes supporting Xthin can propagate blocks using fewer bytes and in less time than nodes that rely on standard block propagation.

Fig. 1. Standard versus Xthin block propagation. Xthin fixes an inefficiency that exists in Bitcoin Core that results in transactions often being received twice by each node: once when the transaction is first broadcast by a user to the peer-to-peer network, and again when a solved block containing the now-confirmed transaction is found by a miner. Rather than requesting the block verbatim, an Xthin-equipped node images its mempool onto a Bloom filter that it sends with its “get data” request; the transmitting node sends the block contents by hash for all transactions that match the Bloom filter, and in full otherwise. In the unlikely event that the receiving node is unable to reconstruct the block, it requests the still-missing transactions, resulting in a second round trip between the nodes. For more information, please watch this video.
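The exchange in Fig. 1 can be sketched in a few lines of Python. This is purely illustrative: the real protocol is implemented in Bitcoin Unlimited's C++ code, and all class and function names below (`BloomFilter`, `build_thin_block`, `reconstruct`) are hypothetical. The key idea is that the sender replaces any transaction matching the receiver's Bloom filter with its short hash, and the receiver resolves those hashes against its own mempool.

```python
import hashlib

class BloomFilter:
    """Tiny Bloom filter over transaction IDs (illustrative only;
    not the wire format Bitcoin Unlimited actually uses)."""
    def __init__(self, size_bits=8192, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, item: bytes):
        # Derive num_hashes bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            h = hashlib.sha256(i.to_bytes(4, "little") + item).digest()
            yield int.from_bytes(h[:8], "little") % self.size

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(item))

def build_thin_block(block_txs, receiver_filter):
    """Sender side: transactions that match the receiver's filter are
    replaced by their hash; the rest are sent in full."""
    thin = []
    for tx in block_txs:
        txid = hashlib.sha256(tx).digest()
        if txid in receiver_filter:
            thin.append(("hash", txid))
        else:
            thin.append(("full", tx))
    return thin

def reconstruct(thin_block, mempool):
    """Receiver side: resolve hashes against the mempool. Any hash that
    cannot be resolved goes into `missing`, which in the real protocol
    triggers the second round trip described above."""
    by_id = {hashlib.sha256(tx).digest(): tx for tx in mempool}
    txs, missing = [], []
    for kind, payload in thin_block:
        if kind == "full":
            txs.append(payload)
        elif payload in by_id:
            txs.append(by_id[payload])
        else:
            missing.append(payload)
    return txs, missing
```

In typical operation the receiver has already seen nearly every transaction in the block, so almost all entries travel as compact hashes and `missing` is empty, which is why the second round trip is rare.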

The motivation behind Xthin is clear: as argued by Cornell researchers, block propagation between nodes is the bottleneck for on-chain scaling. Of particular concern is the propagation of blocks over the Great Firewall of China (GFC), which Jonathan Toomim reported is an order of magnitude slower than between nodes connected across the normal P2P network. Xthin is designed to address these issues.

For the past two months, we have been collecting empirical data regarding block propagation with and without Xthin — both across the normal P2P network and over the GFC. We have six Bitcoin Unlimited (BU) nodes running, including one located in Shenzhen and another in Shanghai, and we have collected data on the transmission and reception of over nine thousand blocks.

This post is part 1 of a 5-part series. It describes our experiment’s methodology. Part 2 — coming later this week — will show how Xthin blocks are significantly faster than standard blocks, while Part 3 will illustrate how Xthin blocks are less affected by the GFC. Part 4 will summarize the bandwidth savings that result from using Xthin, and Part 5 will conclude the series.

Methodology

The two variables of interest that we measured were the number of bytes and the length of time required to communicate a block.

In the case of an Xthin block, the number of bytes was measured by summing the Bloom filter size and the thin block size. In the case of a standard block, the uncompressed block size was measured.

Fig. 2. The number of bytes required to propagate a block was measured by summing the Bloom filter size and the thin-block size for blocks transmitted with the Xthin technique, and taken as the uncompressed block size for blocks transmitted using the standard technique.
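The byte accounting above amounts to simple addition; a minimal sketch (the helper name and keyword parameters are our own, not from the measurement code) makes the two cases explicit:

```python
def propagation_bytes(method: str, *, block_size: int = 0,
                      filter_size: int = 0, thin_size: int = 0) -> int:
    """Bytes attributed to one block transfer (hypothetical helper
    mirroring the accounting in Fig. 2)."""
    if method == "xthin":
        # Bloom filter sent with the "get data" request,
        # plus the thin block itself.
        return filter_size + thin_size
    if method == "standard":
        # The full, uncompressed block.
        return block_size
    raise ValueError(f"unknown method: {method}")
```

Note that the Xthin total charges the receiver's Bloom filter against the block, so any advantage reported for Xthin already pays for the filter overhead.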

The length of time required to communicate a block was measured by setting a timer immediately after the receiving node received notification that a new block was available (i.e., after it received the “inv” message) and stopping the timer when that block had been fully received and reconstructed. This applied for both standard and Xthin blocks. Starting the timer immediately after the “inv” message ensured that the time it took to construct the Bloom filter was included in the measurement.
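The timing window can be sketched as a small harness (again hypothetical; the real measurement is instrumented inside the node software). The clock starts as soon as the “inv” is processed, so everything that follows — Bloom filter construction, the request, the transfer, reconstruction, and any second round trip — is charged to the block:

```python
import time

def timed_block_receive(receive_fn):
    """Measure block reception the way described above: the timer starts
    immediately after the "inv" message and stops once the block is fully
    received and reconstructed. `receive_fn` stands in for the entire
    request/transfer/reconstruction sequence (hypothetical harness)."""
    start = time.monotonic()   # "inv" just processed: timer starts here,
                               # so Bloom-filter construction is included
    block = receive_fn()       # request, transfer, and reconstruction
    elapsed = time.monotonic() - start
    return block, elapsed
```

Starting the clock at the “inv” rather than at the “get data” request is what makes the comparison fair: the cost of building the Bloom filter counts against Xthin, not just the bytes on the wire.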