Thinblock Relay Network:

1. Node A creates a bloom filter seeded with the contents of its memory pool.

2. Node A sends the bloom filter along with a getdata request to Node B.

3. Node B sends back a "thinblock" which contains the block header information, all the transaction hashes that were contained in the block, and any transactions that do not match the bloom filter which Node A had sent.

4. Node A receives the "thinblock" and reconstructs the block using transactions from its own memory pool together with the transactions supplied in the thinblock (a minimal sketch of this reconstruction follows the list).

5. If any transactions are still missing, Node A sends a "CThinBlockTx" request. This contains a map of the missing transaction hashes seeded with null transactions.

6. Node B, upon receiving the "CThinBlockTx" request, takes the object and fills in the transaction blanks, getting the transactions from the block on disk rather than from memory (in this way we can be sure the transactions can be accessed, as they may already have been purged from memory or may never have been announced). Once the blanks are filled in, the object is sent back to Node A and the block is finally reconstructed.
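
A minimal C++17 sketch of the reconstruction in step 4, under simplified assumptions: the memory pool is modeled as a map from full transaction hash to serialized transaction, and the thinblock carries the header, the ordered transaction hashes, and the transactions that missed the filter. The type and function names (ThinBlock, ReconstructBlock, and so on) are illustrative stand-ins, not the actual implementation's classes:

    #include <cstdint>
    #include <map>
    #include <optional>
    #include <string>
    #include <vector>

    // Illustrative stand-ins; the real implementation uses uint256,
    // CTransaction, and so on.
    using TxHash = std::string;               // hex-encoded 256-bit hash
    using Transaction = std::vector<uint8_t>; // serialized transaction bytes

    struct ThinBlock {
        std::vector<uint8_t> header;          // 80-byte block header
        std::vector<TxHash> txHashes;         // every tx hash, in block order
        std::map<TxHash, Transaction> missed; // txs that missed the bloom filter
    };

    struct ReconstructedBlock {
        std::vector<uint8_t> header;
        std::vector<Transaction> txs;
    };

    // Rebuild the block from the thinblock plus the local memory pool.
    // Returns the block, or std::nullopt along with the hashes that are
    // still missing so the caller can issue the step-5 re-request.
    std::optional<ReconstructedBlock> ReconstructBlock(
        const ThinBlock& thin,
        const std::map<TxHash, Transaction>& mempool,
        std::vector<TxHash>& stillMissing)
    {
        ReconstructedBlock block;
        block.header = thin.header;
        for (const TxHash& h : thin.txHashes) {
            // Prefer the copy supplied in the thinblock, then the mempool.
            if (auto it = thin.missed.find(h); it != thin.missed.end())
                block.txs.push_back(it->second);
            else if (auto it2 = mempool.find(h); it2 != mempool.end())
                block.txs.push_back(it2->second);
            else
                stillMissing.push_back(h); // to be re-requested from Node B
        }
        if (!stillMissing.empty())
            return std::nullopt;
        return block;
    }

Any hashes left in stillMissing are exactly what the "CThinBlockTx" re-request in step 5 would carry.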

⦁ A new protocol version number

⦁ If the thinblocks feature is turned off, thinblocks will not be downloaded, but requests for thinblocks will still be serviced.

⦁ The coinbase transaction will always be included in the thinblock; as "dagurval" from the XT project discovered, this was the most common missing transaction, absent roughly 80% of the time.

⦁ During startup, when the memory pool has few transactions in it, or when a block is very small and has only one or two transactions, a thinblock may end up being larger than the regular block. In that case a regular block will be returned to the requester instead of a thinblock; this typically happens when a new block is mined just seconds after the previous one (see the fallback sketch after this list).

⦁ Bloom Size Decay algorithm: A useful phenomenon occurs as the memory pools grow and get closer in sync: the bloom filter can be allowed to become less sparse. That means more false positives, but because the memory pool has been "warmed up" there is now a very low likelihood of missing a transaction. This bears out in practice, and a simple linear decay algorithm was developed which alters both the number of elements and the false positive rate. However, not knowing how far out of sync the pools are in practice means we cannot calculate with certainty the probability of a false positive or a memory pool miss (which results in a re-requested transaction), so we need to be careful not to cut too fine a line. This approach reduces the needed size of the bloom filter by roughly 50% (a sketch of the decay follows this list).
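
As a hedged illustration of the fallback in the previous item, the decision presumably comes down to comparing the serialized sizes of the two forms before responding (names here are illustrative):

    #include <cstddef>

    // Illustrative policy: only answer with a thinblock when it is actually
    // smaller than the regular block it describes.
    bool ShouldSendThinBlock(std::size_t thinBlockBytes, std::size_t fullBlockBytes)
    {
        return thinBlockBytes < fullBlockBytes;
    }

The decay itself might look like the following sketch. The exact constants and formula are not given above, so the warm-up threshold, the element reduction, and the false-positive range below are all assumptions, chosen only to show the shape of a linear decay:

    #include <algorithm>
    #include <cstddef>

    struct BloomParams {
        std::size_t nElements;    // number of elements the filter is sized for
        double falsePositiveRate; // tolerated false-positive rate
    };

    // Linearly relax the filter as the memory pool "warms up": a fuller pool
    // means fewer sized-for elements and a higher tolerated FP rate.
    // All constants are assumptions, not the implementation's values.
    BloomParams DecayedBloomParams(std::size_t mempoolTxCount)
    {
        const std::size_t warmupTxs = 10000; // point of full decay (assumed)
        const double t =
            static_cast<double>(std::min(mempoolTxCount, warmupTxs)) / warmupTxs;

        BloomParams p;
        // Size for as few as half the pool's entries once fully warmed up...
        p.nElements = static_cast<std::size_t>(mempoolTxCount * (1.0 - 0.5 * t));
        // ...and let the FP rate drift up from 0.01% to at most 0.1%.
        p.falsePositiveRate = 0.0001 + t * (0.001 - 0.0001);
        return p;
    }

Capping t at the warm-up threshold bounds the false-positive rate, which is the guard against cutting too fine a line mentioned above.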

A 64-bit transaction hash is used instead of the full 256-bit hash to further reduce thinblock size while still preventing hash collisions in the memory pool.
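
For illustration, such a short hash can be taken from the leading 8 bytes of the full hash (the exact derivation is not specified above, so treat this as an assumption). With n transactions in the memory pool, the chance of any two 64-bit keys colliding is roughly n^2 / 2^65, negligible for realistic pool sizes:

    #include <array>
    #include <cstdint>
    #include <cstring>

    using FullHash = std::array<uint8_t, 32>; // 256-bit tx hash

    // Illustrative: use the first 8 bytes of the full hash as a 64-bit key
    // for memory pool lookups and for the hashes carried in a thinblock.
    uint64_t ShortTxHash(const FullHash& h)
    {
        uint64_t key;
        std::memcpy(&key, h.data(), sizeof(key));
        return key;
    }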

connect-thinblock=<ip>

Test Setup:

use-thinblocks=0
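
For illustration only, the two settings shown above would sit in a node's bitcoin.conf (though presumably not together, since one disables the feature the other relies on). The exact semantics of connect-thinblock are not spelled out above, so the comments here are assumptions:

    # Hypothetical test configuration (illustrative values)
    # Do not download thinblocks; requests from peers are still serviced.
    use-thinblocks=0
    # Assumption: connect to a specific thinblock-capable peer.
    connect-thinblock=10.0.0.2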



Datastream compression: Testing of various compression libraries such as LZO-1x and Zlib has shown it is possible to further reduce block and transaction sizes by 20 to 30% without affecting response times; this could also be applied to thinblocks.
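
As a hedged sketch of the idea (not the benchmark code referred to above), zlib's one-shot API can compress a serialized block or transaction buffer before it is sent; LZO-1x would be wired in similarly through its own API:

    #include <cstdint>
    #include <vector>
    #include <zlib.h>

    // Illustrative: compress a serialized payload with zlib before sending.
    // Z_BEST_SPEED favors latency so response times are not affected.
    // Returns an empty vector on failure.
    std::vector<uint8_t> CompressPayload(const std::vector<uint8_t>& data)
    {
        uLongf destLen = compressBound(static_cast<uLong>(data.size()));
        std::vector<uint8_t> out(destLen);
        if (compress2(reinterpret_cast<Bytef*>(out.data()), &destLen,
                      reinterpret_cast<const Bytef*>(data.data()),
                      static_cast<uLong>(data.size()), Z_BEST_SPEED) != Z_OK)
            return {};
        out.resize(destLen);
        return out;
    }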



Bloom Filters: Further research could be done to reduce the size of Bloom Filters, either by compressing sparse filters or by developing a better decay algorithm.



Tx Hashes: Further work could be done to reduce the size of the tx hashes and make a smaller thinblock. @YarkoL has come up with a framework that could be implemented in a phase 2.