BUIP 101 proposes to set the default value of the max blocksize limit in the Bitcoin Unlimited client to 10 terabytes. That vast amount of data makes fuses blow in the heads of great engineers and scientists. This must be madness, right?

10 terabyte blocks would provide a lot of transactions on the Bitcoin Cash network. Joannes Vermorel, CEO of Lokad, estimates in this great talk that it would give a capacity of 500 transactions per day for each of 10 billion people.
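That estimate is easy to sanity-check with back-of-the-envelope arithmetic. The average transaction size of 250 bytes below is my own assumption, not a figure from Vermorel's talk:

```python
# Rough capacity check for 10 TB blocks (250-byte average
# transaction size is an assumption on my part).
BLOCK_SIZE_BYTES = 10 * 10**12   # 10 terabytes per block
BLOCKS_PER_DAY = 24 * 6          # one block every ~10 minutes
AVG_TX_SIZE_BYTES = 250          # assumed average transaction size
POPULATION = 10 * 10**9          # 10 billion people

tx_per_day = BLOCK_SIZE_BYTES * BLOCKS_PER_DAY / AVG_TX_SIZE_BYTES
tx_per_person_per_day = tx_per_day / POPULATION
print(round(tx_per_person_per_day))  # → 576
```

With those assumptions the network would carry roughly 576 transactions per person per day, which is in line with the "500 transactions per day" figure.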

But today's hardware, software and network connections are far from being able to process this amount of data. The next Bitcoin Unlimited client has been tested to handle 1746 transactions per second on mid-range hardware. This is the same level as the VISA network processes on average.





Why 10 terabyte?

10 terabytes is just a big number. It's 10 times Joannes Vermorel's vision, and it represents a removal of the blocksize limit while still being a tangible number. Is the number too big?

I believe the years of the blocksize war have affected all of us. I and other investors, developers and miners all fought this weird war. There was a lot of debate with Core people, but their arguments never made any sense. At some point, the bargaining began. XT failed, and Bitcoin Classic offered a pathetic 2 megabytes. I fought hard for this solution. I was trapped in a bubble where I thought 2 megabyte blocks were enough, because the other side only offered half.

The whole conflict became really absurd when the miners asked Core devs for 2 megabyte blocks in the Hong Kong agreement in 2016. Why did they need to ask? They could have changed a "1" to a "2" in the source code and compiled it themselves, right?

The Bitcoin Cash community still carries the mental wounds from this war. The fear of too-big blocks is still in our hearts. But it's totally irrational, and I'll explain why.





Three scenarios of too-big blocks

Long story short, there's nothing to fear but fear itself. Here are the three scenarios where the lack of a blocksize limit can cause trouble for some people:

1) The Powers That Be try to destroy bitcoin (BCH) by mining a block so large that a node accepting it will run out of memory. This is very cheap for those interests. In fact, they would probably just buy a well-connected pool for this attack. By doing this, they would be able to send the monsterblock to all the pools and take them all down at the same time...

...for a few minutes, two hours at most. But mining is done by people, not computers. Computers don't care, but miners certainly do. They will disconnect from a rogue pool that feeds them poison blocks and prevents them from making money. The monsterblock doesn't propagate over the network at all, so you have to be well connected for this one-off attack to work.

Since this attack is unsustainable and will not cause much harm to the honeybadger, it will probably never even happen.

2) A mining pool with the best hardware, software and network connection makes a very big block to collect all the fees from the transactions.

But over 50% of the network can't handle this load. The pool's block is orphaned, and the pool operator learns a lesson: Don't create bigger blocks than the majority of the hashpower can handle.

3) A mining pool with the best hardware, software and network connection makes a very big block to collect all the fees from the transactions.

But less than 50% of the network can't handle this load, so the big block stays in the chain, and it's the weak pools that end up orphaned. They learn a lesson: Keep up with the current demands on hardware, software and network connection if you want to run a pool as a business.

I find this quote inspiring:

"Yes, let’s eliminate the limit. Nothing bad will happen if we do. And if I’m wrong, the bad things would be mild annoyances, not existential risks, much less risky than operating a network near 100% capacity."

(Gavin Andresen)

So now we have established that the max blocksize limit is not very important for protecting the system as a whole or for protecting miners from orphans. In fact, over the years miners have done a very good job of adjusting the "soft limit" (the limit on how large the blocks they produce are) upward to match the demand for blockspace.





Is BUIP 101 removing the max blocksize limit?

No. Not at all. Bitcoin Unlimited was founded on a principle of letting the miners and non-mining node operators run their own businesses and communicate to other people what they want. The parameters for how big the blocks you produce are (MG), how big the blocks you accept are (EB), and how many blocks you will stick to your EB policy before you cave in to the longest chain of proof of work (AD) are easy for individual mining pools to configure, and they are communicated to the rest of the network. BUIP 101 only changes the default EB value the software ships with out of the box. A node operator can increase or decrease this value and broadcast his preferences to the world. That's all.
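As a sketch, these three parameters might look roughly like this in a node's configuration file. The option names below are my best understanding of BU's settings; verify the exact names and units against your client version's documentation before relying on them:

```ini
# Hypothetical bitcoin.conf fragment for a BU node (names/units
# are assumptions to verify against the client's own docs)
excessiveblocksize=10000000000000   # EB: accept blocks up to 10 TB
excessiveacceptdepth=6              # AD: follow the longest chain after 6 blocks
blockmaxsize=32000000               # MG: produce blocks up to 32 MB
```

The point is that each value is a local policy choice, not a network rule: BUIP 101 only changes what the first line defaults to when the operator sets nothing.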





If it's just a configurable default value, why is it important?

The default value is important because it's a message from the developers to the mining pools: This is the setting we recommend for the software we made.

Not many people are aware of this in the heated debate about the November fork, Bitcoin Unlimited, Bitcoin ABC and Satoshi's Vision. But all of these clients have configurable settings for the max blocksize they accept.

The 32 MB from ABC and the 128 MB from SV are just default settings for the clients.

Given the history of how the miners handled Core, it's clear to me that the miners have listened too much to the developers and hurt their own business when they didn't have to. I think the reason was the decentralized nature of mining and the central role Core had. It was a natural choice to just listen to Core instead of to your competitors.

But this is a technocracy the miners should break free from. Miners compete like crazy when it comes to SHA256. The developments in mining over the last years have been insane. They should also compete on nodes and transaction verification. The absolute worst situation we can create is one where developers babysit the weakest mining pools and tell everybody to slow down the transaction capacity so that the laggards can participate without any effort. That attitude will just drag everything down.





The Fidelity Problem

Jeff Garzik gave this talk in 2015 where he spoke about how big companies stay out of bitcoin because they can't really know what the future transaction capacity of the network will be. The general opinion today is that "blockchains don't scale", and this is a very big problem we have to fight, along with the problem of a myriad of altcoins with noticeable value.

Our approach cannot be "If they come, we will build it". It has to be the other way around.





What's the alternative to BUIP 101?

I don't know. If the proposal doesn't get a majority vote, I believe it's in the hands of the Bitcoin Unlimited developers. I asked the president of BU, Andrew Clifford AKA Solex, what he thought the default value of the max blocksize limit should be, and he gave me this answer:

I think the optimum default setting for BU block acceptance (EB) is the minimum of other full node implementations currently or imminently being used for significant mining hashrate. This means min (ABC, SV) hard-limits, i.e. 32MB today.

Bitcoin Unlimited is the most awesome developer group and organization in this space. We have kept the intellectual and critical debate alive on our own forum while social media manipulation has torn reason and logic to pieces on almost all other platforms. BU members have organized conferences all over the globe with the most relevant content, in a world full of cheap ICOs and private blockchains.

The Gigablock Testnet Initiative demanded a lot of time from many people to get started and running. But it delivered in a spectacular way. The first bottlenecks for scaling were identified, and Andrew Stone fixed them. AFAIK, the BU nodes were the ones that worked flawlessly during the stress test a couple of months ago. And now the fruits of the Gigablock Testnet Initiative are merged into the code, with a tested performance on mid-range hardware at VISA level: 1746 transactions per second! 32 MB blocks can only handle about 100 transactions per second.
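Those throughput figures are easy to sanity-check. Assuming an average transaction of roughly 500 bytes and one block every 600 seconds (both assumptions of mine, not figures from the Gigablock results):

```python
# Rough throughput arithmetic (500-byte average transaction and
# 600-second block interval are assumptions on my part).
AVG_TX_SIZE = 500        # bytes, assumed
BLOCK_INTERVAL = 600     # seconds per block

def tx_per_second(block_size_bytes):
    """Transactions per second sustainable at a given block size."""
    return block_size_bytes / AVG_TX_SIZE / BLOCK_INTERVAL

print(round(tx_per_second(32 * 10**6)))           # 32 MB → ~107 tx/s
print(1746 * AVG_TX_SIZE * BLOCK_INTERVAL / 10**6)  # ~524 MB blocks
```

So 32 MB blocks top out around 100 transactions per second, while sustaining 1746 transactions per second would require blocks on the order of 500 MB, far beyond the current defaults.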

So no, Solex. On this specific topic I disagree with you. We should not subsidize the weaker software of ABC. We should not drag ourselves down to their amateur level.