The blocksize limit has been an Achilles' heel for Bitcoin. It was the very thing that led to the split between BTC and BCH. Currently, we have 32MB and plenty of headroom, but some people think we should remove the blocksize limit altogether and simply let miners decide how big blocks should be.

Philosophically, this makes some sense. I have thought this for a long time and made posts about this same idea several years ago on the Bitcointalk forum. But the devil is in the details.

Rewinding to the blocksize debate: in those days, changing the consensus rules was (and still is) very difficult on BTC. By contrast, on BCH we have hard fork upgrades every 6 months.

So at the time, given that the overall culture in Bitcoin was not to make any changes without overwhelming consensus, it did not make sense for the blocksize limit (which obviously needed raising) to be part of the consensus layer at all.

BUIP001

As the block size debate reached a climax, Emergent Consensus was formalized as an official proposal (BUIP001) in the Bitcoin Unlimited software.

This attempts to codify into software a mechanism for letting miners choose their own blocksize. BUIP001 lets miners set an "EB" (Excessive Block) size, above which they will reject blocks as too big, and an "AD" (Acceptance Depth), the number of confirmations after which they will capitulate and join the longest chain even if their EB is exceeded.
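To make the mechanics concrete, here is a toy sketch of the EB/AD decision rule. This is my own illustration, not Bitcoin Unlimited's actual code, and the function and parameter names are hypothetical:

```python
def should_follow(block_size, depth_on_top, eb, ad):
    """Toy EB/AD rule (illustrative only, not Bitcoin Unlimited's code).

    A block larger than EB is initially rejected as excessive, but once
    AD confirmations have been mined on top of it, the node capitulates
    and follows that chain anyway.
    """
    if block_size <= eb:
        return True                  # block is within our excessive-block limit
    return depth_on_top >= ad        # capitulate once the chain is deep enough

# A miner with EB = 32 MB and AD = 4 rejects a 100 MB block at first...
print(should_follow(100_000_000, 0, eb=32_000_000, ad=4))   # False
# ...but follows the chain once 4 blocks have been mined on top of it.
print(should_follow(100_000_000, 4, eb=32_000_000, ad=4))   # True
```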

The problem is that this scheme doesn't always result in consensus. It can split the chain if one group of miners has too long an AD... and it can also result in truly excessive blocks if most miners choose a short AD and some mining group forces huge blocks through.

Blindly setting BUIP001 preferences and letting the software do its thing is not sufficient. Emergent Consensus as an IDEA instead seems to rely on communication and coordination.

Perhaps Emergent Consensus was always intended only as an emergency measure to get us through the impasse. I am not sure if it makes sense as an ongoing practice, at least without adding some manual coordination.

Let's take a look at a few other well-known proposals and the issues with each of those as well.

BIP100

BIP100 was proposed by Jeff Garzik and creates a dynamic blocksize limit based on miner voting. Miners signal a preferred size in each block, and the votes are tallied over roughly three months of blocks. One problem with this proposal is that a large contingent of miners could vote for extremely large or extremely small blocks. The proposal attempts to blunt this by discarding the top 20% and bottom 20% of votes.

This sounds like a decent approach, at least in spirit, although it would probably need to be more sophisticated than a simple trimmed tally, because a coalition of just 21% of the hashpower can set extreme values that survive the trim and still sway the result.

In other words, if 21% of the miners voted for a block size of zero and the bottom 20% were discarded, 1% of the surviving votes would still be zero, dragging the outcome down toward zero. The proposal caps each adjustment at a 2x increase or decrease, but the underlying problem remains: a 21% coalition can force the blocks to shrink period after period... or, on the flip side, push them to become exponentially large.
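Here is a toy illustration of why the trim doesn't save the vote. This is my own sketch, not BIP100's actual code, and the aggregation rule (taking the lowest surviving vote) is a simplification:

```python
def new_limit(votes, current_limit):
    """Toy trimmed vote tally (a simplification, not BIP100's exact rule).

    Sorts the votes, discards the bottom 20% and top 20%, then takes the
    lowest surviving vote, clamped to at most a 2x raise or lowering.
    """
    votes = sorted(votes)
    trim = len(votes) // 5                       # 20% off each end
    surviving = votes[trim:len(votes) - trim]
    target = surviving[0]                        # low end of the trimmed set
    return max(current_limit // 2, min(current_limit * 2, target))

MB = 1_000_000
# 79 honest miners vote 32 MB; a 21-miner coalition votes 0.
votes = [32 * MB] * 79 + [0] * 21
print(new_limit(votes, 32 * MB))  # 16 MB: one zero vote survives the trim
```

Repeated every voting period, the 2x cap only slows the squeeze: a persistent coalition can keep halving the limit every three months.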

BIP101

BIP101 was proposed by Gavin Andresen and increases the blocksize limit continuously based on the timestamps of the blocks. The rate of growth starts the limit at 8MB and allows it to double every two years.
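As a sketch of that schedule, here is my own code using the parameters from the BIP101 draft as I understand them (8MB at activation in January 2016, doubling every two years for twenty years, with linear interpolation between doubling points); treat the exact constants as approximations:

```python
START_TIME = 1_452_470_400       # 2016-01-11 00:00 UTC (assumed activation date)
START_SIZE = 8_000_000           # 8 MB initial limit
PERIOD = 63_072_000              # two years, in seconds

def max_block_size(timestamp):
    """Blocksize limit under BIP101-style growth (illustrative sketch)."""
    elapsed = max(timestamp - START_TIME, 0)
    periods = min(elapsed / PERIOD, 10)      # growth stops after ~20 years
    whole = int(periods)
    base = START_SIZE * 2 ** whole
    # Linear interpolation between doubling points.
    return int(base * (1 + (periods - whole)))

print(max_block_size(START_TIME))            # 8000000   (8 MB at activation)
print(max_block_size(START_TIME + PERIOD))   # 16000000  (16 MB two years later)
```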

The problem with this one is that while it provides a predictable rate of growth, we really do not know ahead of time if that rate will be appropriate. It could be insufficient or too rapid. The same would be true for variations of the plan with different rates, since we don't know how fast we really should increase it.

The appropriate rate of blocksize growth should be at least partly based on what the network can handle, which is determined by future software and hardware advances, some of which are unpredictable.

Dynamic Block Size

Another idea that some have talked about (although I'm unsure whether there has ever been a formal proposal) is an algorithmic block size limit based on actual usage. For example, the blocksize limit could be the greater of two values: 32MB, or 10 times the average size of the last 1000 blocks.
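A minimal sketch of that rule (my own illustration; the 32MB floor, 1000-block window, and 10x multiplier are just the example numbers above):

```python
def block_size_limit(recent_sizes, floor=32_000_000, window=1000, multiplier=10):
    """Usage-based limit: the greater of a fixed floor or a multiple of
    the average size of the most recent blocks (illustrative sketch)."""
    window_sizes = recent_sizes[-window:]
    average = sum(window_sizes) / len(window_sizes)
    return max(floor, int(multiplier * average))

# If the last 1000 blocks averaged 5 MB, the limit would be 50 MB.
print(block_size_limit([5_000_000] * 1000))   # 50000000
```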

This is just off the top of my head; these kinds of schemes need peer review and community feedback. There may still be some excessive-block-style attacks possible.

A fundamental question in all this is: what should really determine the maximum blocksize? Is it the technical limitations? The usage? The desires of miners? All of the above? And if so, in what combination? Or are there other factors not yet considered?