A big part of the debate going on in the BCH community these days surrounds the two different philosophies behind the proposed changes slated for the November fork upgrade. Of course Nakamoto consensus will decide in the end, which means the chain with the most hashpower will be considered the true chain, and any minority chain will eventually be abandoned. But leading up to that event, node operators and miners will need to choose which of the clients to support, in order to determine which chain will be the majority chain and which will be the minority chain.

Around this there are many discussions and debates. Most of them can be read in detail in the summary post here.

I won't add to the debate on OP_DSV, or CTOR, or the old disabled opcodes; I think there has been enough discussion about those by much smarter people that everyone can make up their own minds, and I have touched on some of these topics in the past myself. This time, however, I would like to talk more about the sticking point for many, which really shouldn't be an issue: the increase of the block cap to 128MB.

The argument for why we shouldn't increase the block cap hard limit to 128MB

It basically revolves around the fact that the limit isn't technically reachable yet. The recent stress tests showed that the maximum block size the network was able to produce was about 21MB. (This, incidentally, coincides almost exactly with what Gavin Andresen predicted back in 2015 when the whole big-block movement started and he was discounted by the then Core developers of Bitcoin legacy -- so the Core devs of BTC have been objectively and empirically proven wrong.) So, the proponents of Bitcoin ABC say that raising the limit from 32MB to 128MB will have no practical effect whatsoever.

...and they may be right. IF and only IF they are considering only the technical aspects. But as readers of the WST know, developers and technical people in general tend to think in absolutes, consider things only along a certain dimension, and almost always discount the 'soft' realities of real-world economies and human interactions. This myopia is through no fault of their own. Folks who come from a purely technical discipline often discount factors that they cannot quantify or model, and human psychology is one of the hardest things to rationalize. Unfortunately, it is these subtle factors that often drive the ultimate direction of the market, much like the original flutter of a butterfly's wings determining whether a river ends up running inland or out to sea (chaos theory).

The argument for why we should increase the block cap hard limit to 128MB

The aspect that the anti-128MB folks miss is the fact that the block size limit is purely psychological. As ABC developer Amaury Sechet himself pointed out at the Bangkok meetings, the real block limit is the point at which your node will crash due to running out of memory. The limit is actually settable by miners directly (albeit hard to find under --debug settings on the command line of the node). But why don't they all just do it then? Wouldn't that be the same as setting the hard limit to 128MB by default by running the client? Actually yes, yes it would be equivalent.

So what does that tell you? That the limit is just a psychological limit, a decree, passed down from developers from a position of knowledge to those who don't know better. It creates and fosters a 'guru' aura and mentality, one in which the miners surrender their power and self-determination to developers who 'know better'. This, of course, is a form of psychological control. You tell people that they don't know anything and that they had better trust you to take care of them for their own good. This should sound very familiar to many of you who are fans of small government, as this is the classic oligarchy/technocracy. This does not mean that we should not listen to subject matter experts and instead throw them all into a pit and bury them alive (as Emperor Chin did in ancient China), but it does mean that we should remain aware that the miners are the ones with the power, and that developers and gurus are there to advise, not to control. If one does not do one's own homework, then it is the gurus and advisers who are really running the network.
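To make this concrete, here is a minimal sketch of my own (in Python, not code from any actual client) of what that 'hard limit' really is inside a node: a locally configured number that the operator can change. The names and values below are made up for illustration; real clients in the BCH ecosystem expose this kind of setting as a startup option (Bitcoin Unlimited, for instance, calls its accept ceiling the "excessive block size").

```python
# Illustrative sketch only -- not code from any real node implementation.
# The "hard cap" is nothing more than a number the operator can change.

DEFAULT_ACCEPT_LIMIT = 32_000_000        # bytes; the shipped default (32MB)

class NodeConfig:
    def __init__(self, accept_limit: int = DEFAULT_ACCEPT_LIMIT):
        # An operator who changes this value has, in effect, "raised the cap"
        # for their own node, whatever default the binary ships with.
        self.accept_limit = accept_limit

def should_accept(block_size: int, config: NodeConfig) -> bool:
    """Size check a node applies before it will consider a block valid."""
    return block_size <= config.accept_limit

print(should_accept(33_000_000, NodeConfig()))              # False: node left at the 32MB default
print(should_accept(33_000_000, NodeConfig(128_000_000)))   # True: same software, one setting changed
```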

So, if the limit doesn't matter, why should we all have the default set higher? Does it even make a difference?

Here is where we separate those who are technical in nature from those who are more holistic in knowledge. Contrary to what the techies will say, it DOES matter. Why? Because of psychology and its effects on the direction of the free market. Much like how the butterfly's wings, insignificant as they may seem, can push the leaf ever so slightly, so that the droplet of dew drips and trickles down the leeward side of the mountain instead of the windward side, eventually producing a river that runs inland instead of out to the sea, the simple removal of the limit will result in a psychological change.

This change removes the mental impediment that holds back more active funding and development to push up the block processing limit. How? Well, to understand this we must first recognize that the block PRODUCTION limit is different from (and much smaller than) the block VERIFICATION limit. They are affected by different scaling challenges. The increase to 128MB being proposed is for the block VERIFICATION limit, which defines the ACCEPTANCE limit on a block produced. It says nothing about the limit on the actual block size that any miner will create. That is indeed set by miners themselves, and it has been shown to have a physical limit of about 21MB right now.
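Here is a rough sketch of that distinction, again purely illustrative with made-up names and numbers: the production limit caps the block template a miner assembles for itself, while the acceptance limit only caps what it will validate from everyone else. Raising the second knob to 128MB does nothing at all to the first.

```python
# Illustrative only: two independent knobs, with hypothetical names and values.

ACCEPT_LIMIT  = 128_000_000   # bytes I will VALIDATE from others (what the 128MB proposal raises)
PRODUCE_LIMIT = 8_000_000     # bytes I choose to BUILD myself (each miner sets its own)

def build_block_template(mempool):
    """Assemble my own block, capped by MY production limit only."""
    block, size = [], 0
    for tx in mempool:                          # in reality, ordered by fee rate
        if size + tx["size"] <= PRODUCE_LIMIT:
            block.append(tx)
            size += tx["size"]
    return block, size

def validate_peer_block(block_size):
    """Check a block arriving from the network against MY acceptance limit."""
    return block_size <= ACCEPT_LIMIT

# Raising ACCEPT_LIMIT from 32MB to 128MB changes validate_peer_block() only;
# build_block_template() -- and thus the blocks miners actually make -- is untouched.
```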

The key is that the network can likely accept much bigger than 32MB blocks. But even if it couldn't, the failure would not be binary. It wouldn't be every node failing the moment the first 33MB block is produced, or even when the first 129MB block is produced. That is because the actual point at which any given node fails is INDEPENDENT and unknowable. It really depends on that particular node. How much memory does it have? What hardware is it running on? All these things factor into the equation. Thus, if a block larger than what the average node on the network can handle is produced, the likelihood of that block being rejected and orphaned increases dramatically. So any block producer that starts getting close to that limit will start losing money in the form of blocks getting orphaned and the chain reorganized. The network is built to handle these kinds of failures gracefully. It won't be a catastrophic, acute failure, which is what some gurus would like you to believe, because it keeps them in charge and in the position of telling you what you should and should not do.
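One way to see why the failure mode is gradual rather than a cliff is a toy simulation: give every node a different, unknown breaking point and watch what share of the network keeps up as blocks grow. The distribution and all the numbers below are invented purely for illustration.

```python
# Toy model: every node has its own unknown capacity, so failure is statistical,
# not a switch that flips at 33MB. All numbers are invented for illustration.
import random

random.seed(1)
NUM_NODES = 1000
# Pretend each node falls over somewhere between roughly 30MB and 300MB,
# depending on its memory, disk and CPU.
node_capacity_mb = [random.lognormvariate(4.5, 0.5) for _ in range(NUM_NODES)]

def share_keeping_up(block_mb):
    return sum(cap >= block_mb for cap in node_capacity_mb) / NUM_NODES

for size in (21, 33, 64, 129, 256):
    print(f"{size:>3}MB block: ~{share_keeping_up(size):.0%} of nodes keep up")
# The share falls off gradually; orphan risk rises as it dips below a hashpower
# majority, not at the moment some hard-coded number is crossed.
```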

So if the maximum accept limit is different for every miner and node, and the average limit is something that can be discovered by the free market of miners producing ever larger blocks, why do we need a hard-capped block acceptance limit anyway?

Exactly.

Further, consider that if the limit is set, then businesses will not be incentivised to invest time, resources and money into paying developers to help overcome some of the bottleneck issues which prevent some of the nodes from accepting bigger blocks. Why? It's simply a matter of ROI and risk. If I were a company and the default SET limit on all the miners was 32MB, then if I paid a team to develop the changes to be able to produce 40MB blocks, after they were done I would still need to entreat all the other miners to upgrade to my software so that they would be able to accept my bigger blocks (because I don't want the blocks to be orphaned). What if they, for whatever political reasons, refuse? Perhaps they were developing their OWN big-block processing nodes in secret? If I cannot convince 51% or more of the network to upgrade to my software, then I would have wasted my time and money. This is unacceptable business risk. So the answer is that I as a company won't commit any money to help scale the network up, even though I may have a use case that could use 40MB blocks. I will just sit back and let the open source developers wear that risk and do it themselves. I will be a leech on the system, because there is just too much risk to be an investor in it.
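That 'unacceptable business risk' is really just a one-line expected-value calculation. The figures below are invented purely to show the shape of the decision under a hard-coded, network-wide cap.

```python
# Back-of-envelope ROI under a hard-coded 32MB default. All figures invented.
dev_cost       = 500_000     # paying a team to make 40MB block handling work
annual_benefit = 2_000_000   # value of my 40MB use case, IF the network will accept such blocks

# Under a coordinated hard cap, that benefit only materialises if >51% of hashpower
# agrees to change its default -- something a rival cartel can veto simply by doing nothing.
p_majority_upgrades = 0.10

expected_value = p_majority_upgrades * annual_benefit - dev_cost
print(expected_value)        # -300000.0: negative, so I sit back and let others wear the risk
```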

Now assume that there was no limit to the size of any block that a node would accept, or at the very least, that every miner and node were to set their limits individually and announce to the world what they were (this is exactly the premise of the Bitcoin Unlimited software). Then I as a company can invest in hiring developers to improve the block production and acceptance size, feeling safe that once I am successful, I can go to the miners and announce that I will be making my software available to them for free to use (or charge them for it, for profit), and if they use it they will be guaranteed to be able to process my 40MB blocks. If they don't, well, they may fall out of sync with the network, or they may not, but it will be a true test to see if 51% of the network can accept the bigger blocks (in which case my block is accepted); if it is less than that, then my blocks will be orphaned. But since there isn't any hard-coded, coordinated block limit, a cartel of miners trying to coordinate a veto of my big-block use case cannot easily form consensus. Compare this with the case where everyone's limit is hard-coded to 32MB, where all they would have to do is agree to do nothing and my efforts would have been wasted. In the case with no limits, as everyone's hardware and nodes are different, nobody can be sure that they won't be left behind when the majority of the network starts accepting bigger blocks, and that provides a very strong INCENTIVE for the weaker nodes to upgrade.
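Here is a sketch of that Bitcoin Unlimited-style premise, with invented hashpower shares and announced limits: if every miner publishes the largest block it will accept, a would-be big-block producer can check, before spending a cent, whether a hashpower majority would build on its block. No single holdout gets a veto, and anyone announcing a small limit risks orphaning themselves off the majority chain.

```python
# Illustrative emergent-consensus check; hashpower shares and announced limits are invented.

# (hashpower share, announced accept limit in MB) for each mining operation
announced = [
    (0.25, 128), (0.20, 64), (0.15, 256), (0.15, 32),
    (0.10, 128), (0.10, 48), (0.05, 32),
]

def hashpower_accepting(block_mb):
    """Share of hashpower whose announced limit admits a block of this size."""
    return sum(share for share, limit in announced if limit >= block_mb)

my_block_mb = 40
if hashpower_accepting(my_block_mb) > 0.51:
    print("A 40MB block would be built on by a hashpower majority -- worth funding the work.")
else:
    print("A 40MB block would likely be orphaned -- hold off.")
```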

And as all bitcoiners know, Bitcoin is all about economic INCENTIVES.