Possible method of compromise in the max block size issue:

First, do some sort of normal max block size increase proposal (BIP 100, flex cap, etc.) with fairly aggressive (not especially conservative) constants that a lot of people would accept, but still reasonable enough that it should hopefully always work.

Second, make it so that each full node automatically sets an individual hard max block size (overriding the global one) according to what it can support for the foreseeable future. Like:

---

def get_local_hard_maximums(this_computer, UTXO_SIZE):
    # we don't want to spend too much time receiving a block
    TARGET_RECV_TIME = 5 seconds

    # we don't want to spend too much time uploading a block or the
    # network will stop working properly
    TARGET_PROPAGATION_TIME = 30 seconds

    # this hardware will probably be upgraded eventually
    HARDWARE_LIFETIME = 4 years

    BLOCK_INTERVAL = 10 minutes

    # run benchmarks to see what this user's hardware can support
    cpu_sigops_per_s = this_computer.benchmark.get_max_sigops_per_s()
    free_space_GB = this_computer.get_free_disk_space()
    upload_speed = this_computer.benchmark.get_upload_speed()

    # ask the user what they will accept
    show_user_gui("
        How much CPU can Bitcoin use (burst)?
        |1% -------------------------100%|
        How much disk space can Bitcoin use?
        |5GB ------------------------free_space_GB|
        How much upload can Bitcoin use (burst)?
        |1 Mbit/s ------------------------upload_speed|
    ")
    cpu_sigops_per_s *= user_cpu_percentage
    free_space_GB = user_free_space_GB
    upload_speed = user_upload_speed

    # calculate what this computer can support
    max_sigops_per_block = cpu_sigops_per_s * TARGET_RECV_TIME
    # spread the free disk space across every block expected during the
    # hardware's lifetime
    max_net_utxos_per_block = (free_space_GB / UTXO_SIZE)
                              / ((HARDWARE_LIFETIME in minutes) / BLOCK_INTERVAL)
    max_block_size = upload_speed * TARGET_PROPAGATION_TIME

    # round down to specific values so that groups of nodes act together
    # and don't get picked off one by one (maybe this should be fancier)
    max_sigops_per_block = round down to the nearest multiple of 1000
    max_net_utxos_per_block = round down to the nearest multiple of 500
    max_block_size = round down to the nearest multiple of 0.5 MB

    return max_sigops_per_block, max_net_utxos_per_block, max_block_size

---
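To make the sketch above concrete, here's a runnable version of the same calculation with the units pinned down (bytes and seconds). The 50-byte average UTXO size and the example benchmark numbers are illustrative assumptions, not measurements, and the GUI/benchmark steps are replaced by plain parameters:

```python
def round_down(value, multiple):
    # herd rounding: snap to shared values so groups of nodes act together
    return (value // multiple) * multiple

def local_hard_maximums(cpu_sigops_per_s, free_space_bytes, upload_bytes_per_s,
                        avg_utxo_size=50,            # assumed average UTXO size, bytes
                        target_recv_time=5,          # seconds
                        target_propagation_time=30,  # seconds
                        hardware_lifetime_years=4,
                        block_interval_min=10):
    # CPU: sigops that can be verified within the target receive time
    max_sigops = round_down(int(cpu_sigops_per_s * target_recv_time), 1000)

    # Disk: spread free space over every block expected during the
    # hardware's lifetime (4 years of 10-minute blocks)
    blocks_per_lifetime = hardware_lifetime_years * 365 * 24 * 60 // block_interval_min
    max_utxos = free_space_bytes // avg_utxo_size // blocks_per_lifetime
    max_net_utxos = round_down(max_utxos, 500)

    # Upload: bytes that can be pushed within the propagation target
    max_block_size = round_down(int(upload_bytes_per_s * target_propagation_time),
                                500_000)  # 0.5 MB steps

    return max_sigops, max_net_utxos, max_block_size
```

With, say, 20,000 sigops/s, 100 GB free, and 1 MB/s of allowed upload, this yields limits of 100,000 sigops, 9,500 net new UTXOs, and 30 MB per block.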

If the software detects that it's rejecting a very long chain due to local hard maximums:

- If the user chose to have Bitcoin use less than 25% of any resource, Bitcoin should say "Increasing this percentage is required in order to be a full node. (Buttons:) [Do it] [Switch to lightweight mode]"

- If Bitcoin is already using more than 25% of all resources, Bitcoin should say *something* like this (hard to figure out good wording), "If you feel that this is a fairly new and well-equipped computer which should be able to connect to Bitcoin as a first-class citizen, then you should continue on in protest of the miners who are creating too-large blocks and the other participants in the Bitcoin economy who are accepting these blocks. You will probably be unable to transact with many Bitcoin businesses -- you should complain to these businesses. If you have a low-end computer, you should use lightweight mode. [Continue] [Switch to lightweight mode] [Adjust resource usage]"
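The branch between those two dialogs could be sketched like this (the 25% threshold is from the text; the function and return-value names are hypothetical):

```python
def local_limit_dialog(resource_fractions):
    """Pick the dialog to show when a long chain is rejected due to
    local hard maximums.

    resource_fractions: fraction (0.0-1.0) of each resource the user
    allowed Bitcoin to use, e.g. {"cpu": 0.5, "disk": 0.3, "upload": 0.6}.
    """
    if any(f < 0.25 for f in resource_fractions.values()):
        # user throttled at least one resource below 25%: ask them to raise it
        return "increase_required"  # [Do it] [Switch to lightweight mode]
    # already using >= 25% of everything: offer to continue in protest
    return "protest"  # [Continue] [Switch to lightweight mode] [Adjust resource usage]
```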

Businesses will want to set their max block size to the minimum of what their customers can accept, instead of basing it only on their own capabilities. Maybe this info could be transmitted via the payment protocol, and Bitcoin Core could have some built-in method of processing this polling data reasonably.
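One guess (an assumption, nothing from the text specifies this) at what "processing this polling data reasonably" might mean: take the strict minimum by default, but optionally ignore the lowest few percent of customer reports so a single outlier can't drag the business's limit down:

```python
def business_max_block_size(own_limit, customer_limits, outlier_fraction=0.0):
    """Combine a business's own limit with customer-reported limits.

    outlier_fraction=0.0 gives the strict minimum described in the text;
    a small positive value (e.g. 0.05) discards the lowest few percent
    of reports as outliers before taking the minimum.
    """
    if not customer_limits:
        return own_limit
    ordered = sorted(customer_limits)
    idx = int(len(ordered) * outlier_fraction)
    return min(own_limit, ordered[idx])
```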

Then the ordinary Bitcoin *users* will by default automatically act together as a unified economic force, which will be influential. I think that this sort of method alone (probably plus a lot of tweaking) could completely replace any global max block size, though it'd probably always be at least somewhat messy. But I think it makes sense as a "backup" max block size method to sort-of-guarantee decentralization into the future.