caveden






Activity: 1106

Merit: 1002









Legendary | Re: How a floating blocksize limit inevitably leads towards centralization | February 18, 2013, 08:48:37 PM | #21



Great posts from Mike and Gavin in this thread. There's indeed no reason to panic over "too much centralization". Actually, setting an arbitrary limit (or an arbitrary formula to set the limit) is the very definition of "central planning", while letting it be set spontaneously is the very definition of "decentralized order".

Also, having fewer participants in a market because those participants are good enough to keep aspiring competitors at bay is not a bad thing. The problem arises when barriers to entry are artificial (legal, bureaucratic, etc.), not when they're part of the business itself. Barriers to entry that are part of the business mean the current market participants are so advanced that anyone else wanting to enter will, for a start, have to get at least as good as they are.

Quote from: Mike Hearn on February 18, 2013, 07:55:28 PM: Removing the block cap means a hard fork, and once we decided to do that we may as well throw in some "no brainer" upgrades as well, like supporting ed25519, which is orders of magnitude faster than ECDSA+secp256k1. Then a single strong machine can go up to hundreds of thousands of transactions per second.

That's cool. Please, core devs, consider studying what other hard-fork changes would be worth including, because we risk hitting the 1 MB limit quite soon.

Mike Hearn








Activity: 1526

Merit: 1008







Legendary | Re: How a floating blocksize limit inevitably leads towards centralization | February 18, 2013, 09:12:34 PM | #24

Quote from: retep on February 18, 2013, 08:35:12 PM: Network assurance contracts are far from a sure thing. It's basically an attempt to solve the tragedy of the commons, and the success rate society has had there is pitiful, even with strong central authorities. Assuming they will work is a big risk.



Sure, there is some risk, but Kickstarter is showing that the general concept can indeed fund public goods.



Quote from: retep on February 18, 2013, 08:35:12 PM I don't see any reason to think CPU power will be the issue. It's network capacity and disk space that is the problem.



1.2 megabytes a second is only ~10 megabits per second - pretty sure my parents' house has more bandwidth than that. Google is wiring Kansas City with gigabit fibre right now, and we're not running it as a charity. So network capacity doesn't worry me a whole lot. There are plenty of places in the world that can keep up with a poxy 10 megabits.



3 TB per month of transfer is, again, not a big deal. For a whopping $75 per month bitvps.com will rent you a machine with 5 TB of bandwidth quota per month and 100 Mbit connectivity.



Lots of people can afford this. But by the time Bitcoin gets to that level of traffic, if it ever does, it might cost more like $75 a year.
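Both figures above are easy to check; the arithmetic, in Python:

```python
# Sanity-check of the bandwidth figures quoted above.
bytes_per_sec = 1.2e6                              # 1.2 megabytes a second

mbit_per_sec = bytes_per_sec * 8 / 1e6             # line rate in megabits/s
tb_per_month = bytes_per_sec * 86400 * 30 / 1e12   # sustained monthly transfer in TB

print(mbit_per_sec)   # 9.6  -> "~10 megabits per second"
print(tb_per_month)   # ~3.1 -> "3 TB per month", well inside a 5 TB quota
```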



Quote from: retep on February 18, 2013, 08:35:12 PM You also have to ask the question, what % of that 3TiB/month results in unspent txouts? Ultimately it's the UTXO set that is the hard limit on the storage requirements for full validating nodes. Even at just 1% volume growth, you're looking at 3GiB/month growth in your requirement for fast-random-access memory.



How did you arrive at 3GB/month? The entire UTXO set currently fits in a few hundred megs of RAM.
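Mike's "few hundred megs" is plausible from first principles. A rough estimate - the UTXO count and per-output byte cost below are assumed, illustrative early-2013 orders of magnitude, not measured values:

```python
# Rough in-memory size of the UTXO set (illustrative assumptions).
n_utxos = 6_000_000      # assumed number of unspent outputs, order of magnitude
bytes_per_utxo = 60      # assumed: 36-byte outpoint + amount + small scriptPubKey

utxo_mb = n_utxos * bytes_per_utxo / 1e6
print(utxo_mb)           # 360.0 -> "a few hundred megs of RAM"
```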



Quote from: retep on February 18, 2013, 08:35:12 PM All the time you're spending waiting for transactions to be retrieved from memory is time you aren't hashing.



Why? Hashing happens in parallel to checking transactions and recalculating the merkle root.



Quote from: retep on February 18, 2013, 08:35:12 PM Your example has nothing to do with Bitcoin. Even in the early days it would be obvious to anyone who understood comp-sci that static websites are O(1) scaling per client, so there isn't any reason to think you couldn't create websites for as much load as you wanted.



Nobody in 1993 could build a website that the entire world used all the time (like Google or Wikipedia). The technology did not exist.



Following your line of thinking, there should have been some way to ensure only the elite got to use the web. Otherwise how would it work? As it got too popular all the best websites would get overloaded and fall over. Disaster.



Or what about the global routing table? Every backbone router needs a complete copy of the routing table. BGP is a broadcast network. How can the internet backbone scale? Perhaps we should only allow people to access the internet at universities to avoid uncontrollable growth of the routing table.



I just don't see scalability as ever being a problem, assuming effort is put into better software. Satoshi didn't think this would be a problem either; it was one of the first conversations we ever had. These conversations have been going around and around for years, and I'm unconvinced we're developing any better insight into it. Satoshi's vision was for the block limit to be removed. So let's do it.

Zeilap






Activity: 154

Merit: 100







Full Member | Re: How a floating blocksize limit inevitably leads towards centralization | February 18, 2013, 09:42:38 PM | #25

It seems like the requirements for full verification and those for mining are being conflated. Either way, I see the following solutions.



If you need to run a full verification node, then, as Mike points out, you can rent the hardware to do it along with enough bandwidth. If full verification becomes too much work for a general purpose computer, then we'll begin to see 'node-in-a-box' set-ups where the network stuff is managed by an embedded processor and the computation farmed out to an FPGA/ASIC. Alternatively, we'll see distributed nodes among groups of friends, where each verifies some predetermined subset. This way, you don't have to worry about random sampling. You can then even tell the upstream nodes from you to only send the transactions which are within your subset, so your individual bandwidth is reduced to 1/#friends.
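Zeilap's friend-sharding idea is easy to make deterministic, so everyone agrees on who verifies what and upstream nodes can forward only your slice. A sketch - the helper name is mine:

```python
import hashlib

def shard_for(txid_hex: str, n_friends: int) -> int:
    """Deterministically assign a transaction to one of n_friends verifiers."""
    digest = hashlib.sha256(bytes.fromhex(txid_hex)).digest()
    return int.from_bytes(digest[:4], "big") % n_friends

# Friend i verifies (and asks upstream peers to relay) only transactions
# with shard_for(txid, N) == i, cutting individual bandwidth to roughly 1/N.
```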



If you want to be a miner, you can run a modified node on rented hardware as above, which simply gives you a small number of transactions to mine on, rather than having to sort them out yourself. This way, you can reduce your bandwidth to practically nothing - you'd get a new list each time a block is mined.

Pieter Wuille








Activity: 1064

Merit: 1038







Legendary | Re: How a floating blocksize limit inevitably leads towards centralization | February 18, 2013, 10:02:52 PM | #27



First of all, my opinion: I'm in favor of increasing the block size limit in a hard fork, but very much against removing the limit entirely. Bitcoin is a consensus of its users, who all agreed (or will need to agree) to a very strict set of rules that would allow people to build a global decentralized payment system. I think very few people understand a forever-limited block size to be part of those rules.

However, with no limit on block size, it is effectively the miners who control _everyone_'s block size. As a non-miner, this is not something I want them to decide for me. Perhaps the tragedy of the commons can be avoided, long-term rational thinking will kick in, and miners can be trusted to choose an appropriate block size. But maybe not, and if just one miner starts creating gigabyte blocks while all the rest agree on 10 MiB blocks, ugly block-shunning rules will be necessary to keep such blocks from filling everyone's hard drive (yes, larger blocks' slower relay makes them unlikely to be accepted, but it only takes one lucky fool to succeed...).

I think retep raises very good points here: the block size (whether voluntary or enforced) needs to result in a system that remains verifiable for many. Who those many are will probably change gradually. Over time, more and more users will likely move to SPV nodes (or more centralized things like e-wallet sites), and that is fine. But if we give up the ability of non-megacorp entities to verify the chain, we might as well be using a central clearinghouse. There is of course a wide spectrum between "I can download the entire chain on my phone" and "only 5 banks in the world can run a fully verifying node", but I think it's important that we choose which point in between is acceptable.

My suggestion would be a one-time increase to perhaps 10 MiB or 100 MiB blocks (to be debated), and after that an at-most-slow exponential further growth. This would mean no for-eternity limited size, but also no way for miners to push block sizes up to the point where they are in sole control of the network. I realize that some people will consider this an arbitrary and unnecessary limit, while others will probably already consider it dangerous. In any case, it's a compromise, and I believe one will be necessary.

Quote from: caveden on February 18, 2013, 08:48:37 PM: Great posts from Mike and Gavin in this thread. There's indeed no reason to panic over "too much centralization". Actually, setting an arbitrary limit (or an arbitrary formula to set the limit) is the very definition of "central planning", while letting it get spontaneously set is the very definition of "decentralized order".

Then I think you misunderstand what a hard fork entails. The only way a hard fork can succeed is when _everyone_ agrees to it. Developers, miners, merchants, users, ... everyone. A hard fork that succeeds is the ultimate proof that Bitcoin as a whole is a consensus of its users (and not just a consensus of miners, who are only given authority to decide the order of otherwise-valid transactions).

Realize that Bitcoin's decentralization only comes from very strict - and sometimes arbitrary - rules (why this particular 50/25/12.5 payout scheme, why ECDSA, why only those opcodes in scripts, ...) that were set right from the start and agreed upon by everyone who ever used the system. Were those rules "central planning" too?
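Pieter's schedule is concrete enough to write down. The 10 MiB base and the 20%-per-year rate below are placeholders for the numbers he says are still to be debated:

```python
def max_block_size_mib(years_since_fork: float,
                       base_mib: float = 10.0,          # one-time bump, to be debated
                       growth_per_year: float = 1.2):   # "slow exponential", assumed 20%/yr
    """Consensus cap: a one-time increase, then bounded exponential growth.

    Miners may mine smaller blocks, but can never push the network past
    this schedule, however the fee market evolves.
    """
    return base_mib * growth_per_year ** years_since_fork

print(max_block_size_mib(0))    # 10.0 MiB at the fork
print(max_block_size_mib(10))   # ~61.9 MiB a decade later at 20%/yr
```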

cjp






Activity: 210

Merit: 100









Full Member | Re: How a floating blocksize limit inevitably leads towards centralization | February 18, 2013, 10:09:32 PM | #28

Quote from: OhShei8e on February 18, 2013, 08:45:38 PM: It is a technical decision, not political. The block size can not be determined on the basis of political beliefs. I'm pretty sure about this.



I disagree. Any decision that has political consequences is a political decision, whether you deny/ignore it or not. I even doubt whether technical, non-political decisions actually exist. You develop technology with a certain goal in mind, and the higher goal is usually of a political nature. So, when you propose a decision, please explicitly list the (political) goals you want to achieve, and all the expected (desired and undesired) (political) side-effects of your proposal. That way, the community might come to an informed consensus about your decision.



Regarding the transaction limit, I see the following effects (any of which can be chosen as goals / "anti-goals"):

- Increasing/removing the limit can lead to centralization of mining, as described by the OP (competition elimination by bandwidth).
- Increasing/removing the limit can lead to reduced security of the network (no transaction scarcity -> fees of almost zero -> difficulty collapse -> easy 51% attacks and other attacks). I think this was a mistake of Satoshi's, but it can be solved by keeping a reasonable transaction limit (or by continuing issuance beyond 21M coins, but that would be even less popular in the community).
- Centralization of mining can lead to control over mining (e.g. a 51% attack, but also refusal to include certain transactions based on arbitrary policies, possibly enforced by governments on the few remaining miners).
- Increasing/removing the limit allows transaction volume to increase.
- Increasing/removing the limit allows transaction fees to remain low.
- Increasing/removing the limit increases the hardware requirements of full nodes.

Also: +100 for Pieter Wuille's post. It's all about community consensus. My estimate is that 60 MiB/block should be sufficient for worldwide usage if my Ripple-like system becomes successful (otherwise it would have to be 1000 times more). I'd agree with a final limit of 100 MiB, but right now that seems way too much, considering current Internet speeds and storage capacity, so I think we'll need to increase the limit at least twice.



hazek






Activity: 1078

Merit: 1001







Legendary | Re: How a floating blocksize limit inevitably leads towards centralization | February 18, 2013, 11:14:46 PM | #29

Quote from: OhShei8e on February 18, 2013, 08:45:38 PM: Quote from: cjp on February 18, 2013, 07:49:24 PM: In the light of this, and because the need for bitcoins primarily comes from the need for a decentralized, no-point-of-control system, I think it's not sufficient to call worries about centralization "vague": you have to clearly defend why this particular form of centralization can not be dangerous. The default is "centralization is bad".



It is a technical decision, not political. The block size can not be determined on the basis of political beliefs. I'm pretty sure about this.



If we're talking about centralization we should focus on Mt. Gox, but that's a different story.


It is a technical decision, but a technical decision about how to provide both scalability and security. And like it or not, decentralization is part of the security equation and must be taken into account when changing anything that would diminish it.




hazek






Activity: 1078

Merit: 1001







Legendary | Re: How a floating blocksize limit inevitably leads towards centralization | February 18, 2013, 11:38:38 PM | #31

Quote from: cjp on February 18, 2013, 08:22:37 PM: Quote from: Mike Hearn on February 18, 2013, 07:55:28 PM: I feel these debates have been going on for years. We just have wildly different ideas of what is affordable or not.



I don't think the most fundamental debate is about how high the limit should be. I made some estimates about how high it would have to be for worldwide usage, which is quite a wild guess, and I suppose any estimation about what is achievable with either today's or tomorrow's technology is also a wild guess. We can only hope that what is needed and what is possible will somehow continue to match.



But the most fundamental debate is about whether it is dangerous to (effectively) disable the limit. These are some ways to effectively disable it:

- actually disabling it
- making it "auto-adjusting" (so it can increase indefinitely)
- making it so high that it won't ever be reached

I think the current limit will have to be increased at some point in time, requiring a "fork". I can imagine you don't want to set the new value too low, because that would make you have to do another fork in the future. Since it's hard to know what's the right value, I can imagine you want to develop an "auto-adjusting" system, similar to how the difficulty is "auto-adjusting". However, if you don't do this extremely carefully, you could end up effectively disabling the limit, with all the potential dangers discussed here.



You have to carefully choose the goal you want to achieve with the "auto-adjusting", and carefully choose how you measure your "goal variable", so that your system can steer it towards the desired value (similar to how the difficulty adjustment steers towards 10 minutes/block).



One "goal variable" would be the number of independent miners (a measure of decentralization). How to measure it? Maybe you can offer miners a reward for being "non-independent"? If they accept that reward, they prove non-independence of their different mining activities (e.g. different blocks mined by them); the reward should be larger than the profits they could get from further centralizing Bitcoin. This is just a vague idea; naturally it should be thought out extremely carefully before even thinking of implementing this.
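For reference, the feedback loop cjp appeals to in the quoted post - Bitcoin's actual difficulty retarget - is simple. A simplified float sketch; the real implementation works on 256-bit compact targets over a 2016-block window:

```python
def retarget(old_difficulty: float, actual_timespan_s: float) -> float:
    """Simplified Bitcoin difficulty adjustment, applied every 2016 blocks.

    Steers block production toward 600 s/block by scaling difficulty by
    expected/actual elapsed time, clamped to a 4x move either way.
    """
    expected = 2016 * 600  # two weeks of ten-minute blocks, in seconds
    clamped = min(max(actual_timespan_s, expected / 4), expected * 4)
    return old_difficulty * expected / clamped

print(retarget(100.0, 2016 * 300))  # blocks came in twice as fast -> 200.0
```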




Of all the posts, this one makes the most sense to me - a layman. I'm aware that means practically nothing, aside from my not being willing to download a new version of Bitcoin-Qt if I don't like the hard-fork rules. A carefully chosen auto-adjusting block size limit that keeps block space scarce and encourages fees, keeping mining reasonably open to competition while solving the scalability issue, seems like a good compromise.



But how many of all transactions should on average fit into a block? 90%? 80%? 50%? Can anyone come up with predictions and estimates of how various auto-adjusting rules could play out?




Zeilap






Activity: 154

Merit: 100







Full Member | Re: How a floating blocksize limit inevitably leads towards centralization | February 19, 2013, 12:31:36 AM | #32

Quote from: hazek on February 18, 2013, 11:38:38 PM: But how many of all transactions should on average fit into a block? 90%? 80%? 50%? Can anyone come up with some predictions and estimates how various auto-adjusting rules could potentially play out?

If you want the worst case then consider this:



Some set of miners decide, as Peter suggests, to increase the blocksize in order to reduce competition. Thinking longterm, they decide that a little money lost now is worth the rewards of controlling a large portion of the mining of the network.

1) The miners create thousands of addresses and send funds between them as spam (this is the initial cost)

a) optional - add enough transaction fee so that legitimate users get upset about tx fees increasing and call for blocksize increases

2) The number of transactions is now much higher than the blocksize allows, forcing the auto-adjust to increase blocksize

3) while competition still exists, goto step 1

4) Continue sending these spam transactions to maintain high blocksize. Added bonus, as the transaction fee you pay is to yourself - i.e. the transaction is free!

5) Profit!
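Against a naive auto-adjust rule the ratchet works exactly as described. The rule below ("next cap = twice recent average usage") is invented for illustration; the point is that spam whose fees the miner pays to himself costs nothing (step 4) and drags the cap upward while honest demand stays flat:

```python
def next_limit(recent_block_sizes_mb):
    # Hypothetical auto-adjust rule: cap = 2x average recent block size.
    return 2 * sum(recent_block_sizes_mb) / len(recent_block_sizes_mb)

limit = 1.0           # MB, starting cap
honest_demand = 0.5   # MB of legitimate transactions per block

for _ in range(10):
    spam_block = limit            # attacker fills his blocks to the cap (steps 1-3)
    honest_block = honest_demand  # everyone else's blocks stay small
    limit = next_limit([spam_block, honest_block])

print(limit)  # 6.0 -- the cap has grown 6x with no change in honest demand
```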

justusranvier






Activity: 1400

Merit: 1006









Legendary | Re: How a floating blocksize limit inevitably leads towards centralization | February 19, 2013, 01:11:25 AM | #36

Quote from: Pieter Wuille on February 18, 2013, 10:02:52 PM: However, with no limit on block size, it effectively becomes miners who are in control of _everyone_'s block size. As a non-miner, this is not something I want them to decide for me. Perhaps the tragedy of the commons can be avoided, and long-term rational thinking will kick in, and miners can be trusted with choosing an appropriate block size. But maybe not, and if just one miner starts creating gigabyte blocks, while all the rest agrees on 10 MiB blocks, ugly block-shunning rules will be necessary to avoid such blocks from filling everyone's hard drive (yes, larger block's slower relay will make them unlikely to be accepted, but it just requires one lucky fool to succeed...).

In a different thread Gavin proposed removing the hard limit on block size and adding code to the nodes that would reject any blocks that take too long to verify.



That would give control over the size of the blocks to the people who run full nodes.
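As relayed, Gavin's alternative replaces a fixed size cap with a local verification-time budget. A hypothetical sketch - the budget knob and function names are mine, not from his proposal:

```python
import time

def accept_block(block, verify, budget_s: float = 2.0) -> bool:
    """Accept a block only if it is valid AND verifies within the local budget.

    Each full node picks its own budget_s, so effective block size is
    bounded by what node operators collectively are willing to verify.
    """
    start = time.monotonic()
    ok = verify(block)
    return ok and (time.monotonic() - start) <= budget_s
```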

jojkaart






Activity: 97

Merit: 10







Member | Re: How a floating blocksize limit inevitably leads towards centralization | February 19, 2013, 01:22:55 AM | #37

How about tying the maximum block size to mining difficulty?



This way, if the fees start to drop, that is counteracted by a shrinking block size. The only time this counteraction won't be effective is when usage is actually dwindling at the same time.

If the fees start to increase, that is likewise counteracted by an increasing block size as more mining power comes online.

Difficulty also goes up with increasing hardware capability; I'd expect the difficulty increase from this factor to track the increase in the technical capabilities of computers in general.
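jojkaart's coupling might look like this. The constant k and the square root (to damp difficulty swings) are my assumptions, not part of his proposal:

```python
def max_block_size_mb(difficulty: float, k: float = 0.3) -> float:
    """Hypothetical rule tying the block size cap to mining difficulty.

    Fees fall -> hashpower leaves -> difficulty drops -> the cap shrinks,
    restoring transaction scarcity (and the reverse as power arrives).
    """
    return k * difficulty ** 0.5   # square root chosen to damp swings

print(max_block_size_mb(4.0, k=1.0))  # 2.0
```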

commonancestor






Activity: 58

Merit: 0







Newbie | Re: How a floating blocksize limit inevitably leads towards centralization | February 19, 2013, 02:42:45 AM | #38



Quote from: Pieter Wuille on February 18, 2013, 10:02:52 PM First of all, my opinion: I'm in favor of increasing the block size limit in a hard fork, but very much against removing the limit entirely. Bitcoin is a consensus of its users, who all agreed (or will need to agree) to a very strict set of rules that would allow people to build global decentralized payment system. I think very few people understand a forever-limited block size to be part of these rules.



...



My suggestion would be a one-time increase to perhaps 10 MiB or 100 MiB blocks (to be debated), and after that an at-most slow exponential further growth. This would mean no for-eternity limited size, but also no way for miners to push up block sizes to the point where they are in sole control of the network. I realize that some people will consider this an arbitrary and unnecessary limit, but others will probably consider it dangerous already. In any case, it's a compromise and I believe one will be necessary.



Realize that Bitcoin's decentralization only comes from very strict - and sometimes arbitrary - rules (why this particular 50/25/12.5 payout scheme, why ECDSA, why only those opcodes in scripts, ...) that were set right from the start and agreed upon by everyone who ever used the system. Were those rules "central planning" too?
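Pieter's compromise above could be sketched as a schedule. The 5%-per-year growth rate and the 2013 start year are assumed numbers purely for illustration; he deliberately left the exact figures to be debated.

```python
# Hypothetical schedule for Pieter's compromise: a one-time jump to 10 MiB,
# then at-most-slow exponential growth. Rate and start year are assumptions.
START_YEAR = 2013
START_LIMIT = 10 * 1024 * 1024   # 10 MiB after the hard fork
ANNUAL_GROWTH = 1.05             # at most 5% per year (illustrative)

def limit_for_year(year):
    """Block size limit in effect for a given year under the assumed schedule."""
    if year < START_YEAR:
        return 1_000_000          # the old 1 MB rule
    return int(START_LIMIT * ANNUAL_GROWTH ** (year - START_YEAR))
```

The point of the cap on the growth rate is that miners can never push block sizes past a schedule everyone agreed to in advance, however much hash power they control.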



Interesting debate. I tend to agree with Pieter.



First of all, the true nature of Bitcoin seems to be its rigid protocol, which helps its credibility among the masses. Otherwise, one day you remove the block size limit, the next day you remove ECDSA, then you change the block frequency to one per minute, then you print more coins. Such changes actually sound more appropriate under a different implementation.



Then I can't help asking: with such a floating block limit, isn't everyone afraid of chain splits? I can imagine a split occurring when a big block is accepted by 60% of the nodes and rejected by the rest.
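The chain-split worry can be illustrated with a toy partition. The node counts, limits, and the 60/40 split are invented for the example.

```python
# Toy illustration: if nodes disagree on the acceptance rule, one
# oversized block partitions the network into two groups that then
# extend different chains. All numbers here are made up.
def partition(nodes, block_size):
    """Return (accepting, rejecting) node groups for a block of given size."""
    accepting = [name for name, limit in nodes if block_size <= limit]
    rejecting = [name for name, limit in nodes if block_size > limit]
    return accepting, rejecting

# Six nodes tolerate 10 MB blocks, four insist on the 1 MB rule.
nodes = [("n%d" % i, 10_000_000) for i in range(6)] + \
        [("n%d" % i, 1_000_000) for i in range(6, 10)]
accepting, rejecting = partition(nodes, 5_000_000)
# 60% of nodes follow the chain containing the big block; 40% reject it.
```

With a single hard-coded limit this cannot happen, because every node applies the same rule to the same block.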



Quote from: jojkaart on February 19, 2013, 01:22:55 AM How about tying the maximum block size to mining difficulty?

...

The difficulty also goes up with increasing hardware capabilities, I'd expect that the difficulty increase due to this factor will track the increase of technical capabilities of computers in general.



This sounds interesting.


SimonL
Member
Offline
Activity: 113
Merit: 10

Re: How a floating blocksize limit inevitably leads towards centralization February 19, 2013, 03:03:44 AM #39



I actually posted the below in the max_block_size fork thread but got absolutely no feedback on it, so rather than create a new thread for exposure I am reposting it here in full, as something to think about with regards to a fairly simple process for creating a floating blocksize: one conservative enough to avoid abuse, and one that works in tandem with difficulty so no new mechanisms need to be made. I know there are probably a number of holes in the idea, but I think it's a start and could be made viable, so that we get a system that allows blocks to get bigger but doesn't run out of control such that only large miners can participate, and that also avoids difficulty manipulation of the kind possible with no max blocksize limit at all. Ok, here goes.

I've been stewing over this problem for a while and would just like to think aloud here....



I very much think the blocksize should be network regulated much like difficulty is used to regulate propagation windows based on the amount of computation cycles used to find hashes for particular difficulty targets. To clarify, when I say CPU I mean CPUs, GPUs, and ASICs collectively.



Difficulty is very much focused on the network's collective CPU cycles to control propagation windows (1 block every 10 mins), avoid 51% attacks, and distribute new coins.
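For reference, Bitcoin's retargeting rule works roughly like this: every 2016 blocks, difficulty is rescaled by the ratio of expected to actual elapsed time, with the adjustment clamped to a factor of four, so that blocks keep arriving about every 10 minutes. The sketch below is the rule in outline, not the reference client's exact target arithmetic.

```python
# Bitcoin's difficulty retarget rule in outline: every 2016 blocks,
# scale difficulty by (expected time / actual time), clamped to 4x,
# so blocks keep arriving roughly every 10 minutes.
RETARGET_BLOCKS = 2016
TARGET_SPACING = 600                      # seconds per block
EXPECTED = RETARGET_BLOCKS * TARGET_SPACING

def retarget(old_difficulty, actual_seconds):
    """New difficulty after a retarget period that took actual_seconds."""
    ratio = EXPECTED / actual_seconds
    ratio = max(0.25, min(4.0, ratio))    # clamp, as the reference client does
    return old_difficulty * ratio
```

So if blocks came in twice as fast as intended, difficulty doubles; if the period dragged on far too long, difficulty can fall by at most a factor of four per retarget.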



However, the max_blocksize is not related to the computing resources needed to validate transactions and propagate blocks regularly; it is geared much more toward network speed, the storage capacity of miners (and even of non-mining full nodes), and the verification of transactions (which, as I understand it, means hammering the disk). What we need to determine is whether the nodes supporting the network can quickly and easily propagate blocks without this affecting the propagation window.



Interestingly there is a connection between CPU resources, the calculation of the propagation window with difficulty targets, and network propagation health. If we have no max_blocksize limit in place, it leaves the network open to a special type of manipulation of the difficulty.



The propagation window can be manipulated in two ways as I see it. One is creating more blocks, as we classically know: throw more CPUs at block creation and we transmit more blocks; more computation power = more blocks produced, and the difficulty ensures the propagation window doesn't get manipulated this way. Timestamps in the blocks are used to measure whether more or fewer blocks were created in a certain period, and difficulty goes up or down accordingly. All taken care of.



The propagation window could also be manipulated in a more subtle way, though: by transmitting large blocks (huge blocks, in fact). Large blocks take longer to transmit, longer to verify, and longer to write to disk, and this kind of manipulation is unlikely to be noticed until a monster block gets pushed across the network (in a situation where there is no limit on blocksize, that is). Now, because there is only a 10-minute window, I'm guessing a block can't take longer than that to propagate. If it does, difficulty will sink and we have a whole new problem: manipulation of the difficulty through massive blocks. Massive blocks could mess with difficulty and push out smaller miners, causing all sorts of undesirable centralisations. In short, it would probably destroy the Bitcoin network.



So we need a maximum block size that is high enough that the vast majority of nodes are comfortable with it, yet not so big that it can be used to manipulate the difficulty by artificially slowing propagation across the network with massive blocks. By watching how well difficulty maintains the propagation window, we may be able to determine whether block propagation is slowing and whether the max_blocksize should be adjusted down to keep the propagation window stable.



Because the difficulty can potentially be manipulated this way, we may have a means of knowing what the Bitcoin network is comfortable with propagating. It could be determined thusly:



If the median size of the blocks transmitted in the last difficulty period is bumping up against the max_blocksize (the median being chosen so that one malicious entity, or a few, can't arbitrarily push the limit up), and the difficulty is "stable", increase the max_blocksize for the next difficulty period, say by 10% when the median is within 20% of the max_blocksize. But if the median block size for the last period is much lower, say less than half the current blocksize_limit, then lower the limit by 20% instead.



However, if the median size of the blocks transmitted in the last difficulty period is bumping up against the max_blocksize and the difficulty is NOT stable, don't increase the max_blocksize, since the network may not currently be healthy and changing the limit either way is a bad idea. Alternatively, in those situations lower the max_blocksize by 10% for the next difficulty period anyway (I'm not sure whether that is a good idea, though).



In either case, the 1 MB max_blocksize should be the floor: the limit should never shrink below it. Condensing all that down to pseudocode...



Code: IF (median(block sizes over the last difficulty period) is within 10% of current max_block_size
          AND new difficulty is **higher** than the previous period's difficulty)
      THEN raise max_block_size for the next difficulty period by 10%

otherwise,

Code: IF (median(block sizes over the last difficulty period) is within 10% of current max_block_size
          AND new difficulty is **lower** than the previous period's difficulty)
      THEN lower max_block_size for the next difficulty period by 10%, UNLESS that would take it below the 1 MB minimum
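The same rules, translated into a Python sketch so the two branches can be read side by side. The function name and the 1 MB floor constant are mine; the 10% thresholds are from the pseudocode above.

```python
# Sketch of SimonL's adjustment rule: move max_blocksize only when the
# median block size is pressing against the cap, in the direction that
# difficulty moved over the last retarget period.
MIN_LIMIT = 1_000_000   # 1 MB floor

def next_max_block_size(median_size, cur_limit, old_diff, new_diff):
    near_limit = median_size >= 0.9 * cur_limit   # "within 10%" of the cap
    if near_limit and new_diff > old_diff:
        return int(cur_limit * 1.10)              # network keeping up: grow
    if near_limit and new_diff < old_diff:
        return max(MIN_LIMIT, int(cur_limit * 0.90))  # struggling: shrink
    return cur_limit                              # otherwise leave it alone
```

Note the rule is deliberately inert when blocks aren't near the cap, so quiet periods don't ratchet the limit anywhere.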



Comparing the stability of the last difficulty period with the next one is what tells us whether the network is producing blocks at a regular rate. If the median size of blocks transmitted in the last difficulty period is bumping up against the limit and difficulty is going down, it could mean a significant number of nodes can't keep up, especially if the difficulty needs to move down: blocks aren't reaching all the nodes in time, and hashing capacity is getting cut off because nodes are too busy verifying the blocks they received. If difficulty is going up while the median block size is bumping up against the limit, there is a strong indication that all the nodes are processing the blocks they receive easily, so raising the max_blocksize limit a little should be OK. The one thing I'm not sure of is how to determine whether the difficulty is "stable"; I'm very much open to suggestions on the best way of doing that. Arguably, any definition of "stable" is arbitrary and could still allow manipulation of the max_blocksize, just over a longer and more sustained period, so I'm not entirely sure this approach can be made foolproof. How does the calculation of difficulty targets take these things into consideration?
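As one possible answer to the open question, "stable" could be defined as a bounded relative change between retargets. The 5% tolerance below is purely an assumption, and the arbitrariness SimonL worries about shows up exactly in that choice of number.

```python
# One assumed definition of "stable" difficulty: the last retarget moved
# it by no more than a fixed relative tolerance. The 5% figure is an
# illustrative choice, not a proposal from the thread.
def difficulty_is_stable(old_diff, new_diff, tolerance=0.05):
    """True if difficulty changed by at most `tolerance` (relative)."""
    return abs(new_diff - old_diff) / old_diff <= tolerance
```

A looser tolerance makes the limit-adjustment rule fire more often; a tighter one makes it freeze during any real growth, so the number would need the same kind of debate as the block size itself.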



OK, guys, tear it apart.