Unlike traditional banking where clients have only a few account numbers, with Bitcoin people can create an unlimited number of accounts (addresses). This can be used to easily track payments, and it improves anonymity.

Mashuri



Offline



Activity: 135

Merit: 100







Full Member

Re: How a floating blocksize limit inevitably leads towards centralization
February 27, 2013, 01:48:38 AM  #482

OK, this thread has been a bear to read, but I'm glad I did. I understand the desire to limit max block size due to bandwidth limits, and I certainly do not want a Google-esque datacenter centralization of mining. Since bandwidth is the primary issue (storage being secondary), I'm with the people who focus their solutions on bandwidth rather than on things like profits or hash rate. I like the idea of taking a bandwidth metric, like propagation time, of all connected nodes and using that to determine max block size. Done properly, the bitcoin network should be able to optimize itself to whatever the current average bandwidth happens to be, without overwhelming it.

https://www.onename.com/mashuri

https://www.bitrated.com/Mashuri/

MoonShadow



Offline



Activity: 1708

Merit: 1000









Legendary

Re: How a floating blocksize limit inevitably leads towards centralization
February 27, 2013, 01:52:01 AM  #483

Quote from: Mashuri on February 27, 2013, 01:48:38 AM
I like the idea of taking a bandwidth metric, like propagation time, of all connected nodes and using that to determine max block size. Done properly, the bitcoin network should be able to optimize itself to whatever the current average bandwidth happens to be, without overwhelming it.



The question is, how do we collect accurate data upon propagation time? And then how do we utilize said data in a way that will result in a uniform computation for the entire network?

"The powers of financial capitalism had another far-reaching aim, nothing less than to create a world system of financial control in private hands able to dominate the political system of each country and the economy of the world as a whole. This system was to be controlled in a feudalist fashion by the central banks of the world acting in concert, by secret agreements arrived at in frequent meetings and conferences. The apex of the systems was to be the Bank for International Settlements in Basel, Switzerland, a private bank owned and controlled by the world's central banks which were themselves private corporations. Each central bank...sought to dominate its government by its ability to control Treasury loans, to manipulate foreign exchanges, to influence the level of economic activity in the country, and to influence cooperative politicians by subsequent economic rewards in the business world."



- Carroll Quigley, CFR member, mentor to Bill Clinton, from 'Tragedy And Hope'

Mashuri



Offline



Activity: 135

Merit: 100







Full Member

Re: How a floating blocksize limit inevitably leads towards centralization
February 27, 2013, 02:02:25 AM

Last edit: February 27, 2013, 06:05:06 AM by Mashuri  #484

Quote from: MoonShadow on February 27, 2013, 01:52:01 AM
Quote from: Mashuri on February 27, 2013, 01:48:38 AM
I like the idea of taking a bandwidth metric, like propagation time, of all connected nodes and using that to determine max block size. Done properly, the bitcoin network should be able to optimize itself to whatever the current average bandwidth happens to be, without overwhelming it.



The question is, how do we collect accurate data upon propagation time? And then how do we utilize said data in a way that will result in a uniform computation for the entire network?


Yes, the metric is the hard part. I'm not familiar with the inner workings of the mining software so this may be an amateur question: Is there typically any bandwidth "downtime" during the ~10 minutes a miner is hashing away? If so, could a sort of "speed test" be taken with a uniform sized piece of data between nodes?



EDIT:

Another half-baked thought -- Couldn't each node also report the amount of time it took to download the last block, the aggregate of which could be used for determining size? I think I remember Gavin suggesting something similar.
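[Editor's note: a minimal sketch of what this "report the download time, aggregate it" idea could look like. Everything here is illustrative and not from the thread: the 30-second propagation target, the floor/ceiling clamps, and the choice of the median (which resists a few nodes reporting wildly wrong times) are all assumptions.]

```python
from statistics import median

def next_max_block_size(report_times_sec, last_block_bytes,
                        target_propagation_sec=30,
                        floor=1_000_000, ceiling=100_000_000):
    """Estimate a max block size from nodes' reported download times.

    report_times_sec: seconds each node reports it took to download
    the last block. The median resists outliers and dishonest reports.
    """
    med = median(report_times_sec)
    # Effective bandwidth implied by the median node, in bytes/sec.
    bandwidth = last_block_bytes / med
    # Largest size the median node could fetch within the target.
    proposed = int(bandwidth * target_propagation_sec)
    # Clamp so one bad sample can't swing the limit to an extreme.
    return max(floor, min(proposed, ceiling))

# A 500 kB block most nodes fetched in a few seconds suggests
# plenty of headroom under a 30-second propagation target.
print(next_max_block_size([2.0, 3.0, 4.0, 2.5, 60.0], 500_000))
```

The hard part MoonShadow raises remains: every node must arrive at the same number, so the reports themselves would have to be committed somewhere all nodes can see them identically.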


Realpra



Offline



Activity: 816

Merit: 1000







Hero Member

Re: How a floating blocksize limit inevitably leads towards centralization
February 27, 2013, 06:34:08 AM  #486

Quote from: retep on February 18, 2013, 06:08:14 PM
Quote from: Gavin Andresen on February 18, 2013, 05:14:32 PM
So... I start from "more transactions == more success"



I strongly feel that we shouldn't aim for Bitcoin topping out as a "high power money" system that can process only 7 transactions per second.



Hey, I want a pony too. But Bitcoin is an O(n) system, and we have no choice but to limit n.


Actually it's O(n*m), where m is the number of full clients. Did any of you guys remember my "swarm client" idea? It would move Bitcoin from O(n*m) to O(n), and the network would share the load of storage and processing both.



No one ever found flaws in it, and those who bothered to read it generally thought it was pretty neat. Just saying. Plus, it requires no hard fork and can coexist with current clients.



This would also kill the malice driven incentive for miners to drive out other miners as it would no longer work (only bother the WHOLE network).

Cheap and sexy Bitcoin card/hardware wallet, buy here: http://BlochsTech.com

MoonShadow



Offline



Activity: 1708

Merit: 1000









Legendary

Re: How a floating blocksize limit inevitably leads towards centralization
February 27, 2013, 07:54:15 AM

Last edit: February 27, 2013, 08:09:31 AM by MoonShadow  #487

Quote from: Realpra on February 27, 2013, 06:34:08 AM

Did any of you guys remember my "swarm client" idea? It would move Bitcoin from being O(n*m) to O(n) and the network would share the load of storage and processing both.





Searching the forum for "swarm client" begets nothing. Link?



EDIT: Nevermind, I found it. And I think that the main reason no one ever cited fault was because no one who knew the details of how the bitcoin block is actually constructed bothered to read it, or take your proposal seriously enough to respond. I read your proposal, and could find no details about how a swarm client could actually divide up the task of verification of blocks. That or I simply didn't understand it.



For example, pool miners already don't have to verify blocks or transactions. They never even see them, because that is unnecessary. The mining is the hashing of the 80-byte header, nothing more. Only if the primary nonce is exhausted is anything in the dataset of the block rearranged, and that is performed by the pool server. We could have blocks of a gig each, and that would have negligible effects on pool miners. And we don't need swarm clients to "verify the blockchain", because all but the most recent block has already been verified, unless you are starting up a fresh install of a full client. With light clients we can skip even that part, to a degree.
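[Editor's note: the "mining is the hashing of the 80-byte header" point can be made concrete with a toy sketch. The header layout (4+32+32+4+4+4 bytes, double-SHA256) matches Bitcoin's; the field values and the leading-zero-bytes "difficulty" are illustrative placeholders.]

```python
import hashlib
import struct

def header_bytes(version, prev_hash, merkle_root, timestamp, bits, nonce):
    # A Bitcoin block header is exactly 80 bytes:
    # 4 (version) + 32 (prev hash) + 32 (merkle root)
    # + 4 (timestamp) + 4 (bits) + 4 (nonce).
    return struct.pack("<I32s32sIII", version, prev_hash, merkle_root,
                       timestamp, bits, nonce)

def mine(version, prev_hash, merkle_root, timestamp, bits,
         leading_zero_bytes=2):
    """Grind the 4-byte nonce; nothing in the block body is touched.

    leading_zero_bytes is a toy difficulty target for illustration.
    """
    for nonce in range(2**32):
        h = header_bytes(version, prev_hash, merkle_root, timestamp,
                         bits, nonce)
        digest = hashlib.sha256(hashlib.sha256(h).digest()).digest()
        if digest[:leading_zero_bytes] == b"\x00" * leading_zero_bytes:
            return nonce, digest
    # Nonce space exhausted: only now would the pool server rearrange
    # the block contents to get a fresh merkle root.
    return None

hdr = header_bytes(2, b"\x00" * 32, b"\x11" * 32, 1361929658, 0x1a0575ef, 0)
print(len(hdr))  # 80
```

Whatever the block size, the miner's inner loop only ever sees these 80 bytes, which is why giant blocks cost pool miners essentially nothing.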

TierNolan



Offline



Activity: 1232

Merit: 1006







Legendary

Re: How a floating blocksize limit inevitably leads towards centralization
February 27, 2013, 10:47:00 AM  #488

Quote from: MoonShadow on February 27, 2013, 07:54:15 AM
I read your proposal, and could find no details about how a swarm client could actually divide up the task of verification of blocks. That or I simply didn't understand it.



It looks like a node picks a random number between 0 and N-1 and then checks transactions where id = tx-hash mod N.



Quote
For example, pool miners already don't have to verify blocks or transactions.

In fact, it would be much easier to write software that doesn't do it at all. Atm, the minting fee is much higher than the tx fees, so it is more efficient to just mint and not bother with the hassle of handling transactions.



If there is a 0.1% chance that a transaction is false, then including it in a block effectively costs the miner 25 * 0.1% = 0.025BTC, since if it is invalid, and he wins the block, the block will be discarded by other miners.



P2P pools would be better set up so that they don't risk it, until tx fees are the main source of income.



Quote
And we don't need swarm clients to "verify the blockchain", because all but the most recent has already been verified, unless you are starting up a fresh install of a full client. With light clients we can skip even that part, to a degree.



Having each new client verify a random 1% of the blocks would be a reasonable thing to do, if combined with an alert system. This would keep miners honest.

1LxbG5cKXzTwZg9mjL3gaRE835uNQEteWF
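[Editor's note: a sketch of the two mechanisms in this post. The shard rule "id = tx-hash mod N" is taken directly from the text; the shard count of 16 and the helper names are illustrative. The expected-loss function reproduces the post's arithmetic: 0.1% invalid chance times the 25 BTC subsidy.]

```python
def my_shard(tx_hash: int, n_shards: int) -> int:
    # A node checks only transactions whose hash falls in its shard.
    return tx_hash % n_shards

def should_verify(tx_hash: int, my_id: int, n_shards: int) -> bool:
    return my_shard(tx_hash, n_shards) == my_id

def expected_loss(p_invalid: float, subsidy_btc: float) -> float:
    # Expected cost of including an unverified transaction: if it is
    # invalid and the miner wins the block, the block is discarded
    # and the whole subsidy is lost.
    return p_invalid * subsidy_btc

print(round(expected_loss(0.001, 25), 6))  # 0.025
```

The arithmetic shows why skipping verification can look rational today: per-transaction fees are far below the 0.025 BTC expected loss, so the cheapest strategy is to mint empty blocks and handle no transactions at all.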

zebedee

Hero Member



Offline



Activity: 668

Merit: 500









Donator

Re: How a floating blocksize limit inevitably leads towards centralization
February 27, 2013, 01:51:31 PM  #489

Quote from: retep on February 19, 2013, 06:03:54 AM
I would hate to see the limit raised before the most inefficient uses of blockchain space, like satoshidice and coinad, change the way they operate.

Who gets to decide what's inefficient? You? That's precisely the problem - trying to centralize the decision. It should be made by those doing the work according to their own economic incentives and desires.



SD haters (and I'm not particularly a fan) like Luke-jr get to not include their txns. Others like Ozcoin apparently have no issue with the likes of SD and are happy to take their money. Great, everyone gets a vote according to the effort they put in.



Quote from: retep on February 19, 2013, 06:03:54 AM In addition I would hate to see alternatives to raising the limit fail to be developed because everyone assumes the limit will be raised. I also get the sense that Gavin's mind is already made up and the question to him isn't if the limit will be raised, but when and how. That may or may not be actually true, but as long as he gives that impression, and the Bitcoin Foundation keeps promoting the idea that Bitcoin transactions are always going to be almost free, raising the block limit is inevitable.

Ah, now we see your real agenda - you want to fund your pet projects of off-chain transaction consolidation.



If that is such a great idea - and it may well be, I have no problem with it - then please realise that it will get funded.



If it isn't getting funded, then please ask yourself why.



But don't try and force others to subsidize what you want to see happen. Why not do it yourself, if it's a winning idea for the end users?



Likely neither you nor the rest are doing it because there's no real economic incentive to do so - for now, perhaps. But that's what entrepreneurship is all about.

zebedee

Hero Member



Offline



Activity: 668

Merit: 500









Donator

Re: How a floating blocksize limit inevitably leads towards centralization
February 27, 2013, 02:49:57 PM  #490

Quote from: johnyj on February 19, 2013, 03:53:18 PM

Before, I support the change to protocol in a carefully planned way to improve the end user experience, but recently I discovered that you can double spend on both original chain and the new chain after a hard fork, then it means the promise of prevent double-spending and limited supply is all broken, that is much severe than I thought





That simply means that, after a very short period of chaos post-fork, simple economic incentives would VERY quickly force a consensus on one of the chains. The chaos would not be permitted to continue, by anyone, whichever side they personally want to "win", as it would cost them too much.

MoonShadow



Offline



Activity: 1708

Merit: 1000









Legendary

Re: How a floating blocksize limit inevitably leads towards centralization
February 27, 2013, 04:16:55 PM  #491

Quote from: TierNolan on February 27, 2013, 10:47:00 AM
Quote from: MoonShadow on February 27, 2013, 07:54:15 AM
I read your proposal, and could find no details about how a swarm client could actually divide up the task of verification of blocks. That or I simply didn't understand it.



It looks like a node picks a random number between 0 and N-1 and then checks transactions where id = tx-hash mod N.




Who and why? That's the vague part. Are miners not checking the blocks themselves; are they depending upon others to spot check sections? How does that work, since it's the miners who will feel the losses should they mine a block with an invalid transaction? Realistically, it'd be at least as effective to permit non-mining full clients to 'spot check' blocks in full, but on a random scale. Say, only 30% of the blocks that they see do they check before they forward. All blocks should be fully checked before being integrated into the local blockchain, and I can't see a way around that process.



Quote

Having each new client verify a random 1% of the blocks would be a reasonable thing to do, if combined with an alert system. This would keep miners honest.



But the miners would still need to check those blocks, and eventually so would everyone else. This could introduce a new network attack vector.

TierNolan



Offline



Activity: 1232

Merit: 1006







Legendary

Re: How a floating blocksize limit inevitably leads towards centralization
February 27, 2013, 05:09:55 PM  #492

Quote from: MoonShadow on February 27, 2013, 04:16:55 PM
But the miners would still need to check those blocks, and eventually so would everyone else. This could introduce a new network attack vector.



I think miners are going to need to verify everything, at the end of the day. However, it may be possible to do that in a p2p way.



I made a suggestion in another thread about having "parity" rules for transactions.



A transaction of the form:



Input 0: tx_hash=1234567890/out=2

Input 1: tx_hash=2345678901/out=1

Output 0: <some script>



would have mixed parity, since its inputs come from one transaction with an even hash and one with an odd hash.



However, a parity rule could be added that requires either odd or even parity.



Input 0: tx_hash=1234567897/out=0

Input 1: tx_hash=2345678901/out=1

Output 0: <some script>



If the block height is even, then only even parity transactions would be allowed, and vice-versa for odd.



If a super-majority of the network agreed with that rule, then it wouldn't cause a fork. Mixed parity blocks would just be orphaned.



The nice feature of the rule is that it allows blocks to be prepared in advance.



If the next block is an odd block, then a P2P miner system could broadcast a list of proposed transactions for inclusion and have them verified. As long as all the inputs into the proposed transactions are from even transactions, they won't be invalidated by the next block. It will only have transactions with inputs from odd transactions, under the rule.



This gives the P2P system time to reject invalid transactions.



All nodes on the network could be ready to switch to the next block immediately, without having to even read the new block (other than check the header). Verification could happen later.
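[Editor's note: the parity rule above can be sketched directly from the post's two examples. A transaction's parity is judged by the hashes of the transactions its inputs spend; the tx hashes below are the toy integers from the post, not real 256-bit hashes.]

```python
EVEN, ODD, MIXED = "even", "odd", "mixed"

def tx_parity(input_tx_hashes):
    """Parity of a transaction, judged by the hashes of the
    transactions its inputs spend."""
    parities = {h % 2 for h in input_tx_hashes}
    if parities == {0}:
        return EVEN
    if parities == {1}:
        return ODD
    return MIXED

def allowed_in_block(input_tx_hashes, block_height):
    # Even-height blocks take even-parity transactions, odd take odd;
    # mixed-parity transactions are never valid under the rule.
    required = EVEN if block_height % 2 == 0 else ODD
    return tx_parity(input_tx_hashes) == required

# The two examples from the post:
print(tx_parity([1234567890, 2345678901]))  # mixed: one even, one odd
print(tx_parity([1234567897, 2345678901]))  # odd: both hashes odd
print(allowed_in_block([1234567897, 2345678901], 101))  # True: odd block
```

The pre-preparation property follows: while an odd block is being mined, any even-parity transaction broadcast for the block after it cannot have its inputs consumed, since the odd block may only spend odd outputs.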

Realpra



Offline



Activity: 816

Merit: 1000







Hero Member

Re: How a floating blocksize limit inevitably leads towards centralization
February 27, 2013, 05:55:28 PM  #493

Quote from: MoonShadow on February 27, 2013, 07:54:15 AM
Quote from: Realpra on February 27, 2013, 06:34:08 AM

Did any of you guys remember my "swarm client" idea? It would move Bitcoin from being O(n*m) to O(n) and the network would share the load of storage and processing both.





Searching the forum for "swarm client" begets nothing. Link?

https://bitcointalk.org/index.php?topic=87763.0

(Second search link)



Quote
I read your proposal, and could find no details about how a swarm client could actually divide up the task of verification of blocks. That or I simply didn't understand it.

The details are a little hairy, but it is actually very simple: It is difficult to validate, BUT easy to show a flaw in a block.



To show a block is invalid just one S-client needs to share with the rest of the network that it has a double spend. This accusation can be proved by sending along the transaction history for the address in question.

This history cannot be faked due to the nature of the blocks tree-data-structure.



Even if the S-clients keep a full history of each address they watch, and exchange this in cases of accusations, the computer power saved should still be substantial, despite many addresses being tangled together.



There was also talk of combining this with a 5-10 year ledger system which would put a cap on the running blockchain size.
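[Editor's note: the accusation mechanism described above amounts to exhibiting two transactions that spend the same output. A minimal sketch, with outpoints modeled as (tx_hash, output_index) tuples; the dict-based transaction shape and names are illustrative, not from the proposal.]

```python
def find_double_spend(transactions):
    """Return a pair of transactions spending the same outpoint,
    or None. An S-client that finds such a pair can broadcast both
    as a self-contained proof that the containing block is invalid."""
    seen = {}  # outpoint -> first transaction observed spending it
    for tx in transactions:
        for outpoint in tx["inputs"]:
            if outpoint in seen:
                return seen[outpoint], tx
            seen[outpoint] = tx
    return None

block = [
    {"id": "a", "inputs": [("1234", 0)]},
    {"id": "b", "inputs": [("5678", 1)]},
    {"id": "c", "inputs": [("1234", 0)]},  # same outpoint as "a"
]
proof = find_double_spend(block)
print(proof[0]["id"], proof[1]["id"])  # a c
```

Note this only covers the easy case where both spends sit in the material one client actually sees; MoonShadow's objection below is precisely that the two conflicting transactions rarely reach the same node.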


MoonShadow



Offline



Activity: 1708

Merit: 1000









Legendary

Re: How a floating blocksize limit inevitably leads towards centralization
February 27, 2013, 08:08:53 PM  #494

Quote from: Realpra on February 27, 2013, 05:55:28 PM
Quote from: MoonShadow on February 27, 2013, 07:54:15 AM
Quote from: Realpra on February 27, 2013, 06:34:08 AM

Did any of you guys remember my "swarm client" idea? It would move Bitcoin from being O(n*m) to O(n) and the network would share the load of storage and processing both.





Searching the forum for "swarm client" begets nothing. Link?

https://bitcointalk.org/index.php?topic=87763.0

(Second search link)



Quote
I read your proposal, and could find no details about how a swarm client could actually divide up the task of verification of blocks. That or I simply didn't understand it.

The details are a little hairy, but it is actually very simple: It is difficult to validate, BUT easy to show a flaw in a block.



To show a block is invalid just one S-client needs to share with the rest of the network that it has a double spend. This accusation can be proved by sending along the transaction history for the address in question.

This history cannot be faked due to the nature of the blocks tree-data-structure.




Not true. A double spend would occur at nearly the same time. Due to propagation rules that apply to loose transactions, it's very unlikely that any single node (swarm or otherwise) will actually see both transactions. And what if it did? If it could sound an alarm about it, which one is the valid one? The nodes cannot tell. And even responding to an alarm implies some degree of trust in the sender, which opens up an attack vector if an attacker can spoof nodes and flood the network with false alarms.





Furthermore, a double spend can't get into a block even if that miner doesn't bother to validate it first, since that would imply that the miner is participating in an attack on the network himself, since he shouldn't be able to see both competing transactions.

Quote
Even if the S-clients keep a full history of each address they watch and exchange this in cases of accusations the computer power saved should still be substantial despite many addresses being tangled together.



This would serve little purpose, since addresses are created and abandoned at such a rapid rate.



Quote
There was also talk of combining this with a 5-10 year ledger system which would put a cap on the running blockchain size.



Pruning would also put a cap on the running blockchain size, and doesn't require a hard fork. It's also the purpose of the merkle tree from the beginning. Satoshi thought about that, too.

Not true. A double spend would occur at nearly the same time. Due to the propagation rules that apply to loose transactions, it's very unlikely that any single node (swarm or otherwise) will actually see both transactions. And what if it did? If it could sound an alarm about it, which one is the valid one? The nodes cannot tell. And even responding to an alarm implies some degree of trust in the sender, which opens up an attack vector if an attacker can spoof nodes and flood the network with false alarms.

Furthermore, a double spend can't get into a block even if the miner doesn't bother to validate it first, since that would imply the miner is participating in an attack on the network himself: he shouldn't otherwise be able to see both competing transactions.

This would serve little purpose, since addresses are created and abandoned at such a rapid rate.
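The "very unlikely to see both transactions" argument rests on the first-seen relay rule: a node accepts the first transaction spending a given output and silently drops any later conflicting spend, so it never relays the second one. A minimal sketch of that rule (hypothetical class and method names, not the actual client code):

```python
class Mempool:
    """First-seen rule for loose (unconfirmed) transactions."""

    def __init__(self):
        # (txid, vout) outpoint -> txid of the accepted spender
        self.spent_outputs = {}

    def accept(self, txid, inputs):
        """Accept and relay a tx only if none of its inputs conflict
        with an already-seen spend; otherwise drop it silently."""
        if any(outpoint in self.spent_outputs for outpoint in inputs):
            return False  # conflicting double spend: not relayed
        for outpoint in inputs:
            self.spent_outputs[outpoint] = txid
        return True

pool = Mempool()
print(pool.accept("tx_a", [("coin1", 0)]))  # first spend is accepted
print(pool.accept("tx_b", [("coin1", 0)]))  # conflicting spend is dropped
```

Because the losing transaction is dropped rather than forwarded, the two spends race outward from their origin points and most nodes only ever see one of them, which is exactly why a network-wide "double-spend alarm" is hard to build.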

tvbcof
Legendary
Offline
Activity: 3332
Merit: 1140

Re: How a floating blocksize limit inevitably leads towards centralization
February 27, 2013, 09:26:20 PM #495

Quote from: MoonShadow on February 27, 2013, 08:08:53 PM
...



Pruning would also put a cap on the running blockchain size, and doesn't require a hard fork. It's also what the merkle tree was for from the beginning. Satoshi thought about that, too.





It strikes me that Satoshi seemed more sensitive to system footprint than many of those who came after. Both in design and in configuration he seemed to have left Bitcoin in a condition which was suitable more for a reliable backing and clearing solution than as a competitive replacement for centralized systems such as PayPal.



By this I mean that the latency inherent in the Bitcoin-like family of crypto-currencies is always going to be a sore point for Joe Sixpack to use in native and rigorous form for daily purchases. And the current block size is a lingering artifact of the time period of his involvement (actually a guess on my part, without looking through the repository).



I was disappointed that early development focus went to wallet encryption, prettying up the GUI, and the multi-sig work, to the extent that this came at the expense of merkle-tree pruning. I personally decided to make lemonade out of lemons, to some extent, noting that although I thought the priorities and direction were a bit off, the chosen course would probably balloon the market cap more quickly, and I could try to make a buck off it no matter what the end result of Bitcoin might be.




MoonShadow
Legendary
Offline
Activity: 1708
Merit: 1000

Re: How a floating blocksize limit inevitably leads towards centralization
February 28, 2013, 12:05:46 AM #497

Quote from: misterbigg on February 27, 2013, 11:16:00 PM
Quote from: MoonShadow on February 27, 2013, 01:52:01 AM
...how do we collect accurate data upon propagation time? And then how do we utilize said data ...

Quite simply, you don't. There is no obvious way to collect these statistics in a way that is not vulnerable to spoofing or gaming by miners. That's why I advocate the voting method in my other post.


Ah, yeah. That's why I asked the question that way, because I didn't think that it could be done, and was highlighting the root problem with this method.

misterbigg
Legendary
Offline
Activity: 1064
Merit: 1001

Re: How a floating blocksize limit inevitably leads towards centralization
February 28, 2013, 03:00:30 AM #498

Quote from: MoonShadow on February 28, 2013, 12:05:46 AM
That's why I asked the question that way, because I didn't think that it could be done, and was highlighting the root problem with this method.

Yep, I don't think it can be done either. At least, not in a way that can't be gamed. And any system that can be gamed is really no different from a voting system. So we might as well just make it a voting system and let each miner decide the criteria for how to vote.

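A miner-vote scheme of the kind being suggested could be as simple as each miner embedding a preferred size in its coinbase and the limit following the median vote over a rolling window. The sketch below is a hypothetical illustration of that idea (the function name, the growth clamp, and the window are all my assumptions, not a proposal from this thread; later proposals such as BIP 100 took a similar shape):

```python
from statistics import median

def next_max_block_size(votes, floor=1_000_000, cap_growth=2.0,
                        current=1_000_000):
    """Take the median of miner votes (bytes), clamped so the limit
    never drops below `floor` and never more than doubles per period."""
    if not votes:
        return current  # no votes cast: limit is unchanged
    m = median(votes)
    return int(max(floor, min(m, current * cap_growth)))

# A single huge outlier vote cannot drag the limit up on its own:
votes = [1_000_000, 1_500_000, 2_000_000, 2_000_000, 8_000_000]
print(next_max_block_size(votes))  # -> 2000000
```

Using the median rather than the mean means a minority of miners voting for extreme sizes (in either direction) has no effect, which is the main reason vote-based schemes tend to prefer it.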

zebedee
Donator
Hero Member
Offline
Activity: 668
Merit: 500

Re: How a floating blocksize limit inevitably leads towards centralization
February 28, 2013, 03:04:21 AM #499

Quote from: misterbigg on February 21, 2013, 07:15:12 AM
Quote from: Nagato on February 21, 2013, 07:03:11 AM
If we want to cap the download overhead for the latest block at, say, 1%, we need to be able to download MAX_BLOCKSIZE bytes within 6 seconds on average, so that we can spend 99% of the time hashing.



At 1MB, you would need a ~1.7Mbps connection to keep downloading time to 6s.

At 10MB, 17Mbps

At 100MB, 170Mbps



and you start to see why even 100MB block size would render 90% of the world population unable to participate in mining.

Even at 10MB, it requires investing in a relatively high speed connection.

Thank you. This is the most clear explanation yet that explains how an increase in the maximum block size raises the minimum bandwidth requirements for mining nodes.
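For reference, the quoted figures can be reproduced with straightforward arithmetic. A minimal sketch, assuming 1 MB = 2^20 bytes and roughly 20% protocol overhead on top of the raw payload (that overhead factor is my assumption, chosen because it approximately matches the 1.7 / 17 / 170 Mbps numbers above):

```python
def min_bandwidth_mbps(block_size_mb, download_seconds=6.0, overhead=0.2):
    """Minimum link speed (Mbps) needed to fetch a block of the given
    size within `download_seconds`, padded by a protocol overhead factor."""
    bits = block_size_mb * 2**20 * 8          # payload size in bits
    return bits * (1 + overhead) / download_seconds / 1e6

for size in (1, 10, 100):
    print(f"{size} MB block: ~{min_bandwidth_mbps(size):.1f} Mbps")
```

The key point survives any reasonable choice of overhead: the required bandwidth scales linearly with the block size limit, so every 10x increase in MAX_BLOCKSIZE is a 10x increase in the minimum connection speed for a competitive miner.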



Hmm. The header can be downloaded in parallel with, or separately from, the block body, and hashing can start after receiving just the header: a matter of milliseconds. Perhaps a "quick" list of outputs spent by the block would be useful for building non-trivial blocks that don't include double-spends, but that would be ~5% of the block size? Plenty of room for "optimization" here were it ever an issue.

Fake headers / tx lists that don't match the actual body? That's a black mark for the dude who gave it to you as untrustworthy. Too many black marks and you ignore future "headers" from him as a proven time-waster.

Build up trust with your peers, just like real life.
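The "black mark" idea above amounts to a simple per-peer misbehavior counter with a ban threshold. A minimal sketch, with a hypothetical class name and threshold (the real client's peer-scoring details are not specified in this thread):

```python
class PeerScore:
    """Track mismatches between a peer's announced headers/tx lists
    and the block bodies it later delivers; distrust repeat offenders."""

    BAN_THRESHOLD = 3  # assumed cutoff, tune to taste

    def __init__(self):
        self.black_marks = {}  # peer_id -> mismatch count

    def report_mismatch(self, peer_id):
        """Record one black mark against a peer."""
        self.black_marks[peer_id] = self.black_marks.get(peer_id, 0) + 1

    def is_trusted(self, peer_id):
        """Peers below the threshold still get their headers mined on."""
        return self.black_marks.get(peer_id, 0) < self.BAN_THRESHOLD

scores = PeerScore()
for _ in range(3):
    scores.report_mismatch("peer9")
print(scores.is_trusted("peer9"))  # -> False
print(scores.is_trusted("peer2"))  # -> True
```

The design choice is the same one the post makes informally: lying is cheap once, but a peer that repeatedly announces headers its bodies don't match gets cut off, so sustained header spoofing costs the attacker its connections.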