bullioner (Full Member, Activity: 156, Merit: 100)
review of proposals for adaptive maximum block size - February 21, 2013, 07:55:21 PM #1

Having spent a long time reading all of "How a floating blocksize limit inevitably leads towards centralization" and related threads, I want to summarise the proposals for introducing adaptive maximum block size to the protocol.



For focus's sake, please only join in with this thread if you are willing to operate under these assumptions for the scope of the thread:



* In the long term, the incentive to secure bitcoin's transaction log must come from transaction fees.

* Bitcoin should have potential scalability for everyone on the planet to make regular use of it, but equilibria are required to ensure scaling happens at a manageable rate.

I appreciate that not everyone shares those assumptions, but please keep this thread for a discussion that locally accepts them for the sake of the discussion!



The idea of an adaptive algorithm has to be to make use of a negative feedback loop to achieve a desired equilibrium. The canonical example of this in bitcoin is the way the difficulty of solving a block is related to the frequency of recently-found blocks, producing an equilibrium around the interval of one block every ten minutes.
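That feedback loop can be sketched in code. Below is a toy model of the difficulty retarget (the 600-second spacing, 2016-block window, and factor-of-4 clamp are Bitcoin's real constants; the function itself is a simplification that ignores the compact target encoding):

```python
# Toy model of Bitcoin's difficulty retarget: a negative feedback loop that
# steers block production toward one block per 600 seconds.
TARGET_SPACING = 600          # seconds per block
RETARGET_INTERVAL = 2016      # blocks per adjustment window

def retarget(old_difficulty, actual_timespan):
    """Return the new difficulty after one adjustment window.

    Blocks found faster than intended raise difficulty; slower lowers it.
    Like the real rule, the step is clamped to a factor of 4 per window.
    """
    expected = TARGET_SPACING * RETARGET_INTERVAL
    clamped = max(expected // 4, min(actual_timespan, expected * 4))
    return old_difficulty * expected / clamped

# Blocks arriving twice as fast as intended: difficulty doubles.
print(retarget(1000.0, 600 * 2016 // 2))   # 2000.0
```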



In the rest of this post I evaluate several proposals based on the equilibrium they are attempting to establish, making my own thoughts on the matter clearer towards the end.



First up we have:



Quote from: jojkaart on February 19, 2013, 01:22:55 AM How about tying the maximum block size to mining difficulty?

[...]



This provoked a fair bit of discussion. The idea seems to be that miners will quit if it's no longer profitable for them to maintain the full transaction log and mine. It is unclear what equilibrium objectives this approach has, and I find it difficult to intuitively say what equilibria, if any, would be achieved by this adaptation.



Next up we have:



Quote from: Gavin Andresen on February 19, 2013, 03:17:17 PM [...]

Second half-baked thought:



One reasonable concern is that if there is no "block size pressure" transaction fees will not be high enough to pay for sufficient mining.



Here's an idea: Reject blocks larger than 1 megabyte that do not include a total reward (subsidy+fees) of at least 50 BTC per megabyte.



[...]



It should be clear that this does not scale very far. The cost is linear, and is predetermined per unit of data. By the time you've reached blocks of 8 MB, transactors are spending 100% of the monetary base per annum in order to have the transaction log maintained. The equilibrium is that block size is simply limited by the high cost of each transaction, but the equilibrium is not precisely statable in terms of desirable properties of the system as a whole.
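The arithmetic behind that claim is easy to check (assuming the eventual ~21M BTC monetary base and one block per ten minutes):

```python
# Checking the claim: at 50 BTC of required reward per megabyte, 8 MB blocks
# consume roughly the entire eventual monetary base in fees every year.
BLOCKS_PER_YEAR = 6 * 24 * 365      # one block per ten minutes
MONETARY_BASE = 21_000_000          # BTC, eventual total supply

reward_per_block = 50 * 8           # BTC: 50 BTC/MB at 8 MB blocks
annual_fees = reward_per_block * BLOCKS_PER_YEAR
print(annual_fees)                              # 21024000
print(round(annual_fees / MONETARY_BASE, 3))    # 1.001, i.e. ~100% per annum
```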



We also have:



Quote from: Nagato on February 20, 2013, 08:52:03 AM [...]

How about

*To increase max block size by n%, more than 50% of fee-paying transactions (must meet a minimum fee threshold to be counted) during the last difficulty window were not included in the next X blocks. Likewise we reduce the max block size by n% (down to a minimum of 1MB) whenever 100% of all paying transactions are included in the next X blocks.

[...]



The issue with this is: how do you set the minimum fee threshold? You could set it to a value that makes sense now, but if it turned out that bitcoin could scale really, really high, the minimum fee threshold would itself turn out to be too high, and it is not itself adaptive. This approach is going along the right lines, but it doesn't seem to stem from a quantitative, fundamental objective to do with the bitcoin network.



Also:



Quote from: misterbigg on February 06, 2013, 03:53:47 PM [...]

Here's yet another alternative scheme:



1) Block size adjustments happen at the same time that network difficulty adjusts



2) On a block size adjustment, the size either stays the same or is increased by a fixed percentage (say, 10%). This percentage is a baked-in constant requiring a hard fork to change.



3) The block size is increased if more than 50% of the blocks in the previous interval have a size greater than or equal to 90% of the max block size. Both of the percentage thresholds are baked in.

[...]



This doesn't take the opportunity to link the size to transaction fees in any way, and seems vulnerable to spamming.



Then there's:



Quote from: TierNolan on February 20, 2013, 09:22:13 AM

Since this locks in a minimum fee per transaction MB, what about scaling it with the square of the fees.



For example, every 2016 blocks, it is updated to



sqrt(MAX_BLOCK_SIZE in MB) = median(fees + minting) / 50

[...]



Same objection as to the 50 BTC per MiB proposal: it just doesn't scale very far before all the value is being eaten by fees. This time we get to around 64 MiB per block before 100% of the monetary base is spent per annum on securing the transaction log. Again, it is unclear what the objectives are for any equilibrium created.
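Again the arithmetic is quick to verify under the same assumptions (~21M BTC base, one block per ten minutes):

```python
import math

# TierNolan's square-law rule rearranged: required reward per block is
# 50 * sqrt(max_block_size_MB), so a 64 MiB block needs 400 BTC of reward.
BLOCKS_PER_YEAR = 6 * 24 * 365
MONETARY_BASE = 21_000_000          # BTC, eventual total supply

size_mb = 64
reward = 50 * math.sqrt(size_mb)
print(reward)                                   # 400.0 BTC per block
annual = reward * BLOCKS_PER_YEAR
print(round(annual / MONETARY_BASE, 3))         # 1.001, i.e. ~100% per annum
```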



Nearly finally, we have people suggesting that max block size doubles whenever block reward halves. No dynamic equilibrium is created by doing this, and it's pure hope that the resulting numbers might produce the right incentives for network participants.



It seems to me that the starting point should be "what percentage of the total monetary base should transactors pay each year, in order to secure the transaction log?". This quantity has both economic meaning and technical meaning within the protocol, and it keeps its meaning as the system grows. Why transactors? Because in the long term holders pay nothing (the initial-distribution schedule winds down), and thus transactors pay everything. That seems immutable. Why per year? Because it makes it easy for humans to reason about; annual amounts can be converted to per-block amounts once we're done setting the value.



Thus, this:



Quote from: misterbigg on February 06, 2013, 09:02:16 AM

1) Block size adjustments happen at the same time that network difficulty adjusts (every 210,000 tx?)



2) On a block size adjustment, the size either stays the same or is increased by a fixed percentage (say, 10%). This percentage is a baked-in constant requiring a hard fork to change.



3) The block size is increased if more than 50% of the blocks in the previous interval have a sum of transaction fees greater than 50BTC minus the block subsidy. The 50BTC constant and the threshold percentage are baked in.



is a pretty decent starting point. It allows unlimited growth of the maximum block size, but as soon as transaction fees, which are what secure the transaction log, dwindle below the threshold, the maximum block size shrinks again. Equilibrium around a desirable property of the system as a whole! Easily expressed as a precise quantitative statement ("long term, transactors should pay n% of the total monetary base per annum to those securing the transaction log, so long as the max block size is above its floor value").



The exact proposal quoted above sets the amount at 6.25%-12.5% p.a., whereas I intuitively think it should be more like 3%. I would probably also just state it in terms of the mean transaction fee over the last N blocks, as that is more directly linked to the objective than whether half of transactions are above a threshold. 3% p.a. would work out at a mean of 12 BTC per block, so would be 0.0029 BTC per transaction to lift the block size off its 1 MiB floor. Seems about right.
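Those figures check out; here is the calculation, assuming the ~250-byte average transaction size used elsewhere in this thread (that size is an assumption, not part of the proposal itself):

```python
# 3% of the monetary base per annum, converted to per-block and
# per-transaction amounts at the 1 MiB floor block size.
BLOCKS_PER_YEAR = 6 * 24 * 365
MONETARY_BASE = 21_000_000          # BTC, eventual total supply

fee_per_block = 0.03 * MONETARY_BASE / BLOCKS_PER_YEAR
print(round(fee_per_block, 2))                  # 11.99, i.e. ~12 BTC per block

txs_per_block = (1 << 20) // 250                # ~4194 transactions per MiB
print(round(fee_per_block / txs_per_block, 4))  # 0.0029 BTC per transaction
```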



My full reply and subsequent discussion with misterbigg is at https://bitcointalk.org/index.php?topic=144895.msg1546674#msg1546674 .



Hope that's a useful summary of all the adaptive algorithms proposed so far, even if you don't agree with the assumptions or my conclusions.

ArticMine (Legendary, Activity: 2268, Merit: 1041, Monero Core Team)
Re: review of proposals for adaptive maximum block size - February 21, 2013, 11:27:34 PM #3

Quote from: bullioner on February 21, 2013, 06:38:51 PM



Interesting to see the various proposals for an adaptive protocol level maximum block size. It seems clear that adaption should occur based on transaction fees, since they are supposed to take over as the main incentive for securing the transaction log once initial distribution winds down further. This means that this is the closest so far to achieving an equilibrium based on incentives which optimise for the properties I, as a bitcoin user, want: https://bitcointalk.org/index.php?topic=140233.msg1507328#msg1507328 . That is: first and foremost I want the (transaction log of the) network to be really well secured. Once that is achieved, I want more transactions to be possible, so long as doing so doesn't destroy incentives for those securing the network.



That said, I think the proposed rate is too high. We need to budget what *transactors* in the system should need to pay in order to ensure robust security of the transaction log, and not step too far over that level when designing the equilibrium point. 50 BTC per block works out at 12.5% of the monetary base per annum once all coins are created. This seems excessive, though admittedly it is what *holders* of bitcoins are currently paying via the inflation schedule.



Although it is difficult to estimate, the level of transaction fees required, long term, to maintain the security of the transaction log, should be the starting point when designing the equilibrium via which an adaptive maximum block size will be set (assuming one doesn't buy Mike's optimism about those incentives being solved by other means).



Suppose the system needs 3% of the monetary base to be spent per annum on securing the transaction log. Then, in the long term, that works out at pretty much 12 BTC per block. Could just call it 12 BTC per block from the start to keep it simple. So once the scheme is in place and max block size is still 1 MiB, the mean transaction fee over the last N blocks will need to be 0.0029 BTC to provoke an increase in max block size. That seems pretty doable via market forces. Then, block size increases, and mean transaction fee decreases, but total transaction fees remain around the same, until an equilibrium is reached where either block space is no longer scarce, or enough miners, for other reasons, decide to limit transaction rate softly.
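To illustrate the hoped-for dynamics, here is a toy simulation of such a feedback rule (the 10% step, the 12 BTC threshold, and especially the demand curve are illustrative assumptions of mine, not part of any concrete proposal):

```python
# Toy feedback loop: grow max block size 10% per window while mean fees per
# block exceed 12 BTC; shrink 10% (floored at 1 MiB) otherwise.
FLOOR_MIB = 1.0
THRESHOLD = 12.0   # BTC of fees per block needed to justify growth
STEP = 0.10        # 10% adjustment per difficulty window

def adjust(size_mib, mean_fees_per_block):
    """One retarget step: grow if fees clear the bar, else shrink."""
    if mean_fees_per_block > THRESHOLD:
        return size_mib * (1 + STEP)
    return max(FLOOR_MIB, size_mib * (1 - STEP))

def total_fees(size_mib):
    """Hypothetical demand curve: scarcer block space commands higher fees."""
    return 24.0 / size_mib ** 0.5

size = 1.0
for _ in range(50):
    size = adjust(size, total_fees(size))
# The loop settles into a band around ~4 MiB, where total fees cross 12 BTC.
print(round(size, 2))
```

The point is not the specific numbers but the shape: the size ratchets up while fees clear the bar, overshoots, and then oscillates around the level where total fees equal the security budget.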



So my question is: apart from waving fingers in the air, are there any good ways to estimate what percentage of the monetary base should be spent by users of the system as a whole, per annum, in order to adequately ensure security of the transaction log? It's really a risk management question. As is most of the rest of the design of Bitcoin.


This ultimately comes down to pricing Bitcoin transaction cost with respect to the competition, and a good metric here is the banking and credit card industry based on the USD. The metric that is missing is the velocity of money. If we use the figures for the USD M2 money supply http://research.stlouisfed.org/fred2/data/M2V.txt we get for the most recent figures 1.535 per quarter or 6.14 per year. With 3% of the monetary base, the cost per transaction becomes 0.5%. This is still very high since it will make Bitcoin barely competitive with credit cards even for small transactions.
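For what it's worth, the calculation behind that 0.5% figure:

```python
# ArticMine's figures: USD M2 velocity of ~1.535 per quarter, i.e. the money
# supply turns over ~6.14 times a year. A 3% p.a. security budget spread over
# that transaction volume implies an average fee of ~0.5% of value moved.
velocity_per_year = 1.535 * 4        # turnovers of the monetary base per year
security_budget = 0.03               # fraction of the base paid in fees p.a.
fee_rate = security_budget / velocity_per_year
print(round(fee_rate * 100, 2))      # 0.49 (% of transaction value)
```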



The problem here is that we are in effect setting the price in advance rather than letting the market decide. This approach can work to set a minimum amount of revenue for miners at a much lower level, where an open ended increase in the block size is constrained by a minimum amount of fees.

Concerned that blockchain bloat will lead to centralization? Storing less than 4 GB of data once required the budget of a superpower and a warehouse full of punched cards. https://upload.wikimedia.org/wikipedia/commons/8/87/IBM_card_storage.NARA.jpg https://en.wikipedia.org/wiki/Punched_card

gmaxwell (Moderator, Legendary, Activity: 3178, Merit: 4298)
Re: review of proposals for adaptive maximum block size - February 22, 2013, 12:18:01 AM #5

Any "sum of transaction fees" reduces to miners picking whatever they want, since miners can stuff blocks with 'fake' transaction fees, with only the moderate risk of them getting sniped in the next block.



In any case, this thread fails on the ground that the starting criteria doesn't mention keeping Bitcoin a _decentralized_ system. If you don't have that as a primary goal, then you can just drop this block-chain consensus stuff and use open transactions and get _vastly_ better scaling.



All those fixed parameters in the 'proposals' are stinking handwaving. If you only care about preventing the fee race to the bottom, you make the change in maximum block size be half the change in difficulty on the upside, 100% of the change on the downside, clamped to be at least some minimum. Doing so eliminates the fee collapse entirely by forcing miners to spend real resources (more hashpower) to drive the size up... but it doesn't do anything to prevent the loss of decentralization, so I don't know that it solves anything.
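One reading of that rule, as a sketch (the exact clamping and the 1 MiB floor are my assumptions; gmaxwell only says "some minimum"):

```python
MIN_SIZE_MIB = 1.0  # hypothetical floor; the suggestion leaves it unspecified

def adjust_size(size_mib, difficulty_change):
    """difficulty_change is the ratio new_difficulty / old_difficulty."""
    if difficulty_change >= 1.0:
        # Upside: block size grows by half the proportional difficulty rise,
        # so miners must add real hashpower to buy extra block space.
        size_mib *= 1.0 + (difficulty_change - 1.0) / 2.0
    else:
        # Downside: size falls by the full proportional decrease.
        size_mib *= difficulty_change
    return max(MIN_SIZE_MIB, size_mib)

print(round(adjust_size(2.0, 1.2), 2))   # +20% difficulty -> +10% size: 2.2
print(round(adjust_size(2.0, 0.5), 2))   # -50% difficulty -> clamped: 1.0
```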



Ari (Member, Activity: 75, Merit: 10)
Re: review of proposals for adaptive maximum block size - February 22, 2013, 02:37:49 AM #6

I like Gavin's proposal. (I mean his actual proposal, not the "half-baked thought" quoted above.)



No hard limit, but nodes ignore or refuse to relay blocks that take too long to verify. This discourages blocks that are too large, and "spam" blocks containing lots of transactions not seen on the network before.



This might create an incentive to mine empty blocks. To discourage this, in the case of competing blocks, nodes should favor the block that contains transactions they recognize, and ignore (or delay relaying) the empty block.



markm (Legendary, Activity: 2604, Merit: 1038)
Re: review of proposals for adaptive maximum block size - February 22, 2013, 03:38:08 AM #7

Quote from: Ari on February 22, 2013, 02:37:49 AM I like Gavin's proposal. (I mean his actual proposal, not the "half-baked thought" quoted above.)



No hard limit, but nodes ignore or refuse to relay blocks that take too long to verify. This discourages blocks that are too large, and "spam" blocks containing lots of transactions not seen on the network before.



I do not agree that it necessarily has any effect at all on blocks that are "too large", depending on who mines them and who they are directly connected to without intermediation of any of the proposed prejudiced nodes.



The top 51% of hash power can pump out blocks as huge as they choose to, everyone else is disenfranchised. You might as well try to stop a 51% attack by ignoring or refusing any block that contains a payment to a known major manufacturer of ASICs so the 51% attacker won't be able to buy enough ASICs to reach 51%. Oops, too late, they already are there. They but lack an opportunity for "spontaneous order" to hook them up into a "conspiracy" that is simply "emergent", not at all pre-meditated - in particular not premeditated-as-in-foreseen* by whoever got rid of the cap on block size, since they would seem to have apparently imagined some completely different "spontaneous order" than that in which whoever has the most [brute, in this case] force wins?



51% attackers can already do plenty of nasty things, now we're gonna hand them carte blanche to spamflood the whole network into oblivion too?



* No, wait, it has been foreseen, so surely if they implement it anyway it is, literally, pre-meditated, isn't it?



-MarkM-


Free website hosting with PHP, MySQL etc: http://hosting.knotwork.com/ (Browser-launched Crossfire client now online: select CrossCiv server for Galactic Milieu)

markm (Legendary, Activity: 2604, Merit: 1038)
Re: review of proposals for adaptive maximum block size - February 22, 2013, 05:00:43 AM (last edit: February 22, 2013, 06:06:20 AM by markm) #9

Quote from: solex on February 22, 2013, 04:22:59 AM bullioner, have you seen this proposal? https://bitcointalk.org/index.php?topic=144895.msg1541892#msg1541892 What do you think..?

Same objection as above; it's even right there below it where you pointed to.



However I am starting to get a sense that maybe part of why it is not blatant to everyone could be an artifact of scale.



It might be that the sheer size/power/longevity/pocketdeepness of "offender" one imagines it might/would take, as compared to the scale one might be contemplating as an organic progression of "growth of our existing network" or of "adoption rates" is very large.



If you persist in thinking of new players entering the game as newborn from tiny startup / granny's basement, it might not seem oh so very likely to be a problem; after all, who are these basement-dwellers compared to the likes of Deepbit and Eligius and other "massive" pools.



But in reality, in the larger scheme of things, our vaunted "most difficult proof of work on the planet", our entire bitcoin network, is puny, tiny, trivially so.



How many puny little ASIC-manufacturing startups are there and how many of their products are deployed so far?



How much "smart money" delayed getting into bitcoins for a year due to there being no point in investing in "to be obsolete any moment now, wait for it, any moment... coming up... wait for it..." new hardware? Have you seen any indication yet that such gear could impact difficulty significantly? How many hundreds of millions of dollars, really, does all their currently-in-production product really add up to so far?



Once you blow a few hundred million on a few regional datacentres doesn't it just make sense to balloon/skyrocket blocksize hard and fast to clear all the obsolete players out of the game? What sense is there in blowing hundreds of millions of dollars on securing a network that cannot even handle your own hundreds of millions of users, (you are, like, facebook or google or yahoo or microsoft scale of userbase, or even something really out of left field like a retirement fund with that kind of number of shareholders considering monetising your "user" (aka shareholder) base by controlling the "pipe" through whch others might be willing to pay to get exposure to them, gosh knows. Left field is a vast, vast field, even without whatever parts of it might also be "outside the box"), but imagining there are no "big boys" out there is maybe rather naive.



Every player, all players, in this current puny prototype prevision of what this kind of software could potentially accomplish, even all of us combined, add up to trivial, tiny, puny, still, even if every chip of every wafer of every ASIC in the pipelines that we know of turns out to work perfectly with no error regions etc (100% yield).



Pretending we are oh so capable of swimming with big fish, oh so tough and resilient, that we should throw away our shark cage seems insanely naive, reckless, foolhardy, stupid, exactly what the sharks hope we will do.



-MarkM-



fergalish (Sr. Member, Activity: 440, Merit: 250)
Re: review of proposals for adaptive maximum block size - February 22, 2013, 03:23:19 PM #10

You'll all have to excuse my stupidity, but what's wrong with unlimited (*) blocks? Let each miner set his own transaction fees and free market competition will ensure that transaction fees are kept low while still keeping the network secure. Surely putting some artificial limit on blockchain size in order to drive up fees is little different to central bankers imposing QE or inflation targets on a currency.



Of course, bitcoin mining will migrate to where there is cheap energy, but this might have a beneficial side effect - more effort will be put into building cheap energy sources (read: renewable (**)) so any given geographical region can contribute to mining and so benefit from incoming tx fees ("free money" for want of a better description).



(*) up to some reasonable limit that shouldn't ever be reached, which will depend on how widespread bitcoin adoption is: Visa handles a peak of about 10,000 tx per second; assume normal use is 3,000 tx/sec and 250 bytes per transaction, and you'd need roughly 0.5 GB blocks every 10 minutes on average. For the immediate future, I think there's no reason the block size should increase beyond 10 MB at the outside.



(**) hopefully renewable but admittedly, not necessarily so.
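The back-of-the-envelope figure in footnote (*) can be checked with a few lines. This is only a sketch of the footnote's own assumptions (3,000 tx/sec, 250 bytes/tx, one block per 10 minutes), not a measurement:

```python
# Check of the block size needed for the "Visa normal use" scenario
# in footnote (*): 3000 tx/sec, 250 bytes/tx, one block per 600 sec.
TX_PER_SEC = 3000
BYTES_PER_TX = 250
BLOCK_INTERVAL_SEC = 600

bytes_per_block = TX_PER_SEC * BYTES_PER_TX * BLOCK_INTERVAL_SEC
gb_per_block = bytes_per_block / 1e9

# 450,000,000 bytes, i.e. ~0.45 GB per block: roughly the 0.5 GB quoted.
print(f"{gb_per_block:.2f} GB per block")  # prints "0.45 GB per block"
```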

hazek (Legendary, Activity: 1078, Merit: 1001)
Re: review of proposals for adaptive maximum block size, February 22, 2013, 09:47:25 PM (last edit: February 22, 2013, 11:32:33 PM), #11

The name of the game is keeping block space as scarce as possible, encouraging fees that will eventually have to cover the costs of securing the network, but not making it so scarce that Bitcoin cannot scale better.



It's impossible to know how much in fees must be collected on average in a block, because of the fluctuating value of bitcoins. It's impossible to know what amount of block space is scarce enough but not too scarce. But then, it was also impossible to know that 50 BTC would pay for adequate security, let alone, now after the halving, that 25 BTC would. These are simply rules that the market took as given and is now reacting to.



And from this I conclude that whatever adaptive algorithm we pick, it will be entirely arbitrary, and the market will simply have to adjust to it and make it work. As long as the relationships inside the algo produce the right incentives, the market will find an equilibrium, and that's the best we can hope for.



Ok so with this out of the way, there are a few predictions that can be made and incorporated into such an algorithm.



1) when the limit is reached, fees will very likely go up, but only to a point (an equilibrium will likely form between the number of transactions per block and how much the market is willing to pay in fees per transaction), so the block size limit must be increased before we reach that point

2) when the limit is increased, fees will go down, up until the limit is reached again, and then 1) repeats

3) when the limit is increased while more fees are being collected, it's highly likely the value of bitcoins has also risen

4) when the value of bitcoins rises, fewer of them per transaction are required to secure the network

5) users will always try to pay as little in fees as possible





With this in mind we can now build an algo with the right connections that the market can then use and adequately adjust to. Remember this is all arbitrary just like the 50BTC block reward.



The first rule of my proposal is that the block size limit must induce enough fees to cover the security costs before an increase is allowed. Second, after an increase it should take fewer fees per transaction to secure the network, since an increase likely means higher-valued bitcoins. Third, if this reverses and fees fall under a certain threshold, the size limit must be reduced, and the fee-per-transaction requirement thereby increased, until fees are back above the bottom threshold. Fourth, this is adjusted in sync with the mining difficulty retarget schedule.



Arbitrarily, I'll pick 50 BTC per block as sufficient to ensure the security of the network forever. As the subsidy decreases, correspondingly more fees must be collected before the block size limit can be increased. Eventually fees will have to amount to the entire 50 BTC.



Now for the juicy part, how to relate the block size increases with fees.



Let's say, again arbitrarily, that if subsidy + fees per block, averaged over the last evaluation period of 2016 blocks, exceeds 50 BTC, the block size limit is doubled; because more transactions now fit into a block, the fees per transaction needed to reach the threshold are lower, in line with the theory that more activity means a rising value of bitcoins. And every time subsidy + fees per block, averaged over the last 2016 blocks, falls under 25 BTC, the block size limit is divided by 2, down to a minimum of 1 MB.
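For concreteness, the adjustment rule just described can be sketched in a few lines. This is only an illustration of the proposal (function and constant names are mine, not from any client), using the post's own arbitrary 50/25 BTC thresholds:

```python
# Sketch of the proposed retarget rule: every 2016 blocks, compare the
# average subsidy + fees per block over the period against two thresholds.
UPPER_BTC = 50.0             # above this average: double the limit
LOWER_BTC = 25.0             # below this average: halve the limit
MIN_LIMIT_BYTES = 1_000_000  # 1 MB floor

def retarget_limit(current_limit_bytes: int, avg_reward_btc: float) -> int:
    """Return the new max block size after one 2016-block evaluation period."""
    if avg_reward_btc > UPPER_BTC:
        return current_limit_bytes * 2
    if avg_reward_btc < LOWER_BTC:
        return max(current_limit_bytes // 2, MIN_LIMIT_BYTES)
    return current_limit_bytes

# Example: fees push the average reward per block to 51 BTC -> limit doubles.
print(retarget_limit(1_000_000, 51.0))  # prints 2000000
```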





So in practice this works out to at most 12.5% of the entire monetary base being spent every year on network security, regardless of the value of a single bitcoin. It is perfectly reasonable for this to be a constant, since the more bitcoins are worth, the more lucrative an attack becomes, and the more should be spent on security in terms of value.

If we reached the limit right now, while the subsidy is still 25 BTC, then with a maximum of 4200 transactions per block this works out to 0.00595238 BTC in fees per transaction before the limit is doubled the first time: perfectly reasonable at the current exchange rate. When the limit has been doubled 5 times it will allow a maximum of 134400 transactions per block, which, if we reach the required 25 BTC in average fees over the previous 2016 blocks, amounts to 0.00018601 BTC per transaction. If 1 BTC is $1000 by then, this is still just about 20 cents per transaction.

If at any time fees per block, on average, start falling below 50% of (50 BTC - subsidy), the block size limit is reduced by half.
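The per-transaction figures above can be verified directly; a quick check under the same assumptions (4200 transactions per 1 MB block, 25 BTC of fees needed on top of the current 25 BTC subsidy):

```python
# Verify the fee-per-transaction arithmetic from the proposal above.
TX_PER_MB_BLOCK = 4200   # assumed max transactions in a 1 MB block
FEE_TARGET_BTC = 25.0    # fees needed per block: 50 BTC threshold - 25 BTC subsidy

fee_now = FEE_TARGET_BTC / TX_PER_MB_BLOCK
print(f"{fee_now:.8f} BTC per tx")               # prints "0.00595238 BTC per tx"

# After five doublings the limit is 32 MB: 4200 * 2**5 = 134400 transactions.
tx_after_five = TX_PER_MB_BLOCK * 2 ** 5
fee_later = FEE_TARGET_BTC / tx_after_five
print(f"{fee_later:.8f} BTC per tx")             # prints "0.00018601 BTC per tx"
print(f"${fee_later * 1000:.2f} at $1000/BTC")   # prints "$0.19 at $1000/BTC"
```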





So essentially fees have a ceiling: once it's reached, miners get more breathing room and fees drop. Once that ceiling is reached again, as indicated by the fees collected, miners again get more breathing room. If there is ever too much room and fees start getting lower, the space is made scarcer again in order to encourage higher fees.



What do you think?



p.s.: I have no clue whether the numbers I used for max transactions in a block, given the 1 MB size limit, are correct, so please let me know if that is wrong, and also if doubling the size limit doesn't mean doubling the max transactions.

My personality type: INTJ - please forgive my weaknesses (not naturally in tune with others' feelings; may be insensitive at times; tend to respond to conflict with logic and reason; tend to believe I'm always right)



If however you enjoyed my post: 15j781DjuJeVsZgYbDVt2NZsGrWKRWFHpp

conv3rsion (Sr. Member, Activity: 310, Merit: 250)
Re: review of proposals for adaptive maximum block size, February 22, 2013, 11:15:38 PM, #13

That was great, Hazek. The only problem I see (as raised by others) is that there is nothing there to "protect decentralization", because as long as the number of transactions keeps rising, even at increasing cost per block, so too can the block size. The biggest objection I've seen to increasing the block size using an adaptive algorithm like this is the possibility of resource needs increasing to the point of disenfranchising some portion of Bitcoin users. Personally I don't feel this is a concern, because



A) I don't believe it is in Bitcoin's interest for the majority of its users to be running full nodes, which is why I point newly interested friends and family members to online wallets

B) I do believe that even for HUGE numbers of transactions (several orders of magnitude larger than now), interested parties with minimal resources could always have either direct or pooled access to full nodes, protecting their interests

C) I believe that Bitcoin will always remain as decentralized as it "needs to be". This is because those concerned with it becoming too centralized can expend resources (individually or as a group) to make it become less centralized.





The best part of your solution is an implicit agreement with users: if the block size, and therefore the "resource needs", ever increase in the future, so too has the value of bitcoins. If bitcoins are worth several thousand USD each, I'm more than happy to purchase many terabytes of storage to continue acting as a full node regardless of transaction volume, and I doubt I'm alone there.

hazek (Legendary, Activity: 1078, Merit: 1001)
Re: review of proposals for adaptive maximum block size, February 22, 2013, 11:22:17 PM, #14

Quote from: conv3rsion on February 22, 2013, 11:15:38 PM: That was great Hazek. The only problem I see (as raised by others) is that there is nothing there to "protect decentralization" because as long as the numbers of transactions are continually rising, even at increasing costs per block, so too can block size.



Yeah I know but I at least wanted to see if there's a possible middle ground as opposed to just lifting that limit entirely which I'm absolutely against.



And as you say, if enough fees are collected, this means that all Bitcoin users are that much richer and can afford more hardware to handle the extra storage costs, which with this model shouldn't get out of hand at all, because it connects what users are willing to pay with how big the blocks can get.


conv3rsion (Sr. Member, Activity: 310, Merit: 250)
Re: review of proposals for adaptive maximum block size, February 22, 2013, 11:35:37 PM, #15

Quote from: hazek on February 22, 2013, 11:22:17 PM:

Yeah I know but I at least wanted to see if there's a possible middle ground as opposed to just lifting that limit entirely which I'm absolutely against.





But is anyone actually advocating for that? Nobody wants a miner to be able to make a 100 GB block today, full of free transactions. People just want confidence that Bitcoin can be more than a crappy replacement for wire services with a stupidly small maximum number of transactions (at costs that eliminate many potential use cases).

MoonShadow (Legendary, Activity: 1708, Merit: 1000)
Re: review of proposals for adaptive maximum block size, February 23, 2013, 02:38:17 AM, #16

Quote from: hazek on February 22, 2013, 09:47:25 PM:

So essentially fees have a ceiling, once it's reached miners get more breathing room and fees will drop. Once that ceiling is reached again indicated by the fees collected miners again get more breathing room. If ever there's too much room and fees start getting lower then the space is made scarce in order to encourage higher fees.



What do you think?





While fees have a ceiling, they also have a floor, which I think is too high. 12.5% of the monetary base per year? For a mature economy that would be way too high. I would predict that out-of-band methods would undercut the main blockchain for just about everything, functionally reducing the block size to under 1 MB while users of every size and class desperately attempt to avoid those fees. On the flip side, this would also make institutional mining (like my example in another thread of Wal-Mart sponsoring mining at a loss for the purpose of processing their own free transactions from customers) the dominant form of security. I'm not sure if that is good or bad overall, but I would consider anything over 3% of the monetary base per year to be excessive for any mature economy. Anything more opens up an opportunity for a cryptocurrency competitor to undercut Bitcoin outright and eat its lunch.

Keep in mind that the cost overhead of the network functions like a tax, in nearly the same way that inflation of a fiat currency functions like a hidden tax upon the economic base that uses & saves in it. While that's not a perfect comparison in the long run for bitcoins, it should be evident that our network costs should never exceed 3%, and that a better target would be 1.5% or 2%. Of course, that is a metric relative to both the size of the economy (which we cannot know in advance) and the actual block subsidy (which we can know in advance).



So to modify your proposal, I'd say that until the block subsidy drops down into that 2% range, the range for doubling or halving the block size limit should be between the actual subsidy plus 5% and double the subsidy.
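The percentages being argued over here follow from simple arithmetic; a rough check, assuming the full 21M BTC monetary base and one block every ten minutes (the helper name is mine, for illustration only):

```python
# Annual security spend as a percentage of the monetary base, for a
# given per-block reward (subsidy + fees), assuming 21M BTC in total
# and one block per 10 minutes (52,560 blocks per year).
MONETARY_BASE_BTC = 21_000_000
BLOCKS_PER_YEAR = 365 * 24 * 6  # 52,560

def annual_security_pct(reward_per_block_btc: float) -> float:
    return reward_per_block_btc * BLOCKS_PER_YEAR / MONETARY_BASE_BTC * 100

print(f"{annual_security_pct(50):.1f}%")  # hazek's 50 BTC ceiling: prints "12.5%"
print(f"{annual_security_pct(12):.1f}%")  # ~12 BTC/block lands near the 3% bound: prints "3.0%"
```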
"The powers of financial capitalism had another far-reaching aim, nothing less than to create a world system of financial control in private hands able to dominate the political system of each country and the economy of the world as a whole. This system was to be controlled in a feudalist fashion by the central banks of the world acting in concert, by secret agreements arrived at in frequent meetings and conferences. The apex of the systems was to be the Bank for International Settlements in Basel, Switzerland, a private bank owned and controlled by the world's central banks which were themselves private corporations. Each central bank...sought to dominate its government by its ability to control Treasury loans, to manipulate foreign exchanges, to influence the level of economic activity in the country, and to influence cooperative politicians by subsequent economic rewards in the business world."



- Carroll Quigley, CFR member, mentor to Bill Clinton, from 'Tragedy And Hope'

markm (Legendary, Activity: 2604, Merit: 1038)
Re: review of proposals for adaptive maximum block size, February 23, 2013, 05:37:56 AM, #18

Quote from: hazek on February 22, 2013, 09:47:25 PM: Let's say, again arbitrarily, that if subsidy + fees on average in the last evaluation period of 2016 blocks exceeds 50BTC, block size limit is doubled and because more transactions fit into a block the fees per transaction to reach the threshold are now less and inline with the theory that more activity means rising value of bitcoins. And every time subsidy + fees per block on average in the last 2016 falls under 25BTC, the block size limit is divided by 2 to the minimum of 1mb.



Halving ought not be needed: if the max is truly a properly hard max, and truly never raised too much, then once it goes up it should never need to go down.



For one thing, anyone who cannot handle the current max will be out of business when the current max actually gets fully used. In fact maybe even before then, as just a few stretches of full blocks in a row each day might choke them up too much to keep up, and if such stretches are long enough and frequent enough, they might never catch up on the backlog before the next solid stretch of back-to-back max-size blocks arrives.



So basically, modulo any wiggle room (like max being big enough that only at Christmas or whatever "busy season" do you ever get close to max sized blocks, let alone all in a row day in day out) everyone who cannot handle the current max is out of the game long before a new max comes along.



So whatever the max is at any given period of history, it is already small enough that everyone in the game at that point in history can handle that size.



Therefore there is no need to lower it unless world war, global catastrophe or somesuch causes a devolution back into the barbaric ancient times when people had to get by with a lower max block size.



-MarkM-



EDIT: We are talking about ABSOLUTE MAX BLOCK SIZE here, not the size miners actually happen to make for maximising their profit.


Free website hosting with PHP, MySQL etc: http://hosting.knotwork.com/