Peter R (Legendary; Activity: 1162, Merit: 1007)

Re: Gold collapsing. Bitcoin UP. July 04, 2015, 10:04:17 PM

Last edit: July 07, 2015, 02:11:46 PM by Peter R (#28121)

Quote from: cypherdoc on July 04, 2015, 08:02:47 PM
the increased frequency of SPV mining has occurred precisely b/c of the more consistently filled 1MB blocks and the deviant defensive strategies being employed to navigate that congestion... otherwise, you'd have to argue that what once wasn't a problem with blocks <1MB is now occurring precisely b/c 1MB is a magic number at which blocks are deemed "too big". What are the chances of that?



Cypher, you're brilliant!



Evidence of an effective blocksize limit: no protocol-enforced limit required



In this post we show that, given a few simplifying assumptions*, the network will automatically enforce an effective blocksize limit. This effective limit scales automatically with improvements in technology, without requiring an explicit limit enforced at the protocol level.



Background: We learned from last night's fork that miners are incentivized to mine "empty" blocks while they work to process the previously solved block. This increases their revenue. However, it also means that the maximum possible value of the average blocksize is reduced in proportion to the frequency of these empty blocks. For example, if 10% of blocks were guaranteed to be empty, the maximum value of the average blocksize would presently be 900 kB rather than 1 MB. We show that as the average size of blocks increases, the percentage of empty blocks increases in direct proportion, thereby providing a counterbalancing force that serves to limit the blockchain's growth rate. We will refer to the "maximum value of the average blocksize" as the effective blocksize limit.



Let τ be the time it takes to process a typical block and let T be the average block time (10 min). [CLARIFICATION: τ includes all delays from the moment the miner has enough information to begin mining (an empty block) on the block header, to the moment he's processed the previous block, created a new non-empty block template, and has his hashing power working on that new non-empty block.] The fraction of time the miner is hashing on an empty block is clearly τ / T; the fraction of time the miner is hashing on a non-empty block is 1 - τ / T = (T - τ) / T. We will assume that every miner applies the same policy: producing empty SPV blocks for 0 < t < τ, and blocks of size S' for t > τ.



Under these conditions, the expectation value of the blocksize is equal to the expectation value of the blocksize on the interval 0 < t < τ, plus the expectation value of the blocksize on the interval τ < t < T.



S_effective = ~0 · (τ / T) + S' · [(T - τ) / T]

            = S' [(T - τ) / T]     (Eq. 1)



The time, τ, it takes to process a block is not constant; rather, it is assumed to depend linearly** on the size of the block. Approximating the size of the previous block as S', we get:



τ = k S'



where k is the number of minutes it takes, on average, to process 1 MB of transactional data. Substituting this into Eq. (1) yields:



S_effective = S' (T - k S') / T

            = S' - (k/T) S'²



This is the equation of a concave-down parabola, as shown:

[figure: plot of S_effective vs. S', a concave-down parabola]

To find the maximum of this curve, we take its derivative with respect to S' and set it to zero:



dS_effective / dS' = 1 - 2 k S' / T = 0



Solving the above equation for S' gives



S' = T / (2 k)



This value of S' is the blocksize that maximizes the transactional capacity of the network. Substituting this result back into our equation for the effective blocksize limit gives:



S_effective = T / (4 k)



Some numbers:



Assume it takes on average 15 seconds*** to process a typical 1 MB block (k = 0.25 min/MB). Since T = 10 min, the maximum average blocksize (network capacity) is limited to:



S_effective = T / (4 k) = (10 min) / (4 × 0.25 min/MB)

            = 10 MB.



QED. We've shown that there exists a limit on the maximum value of the average blocksize, due to the time it takes to process and verify a block, irrespective of any protocol-enforced limits.
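As a sanity check on the algebra, here is a small numerical sketch (Python; T and k take the values assumed in this post, and the 0.01 MB scan step is arbitrary):

```python
# Numerical check of the effective blocksize limit derivation.
# Assumed values from the post: T = 10 min block time, and
# k = 0.25 min/MB processing rate (the 15 s per MB guess).

T = 10.0   # average block interval, minutes
k = 0.25   # minutes needed to process 1 MB of transaction data

def s_effective(s_prime):
    """Expected average blocksize when miners hash on empty blocks
    for tau = k * s_prime minutes out of every T minutes."""
    return s_prime - (k / T) * s_prime ** 2

# Scan candidate blocksizes from 0 to 40 MB and pick the best.
candidates = [i * 0.01 for i in range(4001)]
best = max(candidates, key=s_effective)

print(f"optimal S' = {best:.2f} MB")              # analytic: T/(2k) = 20 MB
print(f"max S_eff  = {s_effective(best):.2f} MB") # analytic: T/(4k) = 10 MB
```

The scan reproduces the analytic maximum: with these assumed values, the average blocksize peaks at 10 MB when miners target S' = 20 MB.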





*There are a few other assumptions that I don't detail above, but it's Saturday and I'm enjoying the sunshine.

**Here we're also assuming that the number of ECDSA verify operations in a block is roughly proportional to the block's size.

***This is a guess. We should estimate it by looking at the ratio of empty blocks to non-empty blocks produced by F2Pool.









**********UPDATE**********



Quote from: TheRealSteve on July 05, 2015, 12:23:37 AM
Over the last 27,027 blocks (basically since Jan 1st 2015), f2pool-attributed blocks: 5241, of which coinbase-only: 139. For antpool, this is 4506 / 246.
See also: Empty blocks [bitcointalk.org]

Awesome! Thanks!!



We can then estimate the average effective time it takes to process blocks as



τ ≈ T · [N_empty / N_nonempty]

  ≈ T · [N_empty / (N_total - N_empty)]



F2Pool:



τ ≈ (10 min) × [139 / (5241 - 139)] = 16.3 seconds



AntPool:



τ ≈ (10 min) × [246 / (4506 - 246)] = 34.6 seconds
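These two estimates are easy to reproduce; here is a quick sketch (Python, using the pool counts quoted above):

```python
# Estimate tau, the effective block processing time, from the
# fraction of coinbase-only ("empty") blocks a pool produces:
#   tau ~= T * N_empty / (N_total - N_empty)

T_SECONDS = 600  # average block time (10 minutes)

def estimate_tau(n_empty, n_total):
    """Seconds a pool spends hashing on an empty block, per solved block."""
    return T_SECONDS * n_empty / (n_total - n_empty)

# Counts quoted from TheRealSteve (blocks since ~Jan 1st 2015)
print(f"F2Pool:  {estimate_tau(139, 5241):.1f} s")   # ~16.3 s
print(f"AntPool: {estimate_tau(246, 4506):.1f} s")   # ~34.6 s
```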

Run Bitcoin Unlimited (www.bitcoinunlimited.info)

justusranvier (Legendary; Activity: 1400, Merit: 1006)

Re: Gold collapsing. Bitcoin UP. July 04, 2015, 11:35:25 PM (#28124)



Thoughts on how fraud proofs could make it possible for SPV clients to reject an invalid chain, even if the invalid chain contains the most PoW:

https://gist.github.com/justusranvier/451616fa4697b5f25f60

(some modifications to the Bitcoin protocol required)

thezerg (Legendary; Activity: 1246, Merit: 1000)

Re: Gold collapsing. Bitcoin UP. July 05, 2015, 12:31:33 AM (#28128)

Quote from: justusranvier on July 04, 2015, 11:35:25 PM
Thoughts on how fraud proofs could make it possible for SPV clients to reject an invalid chain, even if the invalid chain contains the most PoW:
https://gist.github.com/justusranvier/451616fa4697b5f25f60
(some modifications to the Bitcoin protocol required)

Your modification to require that inputs state which block each comes from is a clever way to shrink the "address does not exist" proof. But I don't understand your subsequent complexity. If the txn input states that block B contains the UTXO, then the invalidity proof is simply to supply B, right?

justusranvier (Legendary; Activity: 1400, Merit: 1006)

Re: Gold collapsing. Bitcoin UP. July 05, 2015, 12:41:24 AM (#28129)

Quote from: thezerg on July 05, 2015, 12:31:33 AM
If the txn input states that block B is the UTXO then the invalid proof is simply to supply B, right?

That's one way to do it; however, even this can be shortened.



Right now with all the blocks < 1 MB it's not really a big deal to supply the entire block to prove that the referenced transaction doesn't exist, but it'd be nice to not require the entire block especially for when blocks are larger.



By adding a rule that requires all transactions in new blocks to be ordered by their hash, you don't need to supply the entire block to prove that a transaction doesn't exist.



It would be good to have that ordering requirement in place before blocks are allowed to grow, to make sure that fraud proof size is bounded.
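To make the size argument concrete, here is a toy sketch (Python; not actual Bitcoin code: the Merkle paths are omitted and short fake hashes stand in for real txids) of why hash-ordering lets an absence proof consist of just two bracketing entries instead of the whole block:

```python
import bisect

def absence_proof(sorted_tx_hashes, target):
    """If a block's transactions are sorted by hash, proving that
    `target` is NOT in the block only needs the two adjacent hashes
    that bracket it (plus their Merkle paths, omitted here)."""
    i = bisect.bisect_left(sorted_tx_hashes, target)
    if i < len(sorted_tx_hashes) and sorted_tx_hashes[i] == target:
        return None  # target is present; no absence proof exists
    left = sorted_tx_hashes[i - 1] if i > 0 else None
    right = sorted_tx_hashes[i] if i < len(sorted_tx_hashes) else None
    return (left, right)  # two hashes instead of the whole block

# Toy example with short fake "hashes"
block = sorted(["0a", "3f", "7c", "b2", "e9"])
print(absence_proof(block, "5d"))  # ('3f', '7c'): 5d would sit between them
print(absence_proof(block, "7c"))  # None: the transaction is present
```

The proof stays a constant two entries (plus logarithmic Merkle paths) regardless of block size, which is the bounded-size property argued for above.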

thezerg (Legendary; Activity: 1246, Merit: 1000)

Re: Gold collapsing. Bitcoin UP. July 05, 2015, 01:31:40 AM (#28132)

Quote from: justusranvier on July 05, 2015, 12:41:24 AM
That's one way to do it; however, even this can be shortened.



Right now with all the blocks < 1 MB it's not really a big deal to supply the entire block to prove that the referenced transaction doesn't exist, but it'd be nice to not require the entire block especially for when blocks are larger.



By adding a rule to new blocks that require all the transactions to be ordered by their hash, you don't need to supply the entire block to prove that the transaction doesn't exist.



It would be good to have that ordering requirement in place before blocks are allowed to grow to make sure that fraud proof size is bounded.


Makes sense... I'd recommend a quick line or two in your blog to explain that:



"In order to reduce the size of the fraud proof needed to show that a transaction input does not exist, additional information must be added to Bitcoin blocks to indicate the block which is the source of each outpoint used by every transaction in the block.



A node can provide the source block to the SPV client to prove or disprove the existence of this transaction. But with a few more changes we can provide only a subset of the source block. This may become very important if block sizes increase."




TPTB_need_war (Sr. Member; Activity: 420, Merit: 255)

Re: Gold collapsing. Bitcoin UP. July 05, 2015, 01:35:33 AM (#28133)
Last edit: July 05, 2015, 02:30:24 AM by TPTB_need_war



Quote from: kazuki49 on July 04, 2015, 04:51:06 PM
Freemarket, in 2010 everyone could buy thousands of Bitcoin for almost nothing. What hindered it was, besides being relatively unknown at that point in time, that few people actually believed cryptocurrencies could be a thing. With Monero it's almost the same, the difference being it's swimming in a sea of shitcoins and not many can see its potential. It's the second Cryptonote coin, the first being heavily premined, and it was launched with an MIT licence, so there is absolutely no merit to claims that Monero stole anything; it's like saying Ubuntu stole code from Debian, or that Apple stole from FreeBSD. So even though Monero's market cap is low, few people will actually bother buying a large stack because it is not a 100% certain bet, but it's clear there is nothing close to Monero, as Zerocash/Zerocoin is vaporware and Bitcoin sidechains are like dragons.



My point, where I employed the word "clusterfuck" to describe the hundreds of Cryptonote clones and noted that Monero's marketing (on these forums) to some extent had to vilify other CN clones in order to assert its dominance over them, is that I would instead have preferred to add features to CN that would naturally assert dominance over the other CN clones. It felt to me like Monero used strong-armed community tactics to gain more critical mass than the other CN clones, rather than capabilities innovation (though there was a lot of refinement, which I assume includes many fine-grained performance improvements). And I am nearly certain this lack of outstanding capabilities, other than the on-chain rings, is why Monero is not more widely adopted and will be the cause of Monero's stunted growth (I say this with specific knowledge of capabilities that I think will subsume Monero very soon). That is precisely why I would not prematurely release those features in a whitepaper for thousands of clones to implement simultaneously. And yet people criticize me for not spilling the beans before the software is cooked.



The marketing battle is not against the other "shitcoins" thus differentiating Monero from shit. Rather the battle is against Bitcoin core on who is going to own the chain that most of the BTC migrates to.



Also most of the interest in altcoins is not ideological, but rather speculative. We are in a down market until BTC bottoms this October, so Monero is getting mostly ideological investment, not speculative fever. This will turn after October, but it might be too late for Monero depending on the competition that might arise in the interim. However, I tend to think Monero will get a big boost after October in spite of any new competition, because it is a more stable codebase. As smooth pointed out, the greatest threat of breakage is implementation error. It would behoove Monero to be the first CN coin to apply my suggested fix to ensure that combinatorial analysis of partially overlapping rings can't occur.



P.S. CN is very important. The block size debate is iatrogenesis: any 'cure' is worse than the illness.

UnunoctiumTesticles, iamback, contagion, formerly AnonyMint, TheFascistMind, etc.

laurentmt (Sr. Member; Activity: 386, Merit: 251)

Re: Gold collapsing. Bitcoin UP. July 05, 2015, 01:53:15 AM (#28136)
Last edit: July 05, 2015, 02:30:20 AM by laurentmt

Quote from: Peter R on July 04, 2015, 07:45:05 PM
What this shows is that since the subtracted term, τ (1 - P_valid), is strictly positive, the miner's expectation of revenue, <V>, is maximized if the time to verify the previous block is minimized (i.e., if τ is as small as possible).

Actually, <V> is also maximized if P_valid = 1 (or P_valid as close as possible to 1).

How to reach this result? My humble proposal: make a deal with a few mining pools. Participants never push invalid blocks to other participants, and blocks received from the cartel aren't checked before hashing on a new block.



Conclusion: As the average blocksize gets larger, the time to verify the previous block also gets larger. This means miners will be motivated either to improve how quickly their nodes can perform the ECDSA operations needed to verify blocks, or to trick the system.



EDIT:

Quote from: Peter R
Am I being sensitive or is this an unnecessarily spiteful reply from Greg Maxwell?

Well, he seems a bit upset for now, but I think his message is close to what I've tried to suggest with my comment.

We must analyze all the possibilities before jumping to a conclusion which backs our initial hypothesis. The point is valid for all of us, whatever our opinion on this blocksize issue.

thezerg (Legendary; Activity: 1246, Merit: 1000)

Re: Gold collapsing. Bitcoin UP. July 05, 2015, 02:06:05 AM (#28137)

Quote from: Peter R on July 05, 2015, 01:46:56 AM



Quote: ...
You've shown that you can throw a bunch of symbolic markup and mix in a lack of understanding and measurement and make a pseudo-scientific argument that will mislead a lot of people, and that you're willing to do so or too ignorant to even realize what you're doing.

Am I being sensitive or is this an unnecessarily spiteful reply from Greg Maxwell?

It's a strategy (implemented unconsciously by many) to limit participation to a select few. Unfortunately it tends to create a situation where only similar personalities contribute, which is where we are today with the core devs, Gavin excepted.





I read his 21 ms validation number, but it's weird, because just weeks ago I was wondering why it was taking so long to sync a measly week of blockchain data, and I concluded that either the P2P code is complete garbage (compared to BitTorrent, for example) or the validation cost is high (given my fan speed, I assumed it was validation). And if validation is so fast, why would these pools have custom code to skip it?



It will be interesting to look at the stats-gathering mode he mentions.