avivz78 (Newbie; Activity: 21; Merit: 0)

New paper: Accelerating Bitcoin's Transaction Processing
December 06, 2013, 08:54:27 AM (last edit: December 07, 2013, 06:09:17 PM by avivz78) #1

Hi all,



I'm a researcher at the Hebrew University in Israel, and I've been working with a student (Yonatan Sompolinsky) on a Bitcoin related paper. We have really exciting results that we are eager to share with the Bitcoin community. We would really like to get constructive feedback on our work from the many smart people in the community, which is why I am posting here.



Here is a link to the full paper: http://www.cs.huji.ac.il/~avivz/pubs/13/btc_scalability_full.pdf

Title: Accelerating Bitcoin's Transaction Processing (Fast Money Grows on Trees, Not Chains)



Edit: Thanks for all the good feedback! Recap of main issues and questions added below.



As the paper is quite lengthy, and is written for an academic audience (which, sadly, is not usually familiar with the protocol) we thought it best to also provide a quick explanation of our results aimed at Bitcoiners:



tl;dr: We suggest a protocol modification to the block chain that securely allows blocks to be generated around once per second, can handle over 200 transactions per second at these rates, and consumes under 0.5 MB/s of bandwidth (less at rates below 200 TPS). All of this with no increased susceptibility to 50% attacks. This essentially solves the problem that caused Satoshi to set the 10 minute target for the block creation rate. We also analyze the number of transactions per second Bitcoin can handle with and without our modification. We note that block propagation times are the primary obstacle for scalability.





A more detailed explanation follows below. The primary goal of our research is to address Bitcoin's ability to process transactions quickly, and in large volumes. Here are our main findings:



Scalability, Delays & Security:





We begin our paper by examining the exact effects of high transaction rates on Bitcoin's security (following in the footsteps of previous work by Decker and Wattenhofer). The number of transactions per second (TPS) that Bitcoin can handle is limited by two main things: 1) the block creation rate (of 1 block every 10 minutes) and 2) the block size limit (currently a 1MB default). These two parameters combine to limit the number of transactions per second that Bitcoin can process. The straightforward way to increase the TPS is to either increase the block size, or to increase the block creation rate. Both of these changes are controversial, and for good reason: both may affect the security guarantees of the protocol.

First, let us consider an increase in the number of blocks per second (e.g., Litecoin's blocks, which are created every 2.5 minutes, or even Fastcoin with its extreme 12-second blocks). Because blocks are created quickly, many conflicting blocks are created. Most will end up off the blockchain. The same symptom occurs when the block size is increased: large blocks take longer to propagate through the network (due to bandwidth limitations), and blocks that are created in the meantime are likely built on top of older blocks, i.e., they will be wasted.



The fact that many blocks are wasted lowers the security of the network and makes it more susceptible to 50% attacks. For example, if half the blocks are wasted in this manner, the network essentially wastes half its hash power on blocks that do not contribute confirmations to transactions. An attacker that is centralized and has no delays can then execute a so-called 50% attack with only slightly more than 33% of the hash power, because it can easily create longer chains than the rest of the network (botnets that still suffer from the effect of internal delays are less effective than centralized attackers).
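The ~33% figure can be sanity-checked with a back-of-the-envelope calculation (an illustrative simplification, not the paper's full analysis): if a fraction w of the honest network's blocks are wasted, a delay-free attacker with hash-power share q outpaces the honest chain once q exceeds (1 - q)(1 - w).

```python
def attack_threshold(waste_fraction):
    """Smallest hash-power share q at which a centralized, delay-free
    attacker's chain growth rate q matches the honest network's
    effective rate (1 - q) * (1 - w).
    Solving q = (1 - q)(1 - w) gives q = (1 - w) / (2 - w)."""
    w = waste_fraction
    return (1 - w) / (2 - w)

print(attack_threshold(0.0))  # no waste: the classical 0.5 threshold
print(attack_threshold(0.5))  # half of honest blocks wasted: ~0.333
```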

Using different techniques, we analyze how many blocks end up in the chain, and how many are discarded, and use this to estimate the change in security for different parameters. Among other results, we show that transmitting blocks that contain only transaction hashes (instead of full transaction records) will greatly help scalability (i.e., this is not just a 2-fold saving in bandwidth, but rather a 16-fold increase in the number of transactions per second!).
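To see where a ~16-fold figure could come from, compare the size of a full transaction record to its hash (the 512-byte average transaction size here is our illustrative assumption; the paper derives the effect from its own analysis):

```python
TX_SIZE_BYTES = 512    # assumed average size of a full transaction record
HASH_SIZE_BYTES = 32   # a SHA-256 transaction hash

# A block carrying only transaction hashes is ~16x smaller, so the same
# bandwidth budget propagates ~16x as many transaction references.
speedup = TX_SIZE_BYTES // HASH_SIZE_BYTES
print(speedup)  # -> 16
```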



Our suggested protocol modification (which appears in section 8 of our paper):



Since high transaction rates imply many conflicting blocks are created, it would be quite useful if these blocks were not really lost. In fact, each block can be seen as supporting not just transactions inside it, but also those embedded in previous blocks. Even if a block is not in the main chain, we can still count the confirmations it gives previous blocks as valid. This is the basis of our proposed modification, which we call the "Greedy Heaviest-Observed Sub-Tree" chain selection rule.



Roughly speaking, since each block contains a hash of its predecessor, all blocks form a tree structure, rooted at the Genesis Block. Bitcoin currently selects the accepted history as the longest (or rather heaviest) chain in the tree. We suggest another approach: At each fork, pick the sub-tree containing the most blocks (or rather the blocks with greatest combined difficulty). Do this repeatedly until reaching a leaf. The path traversed is the chain that nodes should accept. But how does this help? Notice now, that an attacker that wishes to change the main-chain selected by the algorithm needs to make us change our decision in one of the forks. To do so, he needs to build more blocks than are contained in the entire sub-tree (and not just more blocks than are contained in the longest chain!).



Here is the pseudo-code of the GHOST chain selection rule:



1. SET B <- Genesis Block.
2. IF B has no successors: RETURN(B).
   ELSE: SET B <- Child of B with heaviest sub-tree.
3. GOTO 2.
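The pseudo-code above can be sketched in runnable form (a toy model, not the paper's implementation; the `Block` class and its fields are made up for illustration):

```python
class Block:
    """Toy block: real headers carry hashes of predecessors; here we
    just keep parent/child links and a difficulty."""
    def __init__(self, difficulty=1):
        self.difficulty = difficulty
        self.children = []

    def extend(self, difficulty=1):
        child = Block(difficulty)
        self.children.append(child)
        return child

def subtree_weight(block):
    """Combined difficulty of the sub-tree rooted at `block`."""
    return block.difficulty + sum(subtree_weight(c) for c in block.children)

def ghost_tip(genesis):
    """GHOST rule: at every fork, descend into the heaviest sub-tree
    until a leaf is reached."""
    b = genesis
    while b.children:
        b = max(b.children, key=subtree_weight)
    return b

# A fork: branch A is the *longest* chain (3 blocks), but branch B's
# sub-tree holds more total work (4 blocks), so GHOST prefers B.
genesis = Block()
a = genesis.extend(); a.extend().extend()                 # chain of 3
b = genesis.extend(); b.extend(); b.extend(); b.extend()  # bushy sub-tree of 4
assert ghost_tip(genesis) in b.children
```

Note how an attacker must now outweigh the whole sub-tree at a fork, not just the longest chain within it.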



The cost of such a modification: at low block creation rates and small block sizes there is almost no difference between the longest-chain rule and the GHOST rule, since the longest chain in these cases is also the heaviest subtree; there is no cost. At high transaction rates, GHOST places slightly fewer blocks in its main chain (because it doesn't always extend the longest chain), thus slightly lowering the number of transactions accepted per second, but it does so more securely! Delays and many off-chain blocks no longer make it more susceptible to 50% attacks. This implies that we can increase the block creation rate and block size to levels that were previously too risky, and easily make up for the loss in transaction volume. In fact, we estimate that 1 second blocks can easily be combined with rates of over 200 TPS. This implies quite fast authorization times, but much more importantly, an increased granularity in confirmations. Even 1 confirmation gives some level of certainty regarding irreversibility, and it comes almost instantly when blocks are generated every second.



Since Bitcoin's security relies primarily on the number of confirmations received instead of on elapsed time, we end up getting irreversibility of transactions with very high probability in far less than 10 minutes.



Recap of main issues and questions raised:

I'd like to clarify one thing: We do not claim to be able to solve every problem one might think of with regards to Bitcoin's various aspects (incentives, mining centralization, etc.), so some of the criticism targets problems that are already there in the protocol. Our main claim is that security gets very bad if high transaction rates occur, and the GHOST modification fixes that (and that alone). If you are uncomfortable with 1 second blocks, the modification is equally valuable at lower block rates. The need to address high transaction rates is still there.



Some specific comments:



1) Advantage of nodes with low latency: nodes that can reach the rest of the network quickly get more blocks on the main chain and fewer orphans.



Answer: This is a serious problem that the current Bitcoin protocol will also face if high transaction rates (i.e., large blocks) are sent through the system. Strong miners will be able to get more than their proportional share of the blocks. We don't improve things, but the GHOST modification doesn't hurt either. The only improvement we offer is that the 50% attack does not get worse in these scenarios.



2) DDOS attack: Malicious nodes can mine blocks above the genesis block at difficulty 1 and spam the network



Answer: This too is a problem that the current Bitcoin protocol faces; it's why checkpoints are constantly put in place (anyone can currently claim to have a chain with harder proofs of work, and just start sending difficulty-1 blocks in a long chain that never ends). See gmaxwell's explanation here:

https://bitcointalk.org/index.php?topic=194078.msg2014204#msg2014204.

Checkpoints can also be applied to our solution.



Edit: several people have noted that there are other solutions besides checkpoints (not currently implemented). We'll need to see if these can be adapted here as well. I currently have no reason to believe this won't be possible.



3) SPV clients need more bandwidth + storage if there are blocks every second (given that there will be 600 times as many block headers to download).



Answer: This is true, but the fix should probably be to find better mechanisms for SPV nodes. For example, it should be possible to probabilistically verify a long chain of block headers without actually downloading all of them. Here's a basic idea: have the full node build a merkle tree of all blocks + difficulties and send it to the SPV client. The SPV client picks blocks at random from the chain (weighted by the difficulty they should have) and requests merkle branches to them (from the root it was sent), then checks their difficulty. Just a few checks would be enough to verify that there is indeed enough work stored in the chain (with very high probability).
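The two client-side pieces of that idea might look roughly like this (a hypothetical sketch with made-up helper names; a real scheme would also need the difficulty claims themselves committed to the merkle root):

```python
import hashlib
import random

def h(data):
    return hashlib.sha256(data).digest()

def verify_branch(leaf, branch, root):
    """Recompute the merkle root from a leaf and its branch, given as
    (sibling_hash, sibling_is_right) pairs from the leaf level upward."""
    node = h(leaf)
    for sibling, sibling_is_right in branch:
        node = h(node + sibling) if sibling_is_right else h(sibling + node)
    return node == root

def sample_block_indices(claimed_difficulties, k, rng=random):
    """Pick k header indices with probability proportional to the
    difficulty each header claims, per the sampling idea above."""
    return rng.choices(range(len(claimed_difficulties)),
                       weights=claimed_difficulties, k=k)

# Two-leaf tree: root = H(H(a) || H(b)); the branch for `a` is H(b), on the right.
a, b = b"header-a", b"header-b"
root = h(h(a) + h(b))
assert verify_branch(a, [(h(b), True)], root)
```

High-difficulty headers are sampled more often, so faking a lot of claimed work without doing it is caught with high probability after only a few branch checks.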



4) Implementation: How would nodes know about orphaned blocks if they were offline for a while?



Answer: We suggest that each block also contain hashes of off-chain (orphaned) blocks in its header (only those not already written in blocks it built upon). This way the main chain also contains a record of all off-chain blocks, so a returning node knows what to ask for. Another reason we think it is a good idea: we think difficulty should be retargeted according to the total number of blocks created per second (incl. orphans), not based on the length of the longest chain (retargeting by chain length is harder at high rates).
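A sketch of that retargeting idea (the formula is our reading of the suggestion, in the spirit of Bitcoin's existing periodic retarget, not a specification):

```python
def retarget(old_difficulty, blocks_observed, window_seconds, target_interval=1.0):
    """Scale difficulty so that the *total* observed block rate
    (main-chain blocks plus off-chain blocks referenced in headers)
    matches the target interval."""
    observed_interval = window_seconds / blocks_observed
    return old_difficulty * (target_interval / observed_interval)

# 1200 blocks (orphans included) in a 600 s window at a 1 s target:
# blocks arrived twice as fast as desired, so difficulty doubles.
print(retarget(100.0, 1200, 600))  # -> 200.0
```

Counting orphans makes the estimate of the network's real hash rate robust even when many blocks never reach the main chain.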



5) Storage issue: a lot of space will be needed



Answer: Solutions like the mini-blockchain are still applicable, and allow you to keep only a record of recent events. Still, there is no getting around the fact that high transaction rates will require a lot of storage (it's not easy scaling to the size of PayPal / VISA). Again, this is not caused by our modification. You do not need to keep the contents of orphaned blocks -- only their headers to verify difficulty. Having more blocks in the chain per second merely implies each one of them will have fewer transactions inside it. The only added cost is keeping more headers.


avivz78 (Newbie; Activity: 21; Merit: 0)

Re: New paper: Accelerating Bitcoin's Transaction Processing
December 06, 2013, 09:36:56 AM #4

stephwen: We've submitted it, but it is still unpublished.



trout: The protocol still ends up picking a main chain, and fees + rewards still go to these blocks.

It may be interesting to reward off-chain blocks as well, but we didn't find a nice way to do this without ruining the money creation schedule.

mmitech (Legendary; Activity: 1148; Merit: 1001 -- "things you own end up owning you")

Re: New paper: Accelerating Bitcoin's Transaction Processing
December 06, 2013, 10:24:28 AM #8

I didn't read all the pages (yet), but I think you did a great job there. Did you send this to the dev team?



Peter Todd (Legendary; Activity: 1106; Merit: 1045)

Re: New paper: Accelerating Bitcoin's Transaction Processing
December 06, 2013, 10:26:21 AM #9

I'm going to have to sit down and read the paper more carefully to have a solid opinion on it, but my meta-opinion is that I think it's great to see people taking scalability and blocksize seriously on an academic level. We've got some incredibly naive ideas floating around the Bitcoin space right now - like the idea that just removing the blocksize limit entirely will work out fine - and we need research into scalability solutions that considers incentives and the resulting security carefully. That kind of reasoning tends to involve a lot of math, rather than platitudes about "market forces".

Speaking of, while the paper presents a solution preserving security guarantees, a quick skim of it doesn't seem to indicate they take into account the incentives around block propagation. If you wind up with a situation where large, centralized mining pools earn more money as part of this high-speed propagation game, even though in theory all the work being done contributes towards 51% security, the overall result may be a serious negative due to the incentives towards centralization. Lately I've done some work ( pdf ) on that topic; it's a very important crypto-currency design consideration that I'd like to see other people analyzing as well.

BTC: 1FCYd7j4CThTMzts78rh6iQJLBRGPW9fWv PGP: 7FAB114267E4FA04

EhVedadoOAnonimato (Hero Member; Activity: 630; Merit: 500)

Re: New paper: Accelerating Bitcoin's Transaction Processing
December 06, 2013, 10:30:01 AM #10

OP, I haven't read your academic paper, but I'm already interested by the summary you just made. Could you tell me if your solution has anything to do with what is proposed by Maged in this thread: https://bitcointalk.org/index.php?topic=57647.0;all

A block-tree could eventually render freezing attacks more difficult. Such an advantage alone already makes it interesting. If as a plus it also improves performance significantly, then it's definitely something that should be studied more seriously.

tholenst (Newbie; Activity: 24; Merit: 0)

Re: New paper: Accelerating Bitcoin's Transaction Processing
December 06, 2013, 10:31:59 AM #11

From what I understand, the main idea of the paper is the change in how the "valid" blockchain is selected.



Currently, when a new block arrives, figuring out whether the current valid blockchain changes costs O(1) processing time: for each block you need to remember what the cost of the chain is until this block. Right now, in your scheme, I do not see a way to improve on Theta(depth) processing time for each block which comes in. Just wondering: do you see a better way to do this? I wouldn't be surprised if an O(log(depth)) data structure exists.
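One way to get the Theta(depth) bound in practice (a hypothetical sketch, not from the paper) is to cache a sub-tree weight on every block and charge each arriving block's difficulty to all of its ancestors; re-selecting the GHOST tip then never recomputes weights from scratch:

```python
class Node:
    """Block with a cached sub-tree weight, maintained incrementally."""
    def __init__(self, parent=None, difficulty=1):
        self.parent, self.difficulty = parent, difficulty
        self.children, self.weight = [], 0
        if parent is not None:
            parent.children.append(self)
        # Charge this block's difficulty to itself and every ancestor:
        # one pass up the tree, i.e. O(depth) work per arriving block.
        node = self
        while node is not None:
            node.weight += difficulty
            node = node.parent

def ghost_tip(root):
    """Descend into the heaviest sub-tree using the cached weights."""
    b = root
    while b.children:
        b = max(b.children, key=lambda c: c.weight)
    return b

root = Node()
a = Node(root)                              # light branch: weight 1
b = Node(root); b1 = Node(b); b2 = Node(b)  # heavy branch: weight 3
assert root.weight == 5
assert ghost_tip(root) is b1
```

Whether an O(log(depth)) structure exists, as tholenst wonders, is left open here; the cached-weight walk is at least cheap and simple.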

Daily Anarchist (Hero Member; Activity: 614; Merit: 500)

Re: New paper: Accelerating Bitcoin's Transaction Processing
December 06, 2013, 10:35:38 AM #12

Quote from: Peter Todd on December 06, 2013, 10:26:21 AM
"Speaking of, while the paper presents a solution preserving security guarantees, a quick skim of it doesn't seem to indicate they take into account the incentives around block propagation. [...]"

I'm not a developer, so excuse my ignorance. But wouldn't 1 second blocks essentially make solo mining much more plausible? It seems at that rate even lowly CPU miners could pull off solo mining. Might not be profitable per se, but the chances of at least mining a block solo would be greatly increased.

Discover anarcho-capitalism today!

avivz78 (Newbie; Activity: 21; Merit: 0)

Re: New paper: Accelerating Bitcoin's Transaction Processing
December 06, 2013, 10:39:00 AM #13

Quote from: Peter Todd on December 06, 2013, 10:26:21 AM
"...the overall result may be a serious negative due to the incentives towards centralization. Lately I've done some work ( pdf ) on that topic; it's a very important crypto-currency design consideration that I'd like to see other people analyzing as well."

You make a really good point. Decentralization is hurt at higher transaction rates, and better incentives are needed. We do mention it as a problem that remains at the very end of the paper (issues related to decentralization). The problem seems inherent both to our suggestion and to Bitcoin's current implementation.

Let me just say that with small block sizes (i.e., when full bloom filters are implemented) it will be very hard to distribute blocks faster than the network does already. IMO, this is less of a problem than people think.

Peter Todd (Legendary; Activity: 1106; Merit: 1045)

Re: New paper: Accelerating Bitcoin's Transaction Processing
December 06, 2013, 10:42:06 AM #14

Quote from: Daily Anarchist on December 06, 2013, 10:35:38 AM
"I'm not a developer, so excuse my ignorance. But wouldn't 1 second blocks essentially make solo mining much more plausible? [...]"

Yeah, but if the result of the solution is such that solo-mining earns you only 50% of the revenue that hashing in a large, 25% hashing power pool got you, your incentive would be to mine in the pool instead, leading to centralization. The problem is we know that pools already have higher revenue per unit hashing power; my suspicion is the solution proposed by the paper makes that problem even worse. But I've got a flight to catch in an hour or two and will get a chance to think about it all more carefully later.

BTC: 1FCYd7j4CThTMzts78rh6iQJLBRGPW9fWv PGP: 7FAB114267E4FA04

caveden (Legendary; Activity: 1106; Merit: 1002)

Re: New paper: Accelerating Bitcoin's Transaction Processing
December 06, 2013, 10:51:38 AM #15

Quote from: Peter Todd on December 06, 2013, 10:26:21 AM
"We've got some incredibly naive ideas floating around the Bitcoin space right now - like the idea that just removing the blocksize limit entirely will work out fine - and we need research into scalability solutions that considers incentives and the resulting security carefully. That kind of reasoning tends to involve a lot of math, rather than platitudes about "market forces""

Wow... This presumption of knowledge of yours reminded me of a brilliant quote, perfectly applicable here:

Quote from: Friedrich von Hayek
"The curious task of economics is to demonstrate to men how little they really know about what they imagine they can design."

The subjective reasoning of millions can't involve "lots of math", and can't be modeled by it. You can only reach meaningful conclusions through praxeology. And if you were to actually try that, you'd probably recognize one day that you should not try to impose top-down rulings instead of organic bottom-up organizations, as if people were peons on a chessboard to be manipulated by enlightened "planners". Anyways... that's not the topic here; I don't want to derail this very interesting thread, so I'll stop here. It would be nice if you could also try not to leave insults hanging in the air like this every time you can...

Peter Todd





Offline



Activity: 1106

Merit: 1045







Legendary Re: New paper: Accelerating Bitcoin's Transaction Processing December 06, 2013, 10:51:59 AM #16

Quote from: avivz78 on December 06, 2013, 10:39:00 AM

Quote from: Peter Todd on December 06, 2013, 10:26:21 AM

Speaking of, while the paper presents a solution preserving security guarantees, a quick skim of it doesn't seem to indicate they take into account the incentives around block propagation. If you wind up with a situation where large, centralized mining pools earn more money as part of this high-speed propagation game, even though in theory all the work being done contributes towards 51% security, the overall result may be a serious negative due to the incentives towards centralization. Lately I've done some work ( pdf ) on that topic; it's a very important crypto-currency design consideration that I'd like to see other people analyzing as well.

You make a really good point. Decentralization is hurt at higher transaction rates, and better incentives are needed. We do mention it as a problem that remains at the very end of the paper (issues related to decentralization). The problem seems inherent to both our suggestion and to Bitcoin's current implementation.



Let me just say that with small block sizes (i.e., when full bloom filters are implemented) it will be very hard to distribute blocks faster than the network does already. IMO, this is less of a problem than people think.


Oh, good to hear you're thinking about this! I agree with you that there aren't easy answers yet.



Something to keep in mind is that the network distributing blocks != miners distributing blocks. Large miners and mining pools can and do peer to each other directly, so propagation delays on the global P2P bitcoin network don't affect them the same way as smaller miners who are not in a position to do that. When the block "interval" is getting down to just a few seconds, even stuff like physically locating your mining pool hashing power closer to other concentrations of hashing power may net you more revenue, with obvious problems for decentralization. You might have a small mining setup, and want to contribute that decentralized hashing power, but find you can't profitably because all the big mining pools already have peering agreements with each other and dedicated high-speed connections.
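A back-of-the-envelope sketch of why a fixed propagation disadvantage matters so much more as the block interval shrinks (the model and numbers below are illustrative, mine rather than from the paper or the post): with Poisson block arrivals at mean interval T, the probability that someone else finds a block during your d seconds of propagation lag is roughly 1 - e^(-d/T).

```python
import math

def stale_rate(delay_s: float, interval_s: float) -> float:
    """Probability that some other miner finds a block while yours is
    still propagating, assuming Poisson block arrivals:
    P(stale) = 1 - exp(-delay / interval)."""
    return 1.0 - math.exp(-delay_s / interval_s)

# The same 2-second propagation disadvantage is negligible at a
# 10-minute interval but crippling at a 2-second one.
for interval in (600, 12, 2):
    print(f"{interval:>4}s interval, 2s delay -> "
          f"{stale_rate(2, interval):.1%} stale")
# prints roughly 0.3%, 15.4%, 63.2%
```

This is exactly why well-connected pools with direct peering gain a growing edge as the interval drops toward a few seconds.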



Also I'm not convinced that bloom filters are the way to do block distribution with only txid hashes; peers simply maintaining lists of what transactions each peer knows about and transmitting short tags instead is probably more efficient in terms of bandwidth as it is not probabilistic.
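A minimal sketch of the non-probabilistic alternative described above, assuming a hypothetical relay protocol in which each node tracks which txids its peer already knows and sends a short fixed-length tag for those, falling back to the full txid otherwise (the 8-byte tag length and all names here are illustrative choices, not from the post):

```python
import hashlib

TAG_LEN = 8  # bytes per tag -- an illustrative choice, not from the post

def tag(txid: bytes) -> bytes:
    """Short deterministic tag for a txid the peer is known to have."""
    return txid[:TAG_LEN]

def relay_block(block_txids, peer_known):
    """Split a block announcement into short tags (txs the peer already
    knows) and full 32-byte txids that must still be sent whole."""
    tags, full = [], []
    for txid in block_txids:
        if txid in peer_known:
            tags.append(tag(txid))
        else:
            full.append(txid)
    return tags, full

# Toy example: the peer already knows 2 of the 3 transactions.
txids = [hashlib.sha256(bytes([i])).digest() for i in range(3)]
tags, full = relay_block(txids, peer_known=set(txids[:2]))
print(len(tags), len(full))  # -> 2 1
```

Unlike a bloom filter, this is deterministic: every tag refers to a transaction both sides have already agreed the peer knows, so there are no false positives to resolve.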



Finally, I've got a proposal for a blockchain where blocks themselves could only contain tx hashes at a fundamental level; transactions don't get verified at all in the scheme and the blockchain is strictly for proof-of-publication. Seems to me that it'd be well-suited to your idea, as the lack of verification makes it easy to have actual tx publication be something done after the fact. (though note my comments about issues when there is a lack of incentives to actually publish the data that is being committed)

BTC: 1FCYd7j4CThTMzts78rh6iQJLBRGPW9fWv PGP: 7FAB114267E4FA04
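The "blocks contain only tx hashes" idea can be sketched as a header that commits to a Merkle root over txids alone, so the chain fixes publication order without validating the transactions themselves (a hypothetical illustration of the concept, not the actual proposal linked above):

```python
import hashlib

def h(data: bytes) -> bytes:
    """Double SHA-256, as Bitcoin uses for Merkle hashing."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids: list) -> bytes:
    """Merkle root over txids alone: the block commits to *which*
    transactions were published, saying nothing about their validity."""
    layer = list(txids)
    if not layer:
        return h(b"")
    while len(layer) > 1:
        if len(layer) % 2:              # duplicate the odd leaf, Bitcoin-style
            layer.append(layer[-1])
        layer = [h(layer[i] + layer[i + 1])
                 for i in range(0, len(layer), 2)]
    return layer[0]
```

A node in such a scheme would fetch and check transaction contents lazily, after the ordering is already committed; the commitment stays a constant 32 bytes regardless of how many transactions a block publishes.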