gmaxwell

Blocksonly mode BW savings, the limits of efficient block xfer, and better relay
February 26, 2016, 02:48:54 AM

#1. Last edit: August 02, 2017, 10:32:42 PM by gmaxwell. Merited by Welsh (4), Kakmakr (1).



Bitcoin Core 0.12 introduced a new blocksonly setting. When set to blocksonly a node behaves normally but sends and receives no loose transactions; instead it handles only complete blocks. There are many applications for nodes where only confirmed transactions are interesting, and a node which still verifies and forwards blocks still contributes to network health-- less, perhaps, than one that relays transactions: but it also consumes fewer resources to begin with. An additional downside is that such nodes don't get the latency advantages of signature caching, since every transaction they see is totally new to them-- this isn't something miners should use.

How much less bandwidth does blocksonly use in practice? I recently measured this using two techniques: once by instrumenting a node to measure bandwidth used for blocks vs all other traffic, and again by repeatedly running in both modes for a day and monitoring the host's total network usage; both techniques gave effectively the same result.



How much is the savings? Blocksonly reduced the node's bandwidth usage by 88%.



A significant implication of this is that any scheme for bandwidth reduction which works by using already relayed transactions to reduce the size of transmitted blocks can _AT MOST_ reduce the overall bandwidth usage by 12%-- assuming that differential compression achieved an "infinity-fold" improvement. (An example of Amdahl's law.)



Why does relay use so much bandwidth? The fundamental reason is that relay in the Bitcoin protocol is linear in both the number of transactions and the number of peers (i.e. bandwidth is a function of transactions * peers). The protocol uses an 'efficient' INV scheme--- but this scheme only reduces the bandwidth by a constant factor: for every transaction a 36-byte INV is sent (or received) from _every_ peer, and the full transaction is only sent once. Since the INV is a significant fraction of the transaction size (e.g. 36 bytes vs 300), this means that every 10 or so peers use just as much bandwidth as sending the full transaction every time would take.
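To put numbers on this, a quick sketch (the 300-byte transaction is just the example size used above):

```python
# Back-of-the-envelope check of the INV overhead claim: a 36-byte INV per
# transaction per peer, while the full transaction is sent only once.
INV_BYTES = 36      # 4-byte type + 32-byte hash per announced transaction
TX_BYTES = 300      # example transaction size from the post

def relay_bytes(peers: int, tx_bytes: int = TX_BYTES) -> int:
    """Bytes spent relaying one transaction: one INV per peer,
    plus the full transaction sent (or received) once."""
    return INV_BYTES * peers + tx_bytes

# Peers at which INV traffic alone equals one full copy of the transaction:
breakeven = TX_BYTES / INV_BYTES
print(breakeven)          # ~8.3 peers, i.e. "every 10 or so peers"
print(relay_bytes(24))    # at 24 peers: 864 bytes of INVs on a 300-byte tx
```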



For the subset of nodes where blocksonly meets their needs it's available now and gives pretty much the greatest possible bandwidth savings. Full stop. For blocks the INV process is much more efficient because the ratio of INV size to a megabyte block is so large. But what can we do for all the other cases?



In the literature there are schemes for quite efficient gossip but there seems to be a trade-off between efficiency and attack robustness (for example, you could arrange nodes in a minimum spanning tree rooted at some super node(s) which would be optimal for repeated bandwidth but insanely unreliable and vulnerable to attacks). Even the simple INV scheme is vulnerable to attack: peers can delay transactions by INVing but then sending the data only after a long delay. With ephemeral anonymous peers, attack resistance is very important to us-- but fortunately we don't need to achieve optimal efficiency.



One possible scheme I've been discussing (and working on) for a while is mempool reconciliation. My work here was originally motivated by the desire to avoid relay costs for transactions which are just going to be quickly replaced or fall below the relay threshold, but this can also be applied to drastically reduce relay costs.



The idea is this: at random intervals a node will ask one of its peers for a sketch of the top X MBytes of its mempool. In its request it could also signal that it's interested only in transactions whose ancestor feerate is over some minimum threshold, along with information about the size and feerate of its own mempool. The returned sketch is an IBLT (see this reddit post for an ELI5 explanation that I created for IBLT) of some size estimated by the sender based on the information sent by the requester. Unlike the efficient block transmission IBLT proposals, this IBLT only transmits transaction IDs-- which will avoid the fragmentation overhead needed in sending whole transactions.



The requester then attempts to reconstruct the IDs using the content of its own mempool. If it is unable to reconstruct all of the IDs, it requests from another random peer with a different seed.
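For illustration, a toy version of the insert/delete/peel cycle (the cell layout, names, and parameters here are mine, not part of any proposal; a production IBLT would tune the bucket count and checksum differently):

```python
import hashlib

def _h(data: bytes, salt: int) -> int:
    return int.from_bytes(hashlib.sha256(bytes([salt]) + data).digest()[:8], "big")

class IBLT:
    """Minimal invertible Bloom lookup table over fixed-size txids.
    Each cell holds (count, xor of ids, xor of id checksums)."""
    def __init__(self, m: int = 64, k: int = 3):
        self.m, self.k = m, k
        self.count = [0] * m
        self.id_sum = [0] * m
        self.chk_sum = [0] * m

    def _cells(self, txid: bytes):
        return [_h(txid, i) % self.m for i in range(self.k)]

    def _update(self, txid: bytes, sign: int):
        tid = int.from_bytes(txid, "big")
        chk = _h(txid, 0xFF)
        for c in self._cells(txid):
            self.count[c] += sign
            self.id_sum[c] ^= tid
            self.chk_sum[c] ^= chk

    def insert(self, txid): self._update(txid, +1)
    def delete(self, txid): self._update(txid, -1)

    def peel(self, id_len: int = 8):
        """Peeling decode: repeatedly find 'pure' cells (count +/-1 whose
        checksum verifies), extract the id, and remove it.  Returns the
        ids the sender had but we lack, and vice versa."""
        plus, minus = set(), set()
        progress = True
        while progress:
            progress = False
            for c in range(self.m):
                if self.count[c] in (+1, -1):
                    txid = self.id_sum[c].to_bytes(id_len, "big")
                    if self.chk_sum[c] != _h(txid, 0xFF):
                        continue  # cell is a mixture, not a pure entry
                    (plus if self.count[c] == 1 else minus).add(txid)
                    self._update(txid, -self.count[c])
                    progress = True
        return plus, minus
```

Usage mirrors the text: the sender inserts its mempool txids and transmits the table; the requester deletes its own txids from the received table and peels, recovering the symmetric difference.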



When the node has multiple IBLTs it can use the partial solutions from each of them to help solve the other ones.



As it learns new txids that it was previously unaware of it then fetches them via getdata. When it completes a reconstruction it can INV any missing transactions towards the peers that didn't have them.



(To avoid IBLT batching delays slowing transaction propagation too much, two peers can be elected to receive INVs for every transaction immediately; similar to the existing trickle-for-privacy scheme).



Most of the academic literature on IBLTs is concerned about parameter selections that minimize the asymptotic overhead (size of IBLT vs set difference) for perfect reconstruction of the set. These parameters, however, result in quite high probabilities for complete failure. In our case, being able to recover some of a set is fine both because mempools don't need to be completely consistent, and requests to other peers will help recover the missing entries.



A similar pattern happened (for the same reason) in the invention of fountain codes (one can approximately think of an IBLT as the dual of a fountain code). To achieve better odds of reconstruction, the order of nodes in the graph (isomorphic to the number of hash buckets in IBLT) should be randomly selected from a special distribution, instead of a constant. The effect is that the mixture of node orders reduces the probability that a decode gets stuck, because it is more likely that there will be some entries that have the optimal density. For IBLT this can be done by using the hash of the set member to pick the number of buckets for that member, just like it is used to pick which buckets are used.



But right now I'm not aware of any research on what these optimal order distributions would look like for IBLTs given assumptions around capacity ratios... so I think there is a fair amount of basic science that needs to be worked out in order to really nail down a good design. I've been trying to work on making more progress here for a couple months, but the toxic environment makes the grind a little difficult. I thought I would checkpoint my thinking here in the hope that someone would have some more thoughts to contribute.



In any case, this scheme would avoid the quadratic-like behavior of relay bandwidth. It would let nodes trade latency off against relay overhead, and it would allow for a graceful way of handling the highest priority transactions first-- transactions a dozen or more blocks deep in the mempool are not going to get mined any time soon, so if they have higher relay latency from being reconciled less frequently that is no great harm. I believe it could do so without substantial increases in vulnerability to attack. The same software infrastructure could also be used for bandwidth-minimized block transfer for nodes that do have mempools (otherwise blocksonly gives optimal bandwidth gains), though latency minimization is much better accomplished via techniques like Matt's efficient block relay protocol-- since the most important thing for minimizing latency is to minimize roundtrips.


jl777




Re: Blocksonly mode BW savings, the limits of efficient block xfer, and better relay
February 26, 2016, 04:17:04 AM #4

Blocksonly, very cool and practical.



A way to reduce the size of the INV would be to just send the lowest 64 bits of each txid. Yes I know, the hash collisions, the hash collisions. But the odds of this are quite low; just have a combined hash of all the txids as well. Then a node can find out with pretty good probability if another node knows about a tx it doesn't, and if all the tx are in common the combined hash can be used to verify this. In the rare event that all the 64-bit truncations match but the combined hash doesn't, the full txids could be exchanged.
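A sketch of that fallback check (helper names are mine, purely illustrative):

```python
import hashlib

def short_id(txid: bytes) -> bytes:
    """Lowest 64 bits of the txid, as suggested."""
    return txid[-8:]

def combined_hash(txids) -> bytes:
    """Order-independent digest over the full txid set, used to catch the
    rare case where truncated ids match but the sets actually differ."""
    return hashlib.sha256(b"".join(sorted(txids))).digest()

def sets_match(mine, their_short_ids, their_combined) -> bool:
    """True only if both the truncated ids and the combined hash agree.
    A combined-hash mismatch despite matching short ids signals a 64-bit
    collision, so the peers would fall back to exchanging full txids."""
    if {short_id(t) for t in mine} != set(their_short_ids):
        return False
    return combined_hash(mine) == their_combined
```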



A way to avoid the quadratic behavior is for each node to create a network topology map and have a mechanism to route the message to the likely nodes along the likely paths. Before you say, the privacy, the privacy. All this information is already known to the attacker, so what harm for honest nodes to take advantage of the local topology?



With a combined push protocol on top of a streamlined pull to fill in the gaps, I think the behavior would reduce from quadratic to something a lot more scalable.



In the extreme case of a single node that is connected to all 6000 nodes, it could broadcast a tx to all nodes at once. The problem with this is the redundant packets going to nodes that already have it. A naive approach would be to have a 6000-bit bitmap indicating which nodes have seen a packet and use that to optimize the broadcasting. Clearly this can be abused, but getting a tx broadcast with minimal redundancy feels like a good starting point.



Then with a protocol similar to the current INV process, the nodes can still verify they have all the tx their peers have, so the broadcast seems like it just seeds that many more peers with the tx and avoids local retransmission between nodes.



James



P.S. Broadcasting would of course point directly at the node that broadcast, so if privacy of that is a concern, then a privacy-centric node would need to have a different node broadcast the tx and make sure the tx gets to that node in a secure way.


gmaxwell

Re: Blocksonly mode BW savings, the limits of efficient block xfer, and better relay
February 26, 2016, 04:29:56 AM #5



We care a lot about performance in the adversarial setting. It is very cheap to compute 64-bit collisions, so a trouble maker could cheaply break that scheme, causing it to always fall back... there are ways around that... or an even better improvement is possible: I can tell you how to reduce the "ID" overhead to _zero_ for at least the first time that ID is sent (and if the set reconciliation were perfectly efficient, there really only would be a 'first' time):

Instead of using transaction hashes as identifiers, pass the transaction through a cryptographic permutation (e.g. a large-block blockcipher) and then use the initial N bits as the "ID"; when sending the txn, send it in encoded form and don't send that part. Now the bandwidth used sending IDs at least counts towards sending the transactions. (I describe this in the network block coding article from a while back.)
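A minimal sketch of the idea. The permutation here is a toy alternating-Feistel construction over SHA-256, NOT a vetted wide-block cipher, and the fixed key is an assumption (a real deployment would use a keyless or network-agreed permutation); it only demonstrates the accounting: ID plus body together are exactly the transaction's length, so the ID bytes are no longer pure overhead.

```python
import hashlib

def _prf(key: bytes, rnd: int, block: bytes, n: int) -> bytes:
    """Expandable round function (illustrative only)."""
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + bytes([rnd, ctr]) + block).digest()
        ctr += 1
    return out[:n]

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def permute(tx: bytes, key: bytes, rounds: int = 4, invert: bool = False) -> bytes:
    """Length-preserving permutation: alternating XOR rounds on the two
    halves.  Running the rounds in reverse order inverts it exactly."""
    h = len(tx) // 2
    L, R = tx[:h], tx[h:]
    order = range(rounds - 1, -1, -1) if invert else range(rounds)
    for i in order:
        if i % 2 == 0:
            L = _xor(L, _prf(key, i, R, len(L)))
        else:
            R = _xor(R, _prf(key, i, L, len(R)))
    return L + R

def to_id_and_body(tx: bytes, key: bytes, n: int = 8):
    """First n bytes of the permuted tx serve as the 'ID'; only the
    remainder needs to be sent once the ID is known."""
    ct = permute(tx, key)
    return ct[:n], ct[n:]

def from_id_and_body(txid: bytes, body: bytes, key: bytes) -> bytes:
    return permute(txid + body, key, invert=True)
```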



If one really must use IDs smaller than 160 bits, the ID could be seeded by a random value agreed between peers (best for security) to prevent collisions, but that would break multiparty reconstruction, so a recent block hash could be used instead (boo, still has collisions; but harder to compute).



Personally I don't think the small constant factor savings from smaller IDs is worth the software complexity, especially the introduction of infrequently tested error conditions... the big gain here comes from getting rid of the quadratic behavior. If that is successful the bandwidth should be dominated by the size of the actual transactions themselves, and so even big improvements in the rest wouldn't matter much.



Quote from: jl777 on February 26, 2016, 04:17:04 AM A way to avoid the quadratic behavior is for each node to create a network topology map and have a mechanism to route the message to the likely nodes along the likely paths.

But then you can place misbehaving nodes in the minimum cut of those graphs (or DOS the nodes in those positions)-- that is part of what I meant by efficient gossip being in some conflict with attack resistance.



Quote With a combined push protocol on top of a streamlined pull to fill in the gaps, I think the behavior would reduce from quadratic to something a lot more scalable.



Random walks in random graphs tend to fill pretty efficiently, even with low fan-out. I believe that simple trickling behavior combined with set reconciliation to escape cycles kills the quadratic behavior and can do so without introducing any new vulnerabilities or major slowdowns.


jtoomim




Re: Blocksonly mode BW savings, the limits of efficient block xfer, and better relay
February 26, 2016, 05:28:57 AM #7



Quote from: gmaxwell on February 26, 2016, 02:48:54 AM The protocol uses an 'efficient' INV scheme --- but this scheme only reduces the bandwidth by a constant factor: For every transaction a 36 byte long INV is sent (or received) from _every_ peer, and the full transaction is only sent once. Since the INV is a significant fraction of the transaction size (e.g. 36 bytes vs 300) this means that every 10 or so peers uses just as much bandwidth as sending the full transaction every time would take.



Actually, the INV scheme is not very efficient at all. Once you take into account Bitcoin, TCP (including ACKs), IP, and ethernet overheads, each INV takes 193 bytes. That's 127 bytes for the INV message and 66 bytes for the ACK. All of this is for 32 bytes of payload, or an "efficiency" of 16.5% (i.e. 83.5% overhead). For a 400 byte transaction with 20 peers, this can result in 3860 bytes sent in INVs for only 400 bytes of actual data.



An improvement that I've been thinking about implementing (after Blocktorrent) is an option for batched INVs. Including the hashes for two txes per IP packet instead of one would increase the INV size to 229 bytes for 64 bytes of payload -- that is, you add 36 bytes to the packet for every 32 bytes of actual payload. This is a marginal efficiency of 88.8% for each hash after the first. This is *much* better.



Waiting a short period of time to accumulate several hashes together and send them as a batched INV could easily reduce the traffic of running bitcoin nodes by a factor of 2, and possibly even more than that. However, such a technique would slow down the propagation of transactions across the bitcoin network, which might make some people unhappy. The ill effects could likely be mitigated by choosing a different batch size for each peer based on each peer's preferences. Each node could choose one or two peers to which they send INVs in batches of one or two, four more peers in which they send batches of two to four, and the rest in batches of four to eight, for example.
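The figures above can be reproduced with a two-term model: per-packet framing (Bitcoin header plus TCP/IP/ethernet plus the ACK, 193 - 36 = 157 bytes around a single 36-byte entry, derived from the numbers quoted) and 36 bytes per additional hash:

```python
# Reproducing the overhead arithmetic from the post.
PACKET_OVERHEAD = 193 - 36   # 157 bytes of framing around one 36-byte entry
HASH_PAYLOAD = 32            # useful bytes per announced txid

def inv_bytes(n_hashes: int) -> int:
    """Wire bytes for one INV packet carrying n_hashes entries."""
    return PACKET_OVERHEAD + 36 * n_hashes

print(inv_bytes(1))                 # 193 bytes for a single-hash INV
print(HASH_PAYLOAD / inv_bytes(1))  # ~16.5% efficiency
print(20 * inv_bytes(1))            # 3860 bytes of INVs across 20 peers
print(inv_bytes(2))                 # 229 bytes for a two-hash INV
print(HASH_PAYLOAD / 36)            # ~88.9% marginal efficiency per extra hash
```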



Anyway, I don't actually read Bitcointalk very much, so I probably won't notice if any of you reply. I'm going to post this message on bitcoin-dev as well. Please reply there if you have comments. Blocks-only mode is a great new feature, and will be useful for many people; thanks to everyone who worked on it.

gmaxwell

Re: Blocksonly mode BW savings, the limits of efficient block xfer, and better relay
February 26, 2016, 05:39:52 AM

#8. Last edit: February 26, 2016, 05:52:30 AM by gmaxwell

Quote from: jtoomim on February 26, 2016, 05:28:57 AM: An improvement that I've been thinking about implementing (after Blocktorrent) is an option for batched INVs. Including the hashes for two txes per IP packet instead of one would increase the INV size to 229 bytes for 64 bytes of payload -- that is, you add 36 bytes to the packet for every 32 bytes of actual payload. This is a marginal efficiency of 88.8% for each hash after the first. This is *much* better.



Waiting a short period of time to accumulate several hashes together and send them as a batched INV could easily reduce the traffic of running bitcoin nodes by a factor of 2, and possibly even more than that.

Uh. Bitcoin has done this since the very early days. The batching was temporarily somewhat hobbled between 0.10 and 0.12 (especially when you had any abusive frequently-pinging peers attached), but it is now fully functional again and manages to batch many transactions per INV pretty effectively. Turn on net message debugging and you'll see the many INVs that are much larger than the minimum. The average batching interval (ignoring the trickle cut-through) is about 5 seconds, and usually gets about 10 transactions per INV. My measurements were with these fixes in effect; I expect the blocksonly savings would have been higher otherwise.





2016-02-26 05:47:08 sending: inv (1261 bytes) peer=33900

2016-02-26 05:47:08 sending: inv (109 bytes) peer=32460

2016-02-26 05:47:08 sending: inv (37 bytes) peer=34501

2016-02-26 05:47:08 sending: inv (217 bytes) peer=33897

2016-02-26 05:47:08 sending: inv (145 bytes) peer=41863

2016-02-26 05:47:08 sending: inv (37 bytes) peer=35725

2016-02-26 05:47:08 sending: inv (73 bytes) peer=20567

2016-02-26 05:47:08 sending: inv (289 bytes) peer=44703

2016-02-26 05:47:08 sending: inv (73 bytes) peer=13408

2016-02-26 05:47:09 sending: inv (649 bytes) peer=41279

2016-02-26 05:47:09 sending: inv (145 bytes) peer=42612

2016-02-26 05:47:09 sending: inv (325 bytes) peer=34525

2016-02-26 05:47:09 sending: inv (181 bytes) peer=41174

2016-02-26 05:47:09 sending: inv (469 bytes) peer=41460

2016-02-26 05:47:10 sending: inv (973 bytes) peer=133

2016-02-26 05:47:10 sending: inv (361 bytes) peer=20541
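Since an INV payload is a one-byte count (for fewer than 253 entries) plus 36 bytes per entry, the sizes in the log above decode directly into batch counts. A quick check:

```python
# Decode the INV payload sizes from the log into transactions per message.
def entries_in_inv(payload_bytes: int) -> int:
    return (payload_bytes - 1) // 36   # 1-byte varint count + 36 bytes/entry

sizes = [1261, 109, 37, 217, 145, 37, 73, 289, 73,
         649, 145, 325, 181, 469, 973, 361]
counts = [entries_in_inv(s) for s in sizes]
print(counts)                       # 37 bytes -> 1 tx, 361 -> 10, 1261 -> 35
print(sum(counts) / len(counts))    # ~9 per INV in this snippet, near the stated ~10
```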





Twiddling here doesn't change the asymptotic efficiency though; which is what my post is about.



[I'm also somewhat surprised that you were unaware of this; one of the patches "classic" was talking about patching out was the one restoring the batching... due to a transaction deanonymization service (or troll) claiming it interfered with their operations.]

sickpig




Re: Blocksonly mode BW savings, the limits of efficient block xfer, and better relay
February 26, 2016, 08:51:32 AM

#10. Last edit: February 26, 2016, 09:28:44 AM by sickpig

Quote from: gmaxwell on February 26, 2016, 02:48:54 AM: How much less bandwidth does blocksonly use in practice? I recently measured this using two techniques: Once by instrumenting a node to measure bandwidth used for blocks vs all other traffic, and again by repeatedly running in both modes for a day and monitoring the hosts total network usage; both modes gave effectively the same result.



How much is the savings? Blocksonly reduced the node's bandwidth usage by 88%.



Do you care to share the raw data? I would like to independently verify your claim.



edit: the patch that introduced the code changes to measure the different types of bandwidth would be even better.

TierNolan




Re: Blocksonly mode BW savings, the limits of efficient block xfer, and better relay
February 26, 2016, 11:42:24 AM #11

Quote from: sickpig on February 26, 2016, 08:51:32 AM: Do you care to share the raw data? I would like to independently verify your claim.



It should be moderately easy to do a back-of-the-envelope calculation.



If a node has 24 peers, then it will send/receive 24 invs. That is 36 * 24 = 864 bytes. It also has to receive the transaction at, say 400 bytes.



It will also receive the transaction in the full block for another 400 bytes (invs being negligible).



With blocks only, it only receives the 400 bytes. That gives an efficiency of 400 / (400 + 864) = 0.31.



If inv messages were only 33% efficient (due to headers etc), then that gives 400 / (400 + 864 * 3) = 0.13.



That is pretty close to the stated number.
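Running the same envelope arithmetic (the 400-byte transaction and 24 peers are the assumptions stated above):

```python
# TierNolan's back-of-the-envelope numbers.
TX = 400            # assumed transaction size in bytes
INV = 36            # bytes per INV entry
PEERS = 24

inv_total = INV * PEERS                 # 864 bytes of INVs
normal = TX + inv_total                 # tx received once plus all the INVs
print(TX / normal)                      # ~0.32: blocksonly vs normal usage

# If each INV really costs ~3x its payload in framing (per jtoomim's figures):
print(TX / (TX + inv_total * 3))        # ~0.13, close to the measured 12%
```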



I wonder if simple Bloom filters would be the way to go (rather than IBLT). The sender sends a Bloom filter (sender_filter) containing all transactions that it added to its memory pool since the last (5 minute) broadcast. The receiver creates an empty Bloom filter (receiver_filter) with the same parameters. It finds the transactions in its memory pool that match sender_filter and adds them to receiver_filter. If both filters end up matching, then it assumes that it has all the transactions.



If not, it XORs the two filters (=rle_filter) and sends the result as a reply. Run length encoding could be used, since the XOR should be mostly strings of zeros.



The sending node generates receiver_filter using (sender_filter XOR rle_filter). It sends on any transactions that were in sender_filter that aren't in receiver_filter.
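A sketch of that exchange (the filter size, hash count, and the RLE pairing are illustrative choices, not a spec):

```python
import hashlib

class Bloom:
    """Tiny Bloom filter with a fixed bit count and k hash functions."""
    def __init__(self, m_bits: int = 256, k: int = 3):
        self.m, self.k = m_bits, k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item: bytes):
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item: bytes):
        for p in self._positions(item):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, item: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8)) for p in self._positions(item))

def receiver_filter(sender: "Bloom", mempool) -> "Bloom":
    """Receiver side: re-derive the sender's filter from its own mempool
    by adding only the transactions that match sender's filter."""
    f = Bloom(sender.m, sender.k)
    for txid in mempool:
        if txid in sender:
            f.add(txid)
    return f

def xor_filters(a: "Bloom", b: "Bloom") -> bytes:
    """XOR of two same-parameter filters; mostly zero bytes when the
    mempools mostly agree, hence cheap to run-length encode."""
    return bytes(x ^ y for x, y in zip(a.bits, b.bits))

def rle(data: bytes):
    """Run-length encode as (byte, run) pairs; effective on long zero runs."""
    out, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        out.append((data[i], j - i))
        i = j
    return out
```

If the two filters come out identical the peers assume their mempools agree; otherwise the XOR (RLE-compressed) goes back, letting the sender reconstruct receiver_filter and push over the missing transactions.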

belcher




Sr. Member | Activity: 261 | Merit: 328

Re: Blocksonly mode BW savings, the limits of efficient block xfer, and better relay
February 26, 2016, 03:30:50 PM | #13

Quote from: TooDumbForBitcoin on February 26, 2016, 02:47:53 PM
Quote from: gmaxwell on February 26, 2016, 05:39:52 AM

...nice technical stuff....



[I'm also somewhat surprised that you were unaware of this; one of the patches "classic" was talking about patching out was the one restoring the batching... due to a transaction deanonymization service (or troll) claiming it interfered with their operations.]



Nice technical post, but why the drama at the end?


I'd say that ending of drama will come from solving the underlying disagreements, not by just one side keeping quiet.



Efficiency improvements like blocksonly can help more of the bitcoin economy adopt full nodes. They can help stop any attempt by miners to force the real economy onto a previously-invalid blockchain. The only reason Bitcoin Classic is even a danger is that mining is relatively centralized and much of the bitcoin economy uses SPV-security wallets that completely trust the miners and would simply go along with almost any hardfork.



With this in mind, you can see it's not possible to separate the technical discussion from the political fight over how bitcoin should scale.



Aside: I just went to search for that PR to bitcoin classic but couldn't find it. Has it been deleted?


TierNolan




Legendary | Activity: 1232 | Merit: 1006

Re: Blocksonly mode BW savings, the limits of efficient block xfer, and better relay
February 26, 2016, 04:47:10 PM | #14

Having a look at the code, the effects of the flag are as follows:

- The version message sets the relay flag to zero.
- A warning is printed if an inv message is received (and the inv is ignored).
- A warning is printed if a tx message is received (and the tx is ignored).
- The warnings are disabled for whitelisted peers if the whitelistrelay flag is set.

It doesn't count as misbehavior though: a peer could send lots of transactions and inv messages and not get banned.

Transactions that are received from whitelisted peers are still relayed fully.
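For reference, the mode described above is switched on from bitcoin.conf; a minimal sketch (the whitelist address is a made-up example):

```
# Don't request or relay loose transactions; blocks are still fetched in full.
blocksonly=1
# A whitelisted peer (example address) may still send us transactions...
whitelist=192.0.2.10
# ...and have them relayed onward, when whitelistrelay is enabled.
whitelistrelay=1
```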

trout




Sr. Member | Activity: 334 | Merit: 250

Re: Blocksonly mode BW savings, the limits of efficient block xfer, and better relay
February 26, 2016, 05:30:31 PM (last edit: February 26, 2016, 05:44:48 PM by trout) | #15

Since every peer has to receive every transaction eventually, it seems that Transactions * Peers is the absolute minimum of transaction sends you'd need. Am I missing something trivial here?



As far as I understand, a peer will send every transaction to everyone he hasn't received it from. So this results in an additional multiplicative factor of something like k/2, where k is the average number of neighbours for a node. (Or is it much more than that?)



Is it this factor that you'd like to get rid of, gmaxwell?





gmaxwell


Moderator | Legendary | Activity: 3178 | Merit: 4298

Re: Blocksonly mode BW savings, the limits of efficient block xfer, and better relay
February 26, 2016, 05:47:29 PM | #16

Quote from: TierNolan on February 26, 2016, 04:47:10 PM
The warnings are disabled for whitelisted peers if the whitelistrelay flag is set.

Right. Unfortunately, Bitcoinj sends unsolicited transactions. The developers of BTCD have been complaining about that for a while, and I'd like to stop supporting unsolicited txn... but while the behavior is common we didn't want to add misbehavior for it.



Quote from: sickpig on February 26, 2016, 08:51:32 AM Do you care to share the raw data? I would like to independently verify your claim.



I have data from the confirmation test run,



RX packets 169821144 bytes 83997523200 (78.2 GiB)

RX errors 0 dropped 5 overruns 0 frame 0

TX packets 269163479 bytes 293982867649 (273.7 GiB)

TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

RX packets 171751847 bytes 84478658144 (78.6 GiB)

RX errors 0 dropped 5 overruns 0 frame 0

TX packets 271892507 bytes 296796582415 (276.4 GiB)

TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

and

RX packets 175622574 bytes 85220192328 (79.3 GiB)

RX errors 0 dropped 5 overruns 0 frame 0

TX packets 277631356 bytes 303320529705 (282.4 GiB)

TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0

RX packets 175824690 bytes 85283702303 (79.4 GiB)

RX errors 0 dropped 5 overruns 0 frame 0

TX packets 277920978 bytes 303653216190 (282.7 GiB)

TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0



These were 6 hour runs, with the node pre-primed for an hour before the run began.



As tiernolan points out, the results aren't surprising-- if you go look in my reddit comments you'll see I computed similar figures for this from first principles before measuring.



Quote from: belcher on February 26, 2016, 03:30:50 PM Quote from: TooDumbForBitcoin on February 26, 2016, 02:47:53 PM Nice technical post, but why the drama at the end?

[...]

Aside: I just went to search for that PR to bitcoin classic but couldn't find it. Has it been deleted?


Took me a bit to find it: https://github.com/bitcoinclassic/bitcoinclassic/pull/16 (I'm not trying to stir drama; I was earnestly surprised, as jtoomim commented extensively there.)



Quote from: trout on February 26, 2016, 05:30:31 PM
Since every peer has to receive every transaction eventually, it seems that Transactions * Peers is the absolute minimum of transaction sends you'd need. Am I missing something trivial here?

You are!



Imagine if the network were wired such that if a message was relayed along connection 0 on each peer it would form a Hamiltonian cycle and reach every node in the network. Thus, each peer would receive and send each transaction exactly once. This shows the bound cannot be transactions*peers.
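A toy simulation of the cycle argument (the ring wiring is hypothetical, purely to illustrate the lower bound):

```python
# Relay over a Hamiltonian cycle: each node forwards the transaction along
# "connection 0" exactly once, so total sends equal N - 1, not N * peers.
N = 6000  # illustrative network size

have_tx = {0}  # node 0 originates the transaction
sends = 0
node = 0
while len(have_tx) < N:
    node = (node + 1) % N  # connection 0 points at the next node on the cycle
    have_tx.add(node)
    sends += 1

print(sends)  # 5999: one send per node, independent of peer count
```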



Now, this is obviously pretty fragile-- and our network doesn't form a nice complete cycle. So if you want to check if your peers actually has all the transactions you have, does that reintroduce transactions*peers(*ratio of txid to tx size) traffic? No. Set reconciliation lets you communicate a set difference with size proportional to the difference, see my EL5 description of a computationally efficient set reconciliation scheme: https://www.reddit.com/r/btc/comments/43iup7/mike_hearn_implemented_a_test_version_of_thin/czirz2p



The difference is, hopefully, almost zero size-- so the communication is small even if the number of transactions are large.
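The difference-proportional cost can be illustrated with plain Python sets (a toy stand-in for the polynomial sketch described in the linked post; the txids are fabricated):

```python
# Two mempools that agree on nearly everything.
alice = {"txid%04d" % i for i in range(5000)}
bob = (alice - {"txid0007"}) | {"txid9999"}  # Bob misses one tx, has one extra

# Only the symmetric difference ever needs to cross the wire.
diff = alice ^ bob
print(sorted(diff))  # ['txid0007', 'txid9999'] -- 2 items despite ~5000 shared
```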



Quote As far as I understand, a peer will send every transaction to everyone he hasn't received it from. So this results in an additional multiplicative factor of something like k/2, where k is the average number of neighbours for a node.



Is it this factor that you'd like to get rid of, gmaxwell?

Yes, it is. Technically it sends an INV (a message with TXids) not the whole transaction, though this is only a constant factor improvement.


trout




Sr. Member | Activity: 334 | Merit: 250

Re: Blocksonly mode BW savings, the limits of efficient block xfer, and better relay
February 26, 2016, 07:14:26 PM | #17

Quote from: gmaxwell on February 26, 2016, 05:47:29 PM

Quote from: trout on February 26, 2016, 05:30:31 PM
Since every peer has to receive every transaction eventually, it seems that Transactions * Peers is the absolute minimum of transaction sends you'd need. Am I missing something trivial here?

You are!



Imagine if the network were wired such that if a message was relayed along connection 0 on each peer it would form a Hamiltonian cycle and reach every node in the network. Thus, each peer would receive and send each transaction exactly once. This shows the bound cannot be transactions*peers.


OK, I see: I was just talking about a different quantity. In your ideal example, each transaction is sent Peers times from someone to someone (which is what I was talking about), though indeed each node only sends it once.



Anyway, this settled, are you saying that currently (without set reconciliation) each node sends each transaction roughly Peers times? (Where Peers is the size of the whole network - not just the number of

neighbours). That's ... pretty bad indeed.


TierNolan




Legendary | Activity: 1232 | Merit: 1006

Re: Blocksonly mode BW savings, the limits of efficient block xfer, and better relay
February 26, 2016, 08:27:04 PM | #18

Quote from: trout on February 26, 2016, 05:30:31 PM
Since every peer has to receive every transaction eventually, it seems that Transactions * Peers is the absolute minimum of transaction sends you'd need. Am I missing something trivial here?



On average each node should receive each transaction once and send it onward once. Better connected nodes would tend to have more peers who haven't seen the transaction and so request it.



That is where the suggestion to pick 2 nodes to forward it to comes from. If each node immediately sends it to 2 others, pretty soon the whole network knows about the transaction. It doesn't matter how many you are connected to, you only really need to forward it to 2.



The more efficient method could be used later to detect peers who didn't get it.
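An idealized simulation of the forward-to-two scheme (synchronous hops, no duplicate deliveries, 6000 nodes as a rough network size):

```python
# Fanout-2 flood: every node that learned the transaction in the previous hop
# forwards it to two peers that have not yet seen it.
N = 6000
informed = 1   # the originating node
frontier = 1   # nodes that learned the tx in the last hop
hops = 0
while informed < N:
    frontier = min(frontier * 2, N - informed)
    informed += frontier
    hops += 1

print(hops)  # 12 hops to cover 6000 nodes
```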

jl777




Legendary | Activity: 1176 | Merit: 1090

Re: Blocksonly mode BW savings, the limits of efficient block xfer, and better relay
February 27, 2016, 12:17:33 AM | #19

Quote from: TierNolan on February 26, 2016, 08:27:04 PM
Quote from: trout on February 26, 2016, 05:30:31 PM
Since every peer has to receive every transaction eventually, it seems that Transactions * Peers is the absolute minimum of transaction sends you'd need. Am I missing something trivial here?



On average each node should receive each transaction once and send it onward once. Better connected nodes would tend to have more peers who haven't seen the transaction and so request it.



That is where the suggestion to pick 2 nodes to forward it to comes from. If each node immediately sends it to 2 others, pretty soon the whole network knows about the transaction. It doesn't matter how many you are connected to, you only really need to forward it to 2.



The more efficient method could be used later to detect peers who didn't get it.




With a fanout of 2 and 6000 nodes, wouldn't it be 10 to 12 hops on average? Assuming an average ping time of 300 millis, plus latency of ?, I guess it is ~5 seconds. Pretty good.

If a bit of privacy is sacrificed and it started by sending to 16 peers, I think that would reduce things by 3 hops or so. Probably not worth doing, but it would dramatically reduce the chances of a propagation "fizzling" out. That could also be avoided by retransmitting to two other peers if the tx is not seen in the network after 15 seconds.

James
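The hop estimates can be sanity-checked (the 6000-node network size and the fanout figures are the post's assumptions):

```python
import math

N = 6000
# Fanout 2 throughout: about log2(N) hops to reach everyone.
hops_fanout2 = math.log2(N)           # ~12.6, i.e. "10 to 12 hops"
# First hop to 16 peers (16 = 2**4), fanout 2 afterwards.
hops_first16 = 1 + math.log2(N / 16)  # ~9.6
print(round(hops_fanout2 - hops_first16))  # saves 3 hops, as estimated
```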
