Andreas Schildbach (Moderator, Hero Member | Activity: 483 | Merit: 500)
Next Steps and Testers wanted | February 06, 2013, 12:03:27 PM | #1
Last edit: February 12, 2013, 07:48:19 AM by Goonie



I thought I'd give you a quick update on the progress lately. I'm working on the next version of Bitcoin Wallet, which will not have a lot of user-facing features but rather focuses on much more efficient network usage.



For that to happen, Matt Corallo and Mike Hearn have implemented Bloom filters in the soon-to-be-released bitcoin-qt 0.8 and in bitcoinj (it will go into 0.7).



With Bloom filters, only the data relevant to your wallet will be transferred, plus some more for anonymity purposes. This saves network and CPU resources, and at the end of the day your battery will last longer.



You can help test this. Get a preview from:



http://code.google.com/p/bitcoin-wallet/downloads/list



Use the trusted peer preference to connect to a Bloom-enabled node. Since Bitcoin 0.8 is not released yet, you either need to compile and run your own node, or you can use Mike's riker.plan99.net.



I'd be especially interested in comments from people who have suffered from high usage of their network data plan.


Mike Hearn (Legendary | Activity: 1526 | Merit: 1008)
Re: Next Steps and Testers wanted | February 06, 2013, 04:41:30 PM | #4



Here is a 10-second explanation. Currently, SPV (lightweight P2P) clients must download all transactions from the entire Bitcoin system, even though they do not verify them or check signatures. Instead, they're downloaded, checked to see whether they send money to or from your wallet, and if not, the data is just thrown away.



This is wasteful and slow, and it doesn't scale, because it means that as more people use Bitcoin, syncing the chain gets ever slower. It also pushes people towards clients like Electrum and BitcoinSpinner, which use custom protocols that do all the work on dedicated servers.



Bloom filtering solves this. When it connects, the client creates a data structure (the filter) that represents the keys in the user's wallet and sends it to the remote peers. They then only send the transactions that match the filter across the network.



The system we have chosen has desirable properties:

- When transactions are sent to the client, they are accompanied by proofs that they were included in the block chain (Merkle branches). Remote nodes cannot send you fake money unless they are willing to mine fake chains, which is hard. (Note: they can try to omit transactions, but because you can use any Bitcoin node, they are unlikely to get away with it.)

- The Bloom filters are probabilistic. They can have false positives, but no false negatives. That means the remote peer receives a noisy view of your wallet: it'll end up sending you transactions that don't actually involve your keys (which get thrown away, of course) and can't tell which are really yours and which are not. You can choose the false positive rate to trade off bandwidth vs. privacy.
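The "false positives, but no false negatives" behaviour can be sketched with a toy Bloom filter. This is a hand-rolled illustration, not the BIP 37 filter format that bitcoinj and bitcoin-qt 0.8 actually exchange; the sizing and hash functions here are made up for the example:

```java
import java.util.BitSet;

// Toy Bloom filter: false positives are possible, false negatives are not.
// Illustration only -- not the BIP 37 wire format used on the network.
public class ToyBloomFilter {
    private final BitSet bits;
    private final int size;
    private final int hashCount;

    public ToyBloomFilter(int size, int hashCount) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashCount = hashCount;
    }

    // Derive one of k bit indices from the element via simple salted hashing.
    private int index(byte[] element, int salt) {
        int h = salt * 0x9E3779B1;
        for (byte b : element) h = (h ^ b) * 0x01000193;
        return Math.floorMod(h, size);
    }

    public void insert(byte[] element) {
        for (int i = 0; i < hashCount; i++) bits.set(index(element, i));
    }

    // May return true for an element never inserted (a false positive),
    // but always returns true for every inserted element.
    public boolean contains(byte[] element) {
        for (int i = 0; i < hashCount; i++)
            if (!bits.get(index(element, i))) return false;
        return true;
    }

    public static void main(String[] args) {
        ToyBloomFilter filter = new ToyBloomFilter(1024, 5);
        byte[] myKey = "my-wallet-key".getBytes();
        filter.insert(myKey);
        // An inserted key always matches: no false negatives.
        System.out.println(filter.contains(myKey)); // true
        // Unrelated data usually does not match, but occasionally will,
        // and the peer cannot tell real matches from false positives.
        System.out.println(filter.contains("someone-elses-key".getBytes()));
    }
}
```

Inserting more keys or shrinking the bit array raises the false positive rate, which is exactly the knob that trades bandwidth for privacy.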

In this way, we can take a step closer to having Electrum/BCSpinner levels of performance, but with strong privacy, no central points of failure, and using Satoshi's original vision of a purely peer-to-peer network.
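The Merkle-branch proofs from the first property above can be sketched like this. It is a shape-of-the-proof illustration only: real Bitcoin uses double SHA-256 over a specific serialization and byte ordering, which this skips:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.List;

// Simplified Merkle-branch check: hash the transaction together with the
// supplied sibling hashes up to the root, then compare against the Merkle
// root in the block header. Real Bitcoin uses double SHA-256 and specific
// byte ordering; this sketch only shows the structure of the proof.
public class MerkleBranch {
    static byte[] sha256(byte[] data) {
        try {
            return MessageDigest.getInstance("SHA-256").digest(data);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }

    // Hash of the concatenation of two child hashes = the parent node.
    static byte[] combine(byte[] left, byte[] right) {
        byte[] joined = new byte[left.length + right.length];
        System.arraycopy(left, 0, joined, 0, left.length);
        System.arraycopy(right, 0, joined, left.length, right.length);
        return sha256(joined);
    }

    // branch: sibling hashes from leaf level upward; leftFlags.get(i) says
    // whether the sibling at level i sits on the left of our running hash.
    static byte[] rootFromBranch(byte[] txHash, List<byte[]> branch, List<Boolean> leftFlags) {
        byte[] current = txHash;
        for (int i = 0; i < branch.size(); i++) {
            current = leftFlags.get(i) ? combine(branch.get(i), current)
                                       : combine(current, branch.get(i));
        }
        return current;
    }

    public static void main(String[] args) {
        byte[] txA = sha256("tx-a".getBytes());
        byte[] txB = sha256("tx-b".getBytes());
        byte[] root = combine(txA, txB);
        // Proof for txA in a two-leaf tree: one sibling, txB, on the right.
        byte[] derived = rootFromBranch(txA, List.of(txB), List.of(false));
        System.out.println(MessageDigest.isEqual(root, derived)); // true
    }
}
```

The proof is only log2(n) hashes for a block of n transactions, which is why a peer can convince an SPV client a transaction is in a block without sending the whole block.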

slothbag (Sr. Member | Activity: 369 | Merit: 250)
Re: Next Steps and Testers wanted | February 07, 2013, 05:55:22 AM | #8



Just upgraded from one of the 0.8 turbo builds. Renamed the database folders, syncing from the very start again. Argh, last time I went through this it was a week-long process. Downloading bootstrap again.

cardinalG (Newbie | Activity: 56 | Merit: 0)
Re: Next Steps and Testers wanted | February 07, 2013, 06:33:55 AM | #9

Quote from: Mike Hearn on February 06, 2013, 04:41:30 PM






Andreas Schildbach (Moderator, Hero Member | Activity: 483 | Merit: 500)
Re: Next Steps and Testers wanted | February 08, 2013, 08:07:59 PM | #11

Quote from: stan.distortion on February 08, 2013, 06:13:36 PM
Have port 8333 open now. 0.8 seems to be causing some network lag in other apps (noticeable in online games); not had a look at what's going on yet, maybe it's just the extra connections.



You can address that to the core Bitcoin team. Either open a ticket on their bugtracker or post in the forum.



Quote
Something strange: when I put my desktop's internal IP as a trusted peer, I just have that 1 connection. If I shut down bitcoin-qt and re-open it later, Bitcoin Wallet doesn't seem to re-connect to it; sync has been stalled for 1 hour atm. I don't think it's an issue with my wifi; it's been weak but constant the whole time.



Thanks for catching this.



Well, trusted peers are not sticky; like any other peer, if they cannot be connected to or they disconnect, they'll be thrown away. You can deliberately disconnect (see the action bar overflow menu) and reconnect by opening the app again.



I'll open a ticket for bitcoinj so addresses can be marked as sticky.


Mike Hearn (Legendary | Activity: 1526 | Merit: 1008)
Re: Next Steps and Testers wanted | February 09, 2013, 10:18:22 AM | #13

If your Bitcoin-Qt is still syncing, then it's expected to put heavy load on the system. Generally, Bitcoin is an intensive app and will only get more so. Running it alongside performance-sensitive apps like games is a recipe for problems.

Andreas Schildbach (Moderator, Hero Member | Activity: 483 | Merit: 500)
Re: Next Steps and Testers wanted | February 13, 2013, 05:51:16 PM | #18



Write-ahead caching might be a solution. It would write blocks in larger chunks and skip most of the seeking back to the start of the file to write the hash of the chain head all the time.



However, this would introduce the risk of the chain getting out of sync with the wallet(s). Thus, after getting advice from Mike, I postponed these optimizations until bitcoinj can recover from drift without replaying the whole blockchain.
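As a rough illustration of the batching idea (a hypothetical sketch, not bitcoinj's actual block store code), one could buffer serialized blocks and flush them in one large append, paying for only a single chain-head write per batch instead of one per block:

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.ArrayList;
import java.util.List;

// Hypothetical write-ahead batching for a block store: instead of seeking
// back to rewrite the chain-head hash after every block, buffer blocks and
// flush them plus one head update per batch. Not bitcoinj's real code.
public class BatchedBlockStore implements AutoCloseable {
    private final FileChannel channel;
    private final List<byte[]> pending = new ArrayList<>();
    private byte[] pendingHead;         // 32-byte hash of the chain head
    private final int batchSize;

    public BatchedBlockStore(Path file, int batchSize) throws IOException {
        this.channel = FileChannel.open(file, StandardOpenOption.CREATE,
                StandardOpenOption.WRITE, StandardOpenOption.READ);
        this.batchSize = batchSize;
        // Reserve a fixed 32-byte slot at offset 0 for the chain-head hash.
        if (channel.size() < 32) channel.write(ByteBuffer.allocate(32), 0);
    }

    public void put(byte[] serializedBlock, byte[] headHash) throws IOException {
        pending.add(serializedBlock);
        pendingHead = headHash;
        if (pending.size() >= batchSize) flush();
    }

    // One large append plus a single head-hash write per batch.
    public void flush() throws IOException {
        if (pending.isEmpty()) return;
        int total = 0;
        for (byte[] b : pending) total += b.length;
        ByteBuffer buf = ByteBuffer.allocate(total);
        for (byte[] b : pending) buf.put(b);
        buf.flip();
        channel.position(channel.size());
        while (buf.hasRemaining()) channel.write(buf);
        channel.write(ByteBuffer.wrap(pendingHead), 0); // single head update
        channel.force(false);
        pending.clear();
    }

    @Override public void close() throws IOException {
        flush();
        channel.close();
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("blocks", ".dat");
        try (BatchedBlockStore store = new BatchedBlockStore(tmp, 2)) {
            store.put(new byte[]{1, 2, 3}, new byte[32]);
            store.put(new byte[]{4, 5, 6}, new byte[32]);
        }
        System.out.println(Files.size(tmp)); // 38: 32-byte head slot + 6 block bytes
    }
}
```

The risk Andreas mentions is visible here: if the process dies between appending the blocks and writing the head (or before a flush), the store and the wallet can disagree until one of them is replayed.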





Quote from: jim618 on February 12, 2013, 08:50:58 PM
Hi Andreas,

I was wondering what sync speeds you were seeing with Bloom filters?

If I download 1 month / 3 months of blocks, I am getting about 80 blocks/second on 3G and 130 blocks/second on WiFi (1 MB down, 100 kB up). This is from riker.plan99.net.

Android Wallet and MultiBit should be getting about the same, I think, but it would be nice to have the numbers to confirm it.

My Galaxy Nexus appears to be I/O-limited, maxing out at about 40 blocks/second even with just getheaders (no Bloom filtering involved) and on a speedy & stable WLAN. My G-Slate can do up to 80 blocks/second.