We are continuing to progress towards our goal of releasing a public testnet with Mutable Data. The internal testnet we ran last week exposed a few issues and we’ve been working on fixing them over the last few days. We had hoped to release the public testnet this week, but releasing with these known bugs would have meant diverting development time to supporting them, and we still need to soak test the various attack vectors that apply to client-only test networks. Patches for a few of these issues are already in place, with more to be merged to master soon and tested thoroughly to confirm they satisfy the spam prevention requirements. We realise everyone is very keen for us to release the new testnet, and then Alpha 2, as soon as possible, and we will.

Some of these issues had to do with file synchronisation. Only when multiple apps (each of which is now a separate process) were run together with the authenticator did we notice that data put into the common file used by mock-routing/vaults was not being picked up properly by other processes, leading to some test failures. These are now resolved. Another issue concerned the current (temporary) delete handling, which leaves deleted entries behind and merely blanks out their values. Some parts of the code incorrectly assumed that an entry was only gone if its key was absent, when in fact an entry that is present with a blanked-out value should have been treated the same way. This paradigm can be awkward, and it is accentuated when apps are revoked and reauthorised: on each revocation the previous entries need to be re-encrypted for protection. Since we cannot erase, we have to leave the previous entries blanked out and insert the new ciphertext, so the number of entries and the size of the MutableData grow even when no app has done anything. We intend to address this by providing a way to actually delete entries, and by using a separate range of type tags to differentiate the types that don’t allow strict deletion. We are still deliberating on this, though, and it is not a confirmed approach yet.
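To illustrate the presence check described above, here is a minimal sketch (all names hypothetical, not the actual safe_client_libs code) of treating a blanked-out value the same as an absent entry:

```python
# Illustrative sketch: under the temporary delete handling, a "deleted"
# entry keeps its key but has its value blanked out, so presence checks
# must treat an empty value the same as absence.

def entry_present(entries, key):
    """An entry counts as present only if it exists AND is not blanked."""
    value = entries.get(key)
    return value is not None and value != b""

entries = {b"app-key": b"ciphertext"}
assert entry_present(entries, b"app-key")

# "Deleting" blanks the value; the key stays behind, so the entry count
# (and the MutableData size) never shrinks.
entries[b"app-key"] = b""
assert not entry_present(entries, b"app-key")
assert len(entries) == 1
```

This also shows why the structure only grows: every revocation cycle blanks old keys and inserts fresh ones, so the map accumulates dead entries until true deletion is supported.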

Recruitment

As you will know from previous updates, recruitment is a focus for us. We are all acutely aware of development timescales and are doing as much as we can to shorten them. With regard to recruiting network engineers, we have just made an offer to a developer and have several more candidates at various stages in our process, which is a combination of a technical interview, a coding challenge and a cultural fit interview. We are currently screening about 20 CVs per week, so hopefully we will be able to grow the team with really high-quality engineers. We’re also delighted to announce that @joshuef will be joining @Krishna_Kumar’s team from next week. As many will know, Josh was the creator of the SAFE Browser and it is great to have a developer of his calibre on board. We have also hired two additional team members for the operations team, an admin/finance manager and an office manager, to help support the team as we move forward with launch preparations and a growing number of external relationships.

Upcoming meetups

The SAFE Network San Francisco meetup group is being revived by @hunterlester. He is organising a meetup about IoT devices for the SAFE Network on July 11.

We’re grateful to Mozilla SF for graciously allowing us to use their lovely community events room to host this meetup.

Thank you to core Rust engineer, Brian Anderson (@brson), for volunteering to help organise and oversee our event.

This meetup will be heavy on fun experimentation, making and building, and very light on presentations, if there are any at all. The initial focus will be on building IoT devices as inexpensively as possible to interface with the SAFE Network.

The SAFE Network London meetup group is organising a meetup on July 5:

The first Project SAFE gathering of 2017 will combine an update on the SAFE protocol, demos and a social gathering. We are awaiting confirmation of the venue (RISE has moved location).

See the event page for more information.

SAFE Authenticator & API

We are continuing to test the APIs and the applications against the actual network. Testing with the actual network has exposed a few gaps in the mock implementation, and we are addressing those gaps as we continue to test. We have made good progress on the issues we had come across and hope only a few remain. Issues, especially around revocation and reauthorisation, have multiple scenarios to test, and we are making steady progress.

While most of the team is working on fixing these issues and testing, @hunterlester has been building a tool that can serve as a playground for the DOM APIs. Most of the requests from developers related to the DOM API, so we decided to put together a tool to help with learning the APIs. @hunterlester has created a small video teaser showing how it is shaping up.

The Java API is also catching up with the changes in the FFI APIs and the test cases are being updated. @Kumar is hoping to wrap up the APIs soon.

SAFE Client Libs & Vault

After more than half a year of development, the mutable-data branch of SAFE Client Libs has been merged into master, along with utility repositories and crates such as self_encryption. This means that from now on we won’t be maintaining older versions of these crates and will focus only on MutableData, which is now upstream.

This week, a lot of time was spent on debugging and resolving multiple issues in safe_client_libs, including those described in the previous update. First, we found some synchronisation issues in the mock-vault implementation: while it worked fine when confined to a single process, the front-end team started to get some strange bugs when running several processes against the same vault at once (e.g. the browser with the authenticator and an app). In that case, changes made in one process weren’t visible to another. We have now made a change to synchronise the in-memory vault state with the file system on each read operation.
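The fix described above can be sketched roughly as follows. This is a simplified, hypothetical model (not the actual Rust implementation, and it omits the file locking a real multi-process setup would need): the vault reloads the shared file on every read instead of trusting a per-process in-memory copy.

```python
# Hypothetical sketch: reload the shared state file before every read so
# that changes made by another process (e.g. the authenticator) are
# visible without restarting. Real code would also need file locking.
import json
import os
import tempfile

class MockVault:
    def __init__(self, path):
        self.path = path

    def _load(self):
        if not os.path.exists(self.path):
            return {}
        with open(self.path) as f:
            return json.load(f)

    def get(self, key):
        # Synchronise in-memory state with the file system on each read.
        return self._load().get(key)

    def put(self, key, value):
        state = self._load()
        state[key] = value
        with open(self.path, "w") as f:
            json.dump(state, f)

path = os.path.join(tempfile.mkdtemp(), "mock_vault.json")
writer, reader = MockVault(path), MockVault(path)  # stand-ins for two processes
writer.put("chunk", "data")
assert reader.get("chunk") == "data"  # visible to the other "process"
```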

Besides that, we’ve fixed multiple errors in the handling of app revocation (caused by incorrect handling of removed entries), added graceful handling of low-balance errors in mock-vault, and fixed incorrect propagation of the network connection status.

Vaults have seen some improvements in limiting concurrent mutations of a single MutableData instance. Previously, in the edge case of approaching the limit on the number of entries or the overall size, a client could send the maximum number of allowed mutations and vaults would have to honour all of those requests, because the order of accumulation is unknown under eventual consistency, so the limit could be slightly exceeded. This was never considered a big problem (there was a limit on the number of mutation requests that could be handled at a time), but it existed and was known. Since MutableData allows for independent entries, we have changed the way these checks work: they now take into account not only the data in the chunk store but also the pending writes in the cache, so it should no longer be possible to exceed the maximum allowed limit at all. The change has been applied.
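The essence of the changed check can be sketched as follows (names and the limit value are illustrative, not the actual vault code): a mutation is only accepted if the stored entries *plus* the writes still pending in the cache stay within the limit.

```python
# Hypothetical sketch: count entries already in the chunk store AND
# mutations still pending in the cache before accepting a new request,
# so concurrent mutations cannot overshoot the entry limit.
MAX_ENTRIES = 1000  # illustrative limit

def accept_mutation(stored_entries, pending_writes, new_entries):
    """Accept only if the total after all in-flight writes stays in bounds."""
    return stored_entries + pending_writes + new_entries <= MAX_ENTRIES

# 990 entries stored with 8 writes pending: a 3-entry request must be
# rejected, even though the chunk store alone is still under the limit.
assert accept_mutation(stored_entries=990, pending_writes=0, new_entries=3)
assert not accept_mutation(stored_entries=990, pending_writes=8, new_entries=3)
```

The design point is that the old check looked only at the chunk store, so several concurrent requests could each pass individually and together push the data past the limit.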

Routing

There was nothing stopping clients from sending disallowed RPCs to the network. We have now added checks to disconnect, right at the proxy, any client that tries to send a non-allowed RPC. This reduces the effort the network spends on a misbehaving client, and in future PRs we will also try to penalise such behaviour when it is detected. To prevent clients from spamming the network, we have introduced a rate limiter. We start by defining the total bandwidth the network can handle non-disruptively on behalf of clients (bandwidth used for churn, data relocation, etc. is over and above this). These numbers will likely be tweaked based on test results and actual network capability. We then divide that figure by the total number of potential proxies we plan the testnet for, giving the maximum client throughput allowed per proxy. Finally, depending on the total number of connected clients, we distribute this bandwidth evenly among them as a fair-usage policy. The rate limiter ensures that no client goes above the maximum throughput it is allowed (throttling). There is an upper cap of 10 clients per proxy.
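The bandwidth arithmetic above can be sketched like this. All numbers are made up for illustration; the real budget and proxy counts are still to be tuned from test results:

```python
# Hypothetical sketch of the rate-limiter arithmetic: split the client
# bandwidth budget across proxies, then share each proxy's slice evenly
# among its connected clients (capped at 10 per proxy).
MAX_CLIENTS_PER_PROXY = 10

def per_client_throughput(total_client_bandwidth, num_proxies, connected_clients):
    if not (1 <= connected_clients <= MAX_CLIENTS_PER_PROXY):
        raise ValueError("a proxy serves between 1 and 10 clients")
    per_proxy = total_client_bandwidth / num_proxies
    return per_proxy / connected_clients

# e.g. a (made-up) 80 MB/s budget over 8 proxies = 10 MB/s per proxy,
# shared evenly by 5 connected clients -> 2 MB/s each.
assert per_client_throughput(80.0, 8, 5) == 2.0
```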

Next, we only allow one client per IP at the proxy layer, which establishes the physical network connection to clients. This further prevents a malicious client from bypassing the system by restarting each time as a new, unregistered client. We are also working on banning an IP for a period when malicious activity is detected, so that a client cannot simply restart and log in again from the same IP. It can, however, move to another proxy if needed, until it has exhausted all of them, after which it must wait for the ban to expire with at least one proxy before it can bootstrap to the network again. Of course, an attacker who chooses to spend resources acquiring many different IPs could still attempt to DDoS everything, but that is not the attack vector we are catering for currently (at least not in the upcoming testnets, where the network size is limited to a fraction of what we’d expect when users are also running vaults and contributing to the various network personas).
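A minimal model of the per-IP admission rule and the timed ban might look like this (entirely hypothetical names; the real proxy logic lives in the Rust routing code and is more involved):

```python
# Hypothetical sketch: a proxy admits at most one client per IP and can
# ban an IP for a period after detected misbehaviour. Timestamps are
# passed in explicitly to keep the example deterministic.
import time

class ProxyGate:
    def __init__(self, ban_seconds):
        self.ban_seconds = ban_seconds
        self.connected = set()  # IPs with a live client connection
        self.banned = {}        # IP -> ban expiry timestamp

    def try_connect(self, ip, now=None):
        now = time.time() if now is None else now
        if self.banned.get(ip, 0) > now:
            return False        # still serving a ban at this proxy
        if ip in self.connected:
            return False        # one client per IP
        self.connected.add(ip)
        return True

    def ban(self, ip, now=None):
        now = time.time() if now is None else now
        self.connected.discard(ip)
        self.banned[ip] = now + self.ban_seconds

gate = ProxyGate(ban_seconds=60)
assert gate.try_connect("1.2.3.4", now=0)
assert not gate.try_connect("1.2.3.4", now=0)   # second client, same IP
gate.ban("1.2.3.4", now=0)
assert not gate.try_connect("1.2.3.4", now=30)  # ban still active
assert gate.try_connect("1.2.3.4", now=61)      # ban expired
```

Note the ban is per proxy, matching the text: a banned client can still try other proxies until all of them have banned it.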