Summary

Here are some of the main things to highlight this week:

Shared Vault

Please note that today we restarted and cleared all data from our hosted Shared Vault. This was necessary because we were starting to find compatibility issues between some of the existing data and recent updates we have made. Leaving the data intact would have made it difficult to tell whether an issue found was genuine or simply a compatibility problem. This means all Shared Vault user accounts and sites have been removed.

Apologies for any inconvenience caused.

We look forward to seeing test sites pop up in due course, but do keep in mind that this is a test vault and there will no doubt be times when we need to wipe and restart it again. We always do our best to avoid this where possible.

You can download the new vault_connection_info.config file from here.

Or if you’re using the CLI then you can use the safe networks add and safe networks switch commands, for example:

$ safe networks add shared https://safe-vault-config.s3.eu-west-2.amazonaws.com/shared-vault/vault_connection_info.config
Network 'shared' was added to the list. Connection information is located at 'https://safe-vault-config.s3.eu-west-2.amazonaws.com/shared-vault/vault_connection_info.config'
$ safe networks switch shared
Switching to 'shared' network...
Fetching 'shared' network connection information from 'https://safe-vault-config.s3.eu-west-2.amazonaws.com/shared-vault/vault_connection_info.config' ...
Successfully switched to 'shared' network in your system!
You'll need to login again, and re-authorise the CLI, if you need write access to the 'shared' network

You can find full instructions on connecting to our Shared Vault, or running your own vault, here.

Vaults Phase 2

Project plan

We’ve been debugging away at the request timeout issue. As part of the investigation work, we spotted and resolved a Windows-only issue in the tests against a single section. It was caused by client code using a fixed port for quic-p2p, which conflicted with the port being used by the vault.

With that issue resolved, the request timeout has been narrowed down to a storage request timeout within a single test that sends a number of requests simultaneously. The vaults in the section start voting for the transactions involved in these requests, and memory consumption peaks, causing one or more of the vaults to crash. We are now breaking down the vault process to identify which components contribute to this memory peak and the potential ways to optimise them. We have a few options already, but we’d like to make some more observations before we choose one.

SAFE API

Project plan

We are very excited to share a new release of the safe-api artifacts today. safe-api v0.7.0 upgrades safe_client_libs to the very latest version, 0.13.0. We also published the corresponding new version of safe-nodejs (v0.7.0), which upgrades safe-api accordingly.

This release also comes with new versions of safe-cli (v0.7.2) and safe-authd (v0.0.2). Most importantly, with these new versions users can now install safe-authd using the CLI ($ safe auth install), and will also be able to update when new versions of authd are published ($ safe auth update). The authd binary is installed by default in the user’s home directory under ~/.safe/authd/. This means CLI users no longer need to specify the path of the safe-authd binary when invoking it with the safe-cli auth commands, nor set it in the system’s PATH. We’ve updated the CLI User Guide with new details for the installation of safe-authd, so please refer to it for specific information, and of course, please share any problems/issues you may face.

As a parallel low-priority task, we’ve also invested some time in trying to migrate the authd implementation to the new Rust async / await syntax, just to see what issues we may face and start learning from them. We now have a working first draft of this migration, using the latest version of quinn. The migration also simplifies the authd implementation, making it cleaner and easier to maintain. We’ve paused this task for now, as it needs quic-p2p and quinn to be upgraded in Vaults and SCL before we can proceed further.

In addition, we’ve been working on some other feature additions to the CLI which were suggested by the community, e.g. the files ls and files tree commands. We are also working on a xorurl decode command, which will allow users to see all the information encoded in a XOR-URL, i.e. the xorname, type tag, content type, the native data type the targeted content is held in, etc.

We are aiming for more regular CLI releases now that our CI scripts can release a single artifact at a time if we wish (plus the CLI and authd update commands), so we hope new features merged to the master branch won’t wait too long before being released.

Labelled Data, Indexing and Token Authorisation

RFC, Project Plan

This week we’ve further refined our base token implementation in safe-nd, and it is now used in both Vaults and SAFE Client Libs, with added tests covering token generation, signing, basic caveats and token validity checks.

This completes the first batch of token work and should provide a good basis for actually making these tokens useful. While we’re not ready to implement labels yet, the next steps should involve generating and checking actual caveats for some operations, adding more tests for this, and eventually removing our current authentication code for those same operations.
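To illustrate the idea of caveats, here is a minimal, hypothetical sketch in the style of macaroon-like tokens: a token is only valid if its signature checks out and every attached caveat holds for the requested operation. The key, caveat syntax and helper names are all illustrative, not the safe-nd implementation.

```python
# Hypothetical sketch (not the safe-nd implementation): a token whose
# caveats restrict what an otherwise-valid token may be used for.
import hashlib
import hmac

SECRET = b"issuer-signing-key"  # stand-in for the issuer's key

def sign(payload: bytes) -> bytes:
    return hmac.new(SECRET, payload, hashlib.sha256).digest()

def make_token(caveats):
    payload = "|".join(caveats).encode()
    return {"caveats": caveats, "sig": sign(payload)}

def is_valid(token, operation):
    # Signature check: the caveats must not have been tampered with.
    payload = "|".join(token["caveats"]).encode()
    if not hmac.compare_digest(sign(payload), token["sig"]):
        return False
    # Caveat check: every caveat must hold for the requested operation.
    for caveat in token["caveats"]:
        key, _, value = caveat.partition("=")
        if key == "op" and value != operation:
            return False
    return True

token = make_token(["op=read"])
print(is_valid(token, "read"))   # True
print(is_valid(token, "write"))  # False
```

The point of the shape is that new restrictions can be appended as caveats without changing the validity check itself.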

Data Types Refinement

RFC, Project plan

The RFC was updated with a simplification of the concurrency control, which now distills the available data types down to three (Blob | Map | Sequence), available in two scopes (Public | Private), for a total of six flavours. You can read about the details here.
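The type/scope matrix above can be sketched as a simple cross product; the names here mirror the RFC wording, not actual safe-nd identifiers.

```python
# Three data types in two scopes give six flavours.
from itertools import product

TYPES = ["Blob", "Map", "Sequence"]
SCOPES = ["Public", "Private"]

flavours = [scope + kind for scope, kind in product(SCOPES, TYPES)]
print(flavours)
print(len(flavours))  # 6
```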

Additionally, a PR was raised to safe-nd last week for the implementation of Sequence, and it now includes the above change as well.

Community member @tfa has identified possible areas of the RFC needing clarification, and we are looking to update it over the coming days to make it clearer which properties and capabilities are retained and which are new.

Data Hierarchy Refinement

RFC

@oetyng has been working on an extension to the Data Types Refinement RFC, which deals with the object hierarchy in the network.

Today community member @jlpell brought the discussion in the Data Types Refinement RFC round to this adjacent topic, which we’ve been working on in parallel. There he lays out a variation of a possible object hierarchy.

Since this coincides with our current work, we’ve now posted the RFC on the forum, for the discussions to continue there. It’s early on in the process but we want to let you in earlier, so head over there and have a look!

RFC summary

The proposal describes a system where all data types are built of chunks; together with decoupled metadata, this gives uniform data handling for all types of data structures. It also solves the sustainability problems of the AppendOnlyData / MutableData types.

All content is built up from chunks. The chunk storage is organised in various types of data structures (also referred to as data types) where the data held in the structure points to a chunk or a set of chunks. The names of the data structures are the following:

Blob, whose structure is just a single blob of data.

Sequence, whose structure is a sequence of data.

Map, whose structure is a set of unique keys and a set of data values, where a key maps to a value.

A Shell instance holds information about a data type, such as the actual structure with the pointer(s) to the chunk(s), name, type tag, ownership and permissions.

In other words, a user can put any type of data in any structure; the chunking of the data works the same way regardless of the data type.
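The chunk-plus-shell idea can be sketched in a few lines, assuming a content-addressed chunk store: content is split into chunks stored by hash, while a shell holds the pointers and metadata regardless of type. This is an illustration, not MaidSafe code, and the field names are invented.

```python
# Illustrative sketch: any content is split into chunks stored by content
# hash, while a "shell" holds the structure's pointers plus metadata.
import hashlib

CHUNK_SIZE = 4  # tiny for demonstration; real chunks are far larger

def store_chunks(data: bytes, chunk_store: dict):
    """Split data into chunks, store each by hash, return the pointer list."""
    pointers = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        name = hashlib.sha256(chunk).hexdigest()
        chunk_store[name] = chunk
        pointers.append(name)
    return pointers

store = {}
shell = {
    "type": "Blob",          # could equally be Map or Sequence
    "type_tag": 0,
    "owner": "user-key",
    "pointers": store_chunks(b"hello safe network", store),
}
# The shell only references chunks; reassembly is just concatenation.
assert b"".join(store[p] for p in shell["pointers"]) == b"hello safe network"
```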

SAFE Network App

Feature Tracker Board

This week we’ve been continuing the focus on the user experience of data permissions, and how they might be applied and managed against specific files, folders, or data ‘labels’.

So, a wee explainer of our thinking and what it’s converging on.

Think about a user having a bunch of data that they own and control under their Account. That could be individual files, or folders containing files and other folders.

Or it could be labels: a way of grouping, organising and pointing to multiple different files and folders that can reside in various locations. A way of, say, creating a ‘smart folder’ which keeps track of all of your audio files.

And on the other side of things, we have apps and other users to which the user might like to expose their data, in order to get things done.

And then we give users tools, the ability to set permissions, to give them control over the exposure of their data, how it can be accessed, to what extent, by whom, and over what duration.

As an aside, it’s worth stressing the point here that unlike on the existing clearnet, using these controls to expose your data to an app isn’t the same as allowing another user access to it. It’s more like giving yourself authorisation to manipulate data using a specific application, which you could then use to allow visibility of that data to eyes other than your own, should you so choose.

So the UX design challenge becomes how exactly we enable users to set these permissions, exerting control over the exposure of individual files.

We have a way of setting the default permissions for a given app, which could be thought of as a series of toggle switches, allowing the app to view, edit, publish, and share data, and specify the duration that these permissions should be effective for.

So one option is if we extend this way of thinking. Users could alter these switches for a given app down at an individual file level, customising the exposure, and duration.

But the problem then becomes parsing and understanding what result a complex sequence of switches will produce.

What happens after I log out in this example? Does the app get back Share and Edit rights, or none at all?

So another way of thinking about this, is akin to manifests, like those used in aviation. A manifest would list out details of the aircraft, the destination, the passengers, cargo, and legal instruments noting ownership of the contents of the plane. No manifest, no flight.

If an app or another user needs access to some data, they require a manifest, listing their permissions, and their duration, together with other handy metadata.

This manifest could be created by default from global permissions, or granted explicitly, but it can be amended or deleted entirely by the user. No manifest, no dice.
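As a rough sketch of the data shape a manifest might take — who gets access, which permissions, and for how long — here is a hypothetical model. The field names and helpers are illustrative only; nothing here is a MaidSafe API.

```python
# Hypothetical "manifest" model: a grant of permissions to an app or user,
# with an expiry. No manifest, no dice.
from datetime import datetime, timedelta, timezone

def make_manifest(grantee, permissions, duration_hours):
    return {
        "grantee": grantee,
        "permissions": set(permissions),
        "expires": datetime.now(timezone.utc) + timedelta(hours=duration_hours),
    }

def allowed(manifest, action):
    # Access requires a current manifest that grants the action.
    if manifest is None:
        return False
    if datetime.now(timezone.utc) >= manifest["expires"]:
        return False
    return action in manifest["permissions"]

m = make_manifest("photo-app", ["view", "edit"], duration_hours=24)
print(allowed(m, "view"))     # True
print(allowed(m, "publish"))  # False

# Amending the manifest: remove edit, add share.
m["permissions"].discard("edit")
m["permissions"].add("share")
print(allowed(m, "share"))    # True
```

Deleting the manifest (setting it to None here) revokes everything at once, which is what makes the model easier to reason about than a sequence of toggle switches.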

It’s a subtle change, but it helped refine the thinking for how to approach the experience in general.

So, let’s take a look at some UI examples we’re putting together for the SAFE Network App.

Here we have a file detail page, and at the bottom we can see a sortable, filterable feed of all the apps and people that have some level of access to lions.jpg.

They all have a valid manifest.

I can go in and look at the detail of any one of these:

And I can go in and edit that manifest, removing the edit permission for example, adding in the share permission, choosing a duration for kicks.

That bottom control might look quite simple, and with the permission sets we have for the MVE it may well be; but moving on from there, it could be expanded to include more nuanced ‘recipes’ etc., for more specific control.

So I could see these manifests at a file, folder or label level, but of course I could also access them via an app view as well.

I might also get a request from an app (or another user) for a set of permissions. I’d get to preview the manifest, and edit it if needs be, before accepting it, or rejecting it outright.

This might also include metadata from the app developer, explaining why they need the permissions, aiding the user in their decision.

These requests, and manifests in general, could also potentially be managed via a global feed too, further aiding their findability, and giving the user a better overview of where they stand with their data, especially when things get a little more fine-grained. One to explore more as we get deeper in.

So there you have it, an overview of the work we’ve been doing on individual file and data permissions.

Shout out to @m3data on this, who’s doing some great work out there on the clearnet with similar problems of consumer consent, and whose inspiring blogs and papers were very helpful to absorb in the midst of all this.

SAFE Browser / SAFE Network App (desktop)

We’ve started a wee round of updates for both of these apps. We have made full releases of the previous betas for both (though with no new features). Meanwhile, we’re working on updating dependencies across the board and getting things ready for the next batch of safe-api updates. For the browser, there aren’t too many exciting things here (beyond the obvious excitement of an improved API).

The SAFE Network App will be getting a bit more polish, with a wee feature to focus the app when it receives requests (simple, yet notable by its absence: folk weren’t noticing requests until it was too late). We’re also looking at how safe-authd is set up by the app, and how best this can work for power users who may already have safe-cli and safe-authd installed.

As we’re getting towards a standardised install location for safe-cli, we’re settling on an approach that will see the SAFE Network App managing the CLI like any other desktop app (you’d still be able to manage it independently of the SAFE Network App if you want), detecting its install, or prompting to install it if it’s missing.

SAFE Browser / SAFE Authenticator (mobile)

We updated the README files of both repos and removed any outdated or alpha-2 related content. The authenticator repo’s README was modified to include an installation guide for users and the AppCenter download links and QR codes.

The CI setups of both repos were also updated to further simplify the automated build and release process. Both repos now support the auto-creation of GitHub releases on a version change PR along with creating new releases on AppCenter.

To wrap up the work of the past couple of weeks, we have started testing both apps for a release and, in parallel, are fixing any bugs we come across during this testing.

SAFE App C#

Project plan

For the last year, we have been using the Azure DevOps CI service for safe_app_csharp builds and to run the tests on all supported platforms. We were using a UI-based setup, which occasionally proved difficult to update. Since we are using GitHub Actions for most of our repos and have become more comfortable with YAML-based CI configuration, we have updated the repo to use a YAML-based pipeline. This will make it easy for anyone to understand and modify the CI as needed.

We have re-enabled code coverage publishing to Coveralls, which had been disabled for the last couple of months while the new APIs were not stable enough. Some other changes were made to fix the documentation generation issue for the package, though we still have to enable auto-publishing of the docs to GitHub Pages, which we will be working on next week.

Safecoin Farming

RFC

This week we made some Sequence Diagrams for Safecoin transfers and continued iterating on Safecoin and Farming design discussions.

In particular, there has been some discussion/excitement around using conflict-free replicated data types (CRDT) for improved resilience to network errors, latency, out-of-order issues, etc. As such, work is beginning to model each CoinBalance as a CRDT with strictly additive credit/debit columns and periodic snapshots.
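The modelling idea can be sketched as a PN-counter-style CRDT: each replica keeps strictly additive credit and debit tallies, and merging takes the element-wise maximum, so out-of-order or repeated deliveries converge. This is only an illustration of the technique, not the actual Safecoin design.

```python
# Minimal PN-counter sketch of a CoinBalance CRDT with additive
# credit/debit columns. Merge is commutative, associative, idempotent.

class CoinBalance:
    def __init__(self):
        # Per-replica grow-only tallies; entries only ever increase.
        self.credits = {}
        self.debits = {}

    def credit(self, replica, amount):
        self.credits[replica] = self.credits.get(replica, 0) + amount

    def debit(self, replica, amount):
        self.debits[replica] = self.debits.get(replica, 0) + amount

    def balance(self):
        return sum(self.credits.values()) - sum(self.debits.values())

    def merge(self, other):
        # Element-wise max is what makes repeated merges harmless.
        for col, other_col in ((self.credits, other.credits),
                               (self.debits, other.debits)):
            for replica, value in other_col.items():
                col[replica] = max(col.get(replica, 0), value)

a, b = CoinBalance(), CoinBalance()
a.credit("a", 10)
b.debit("b", 3)
a.merge(b)
b.merge(a)
assert a.balance() == b.balance() == 7
```

Periodic snapshots (mentioned above) would then compact the grow-only columns without changing the converged balance.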

There has also been a brief, impromptu discussion around the idea of using Blinded Digital Bearer Certificates on the SAFE Network. Blinded Digital Bearer Certificates (DBCs) were first proposed by David Chaum in 1983 as a form of digital cash that provides total unlinkability between payments and can even be transferred offline. Not even the issuing authority knows who a given certificate was issued to. Certificates can be combined to form a new certificate of a larger amount, or split to make smaller payments. Nick Szabo has a nice DBC writeup in his 1997 Contracts With Bearer paper. However, DBCs have always relied on a centralised, single-point-of-failure server, which we can avoid thanks to group consensus (section Elders).

In 2016, theymos (of Bitcointalk fame) made a post describing how DBCs could be used with a decentralized system such as Bitcoin. As such, it is an interesting thought experiment to consider if/how it could work in the SAFE Network either as a replacement for Safecoin, or as a complementary bearer/offline/mixing token. DBCs would have stronger privacy guarantees than mixed coins such as Monero or Zcash, and could level-up Safecoin’s privacy if adopted for that purpose. For now, it is only a fun thought experiment and no dev work has been planned. Community analysis, brainstorming welcome!

Node Ageing

Project plan

We progressed through the remaining parts of Elder promotion/demotion by addressing some failures. We also made significant progress toward closing another item to ensure a Relocating node can verify that the new section is genuine.

Additionally, we cleaned up the code further. In particular, we now avoid unnecessary deserialisation and serialisation of messages at each hop (relay nodes) for Reliable Message Delivery (RMD). We also improved trust verification in Secure Message Delivery (SMD): removing the trusted key from the message ensures that the receiving node already held and trusted that key, and it also avoids sending an unnecessary key over the wire.

Useful Links

Feel free to send us translations of this Dev Update and we’ll list them here:

As an open source project, we’re always looking for feedback, comments and community contributions - so don’t be shy, join in and let’s create the SAFE Network together