Decentralized Apps (a.k.a. Dapps) are something we’re very familiar with at Theorem. They have been a strong research theme for us over the last year and something we’re starting to write more and more about.

Learning about decentralized technologies has not always been easy, though, and there are several key concepts one must understand before being able to build Dapps. In this post we’ll go over some of the concepts we found most useful to know about.

Dapps

Currently the vast majority of web software applications follow a centralized server-client model. These apps run their backend code on centralized servers; in contrast, Dapps run their backend code on a decentralized peer-to-peer network. You can read more about the details of how Dapps work in our previous post State of Frontend development with IPFS in 2017.

Decentralized data distribution

IPFS aims to replace HTTP

Why @ IPFS

IPFS is a peer-to-peer hypermedia protocol, and its creators aim to replace HTTP by making it easier and more efficient to deliver high volumes of data.

In the current model, when a system requests a piece of information it uses DNS to find the centralized server that holds that specific piece of data. With IPFS, data does not have to come from one central server: there is no need to fetch a file from a server potentially far away if a nearby node has the file stored locally and can distribute it.

Groups of connected peers are referred to as swarms. When a file is available to one peer in a swarm, it is available to the rest of the swarm. IPFS’s tools aim to make it possible to connect all computing devices so that they share the same system of files. Using IPFS and its tools, groups of connected nodes can upload files and distribute them to the rest of the nodes efficiently.

In a nutshell, using IPFS a peer can upload a file to the network and any other peer will have access to it. When a peer requests such a file, it is distributed using peer-to-peer tools.

Shared Resources

Peer-to-peer file sharing protocols allow data to be requested from more than one peer at a time. They do this by splitting the data into smaller chunks, so that a peer asking for some information can request the smaller parts from several places and put them together once all of them have arrived. One strategy in these protocols requires peers to hold on to chunks of information even after the request for that data has been fulfilled. This means the peer uses some of its available disk space to store the information, sharing its storage resources back with the swarm.
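The splitting-and-reassembly step can be sketched in a few lines. This is a deliberately minimal illustration with fixed-size chunks and an explicit index per chunk; real protocols organize chunks in more elaborate structures (IPFS, for instance, uses a Merkle DAG), but the core idea is the same: parts can arrive from different peers in any order.

```javascript
// Split data into fixed-size chunks so that different peers can serve
// different parts of the same file in parallel.
function splitIntoChunks(buffer, chunkSize) {
  const chunks = [];
  for (let offset = 0; offset < buffer.length; offset += chunkSize) {
    chunks.push(buffer.subarray(offset, offset + chunkSize));
  }
  return chunks;
}

// Reassemble once every chunk has arrived, regardless of which peer
// delivered which part or in what order the parts came in.
function reassemble(indexedChunks) {
  return Buffer.concat(
    indexedChunks
      .slice() // don't mutate the caller's array
      .sort((a, b) => a.index - b.index)
      .map((c) => c.chunk)
  );
}
```

A requester can hand each index to a different peer, collect `{ index, chunk }` pairs as they arrive, and call `reassemble` once the set is complete.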

When a node asks for some data, the peer holding that piece of information distributes it, using up some of its available bandwidth. In this case, the peer is sharing its bandwidth resource with the swarm.

This behaviour describes a group of nodes that share resources to benefit the group itself. The swarm of interconnected nodes shares its independent resources back with the rest of its members. Each node in a decentralized app shares disk space and bandwidth with the network it connects to. Users can upload data to the network, and every other member can replicate that data by downloading and storing it locally, helping redistribute it to nodes that later join the network and request it.
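The replication dynamic described above can be modeled with a toy swarm: every successful fetch leaves a replica behind, so data spreads through the network as it is requested. The `Peer` and `Swarm` classes below are illustrative names, not part of any real IPFS API.

```javascript
// Each peer that downloads a piece of data keeps a local replica and
// can serve it to later requesters, so demand spreads the data through
// the swarm instead of concentrating load on the original uploader.
class Peer {
  constructor() {
    this.storage = new Map(); // id -> data (the peer's shared disk space)
  }
}

class Swarm {
  constructor() {
    this.peers = new Set();
  }

  join(peer) {
    this.peers.add(peer);
  }

  // Ask the swarm for `id`; any peer holding it answers, and the
  // requester stores a replica so it can serve future requests itself.
  fetch(requester, id) {
    for (const peer of this.peers) {
      if (peer !== requester && peer.storage.has(id)) {
        const data = peer.storage.get(id);
        requester.storage.set(id, data); // replicate locally
        return data;
      }
    }
    return undefined; // no peer currently holds this data
  }
}
```

Note the consequence: once at least one replica exists, the original uploader can disappear and the data remains available to newcomers.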

By leveraging the independent resources in the swarm of interconnected computers, it is possible to distribute tasks amongst them. The computational power of the whole network can be used for tasks that require the extra power. Picture the swarm acting like a botnet: all nodes can receive commands in real time and execute them (an appropriate layer of command hierarchy should be implemented to avoid misuse).

A strategy to maintain trust needs to be implemented, since in a distributed swarm there is no control over the independent nodes, and they may simply not perform the tasks they are given.

Interconnected nodes in a swarm may need to maintain a shared data structure amongst them. The nodes may also need to modify the data by adding, removing or updating information. If such changes occur, the whole swarm needs to be notified so that every node holds the most recent version of the data structure. Conflict-free Replicated Data Types (CRDTs) solve these problems.

CRDTs also normalize the data state when different nodes modify the data at the same time. Changes can occur independently and then be processed serially, so that each node reaches the same current state of the data.
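To make the convergence property concrete, here is one of the simplest CRDTs, a grow-only counter (G-Counter). Each node increments only its own slot, and merging takes the per-slot maximum; because that merge is commutative, associative and idempotent, nodes that apply concurrent updates in any order end up with the same value.

```javascript
// G-Counter: a grow-only counter CRDT. Each node increments only its
// own slot; merge takes the per-slot maximum, so merges commute and
// every replica converges to the same total.
class GCounter {
  constructor(nodeId) {
    this.nodeId = nodeId;
    this.counts = {}; // nodeId -> count contributed by that node
  }

  increment() {
    this.counts[this.nodeId] = (this.counts[this.nodeId] || 0) + 1;
  }

  // The counter's value is the sum of every node's contribution.
  value() {
    return Object.values(this.counts).reduce((sum, n) => sum + n, 0);
  }

  // Merge another replica's state: element-wise maximum. Applying the
  // same merge twice changes nothing (idempotence).
  merge(other) {
    for (const [id, count] of Object.entries(other.counts)) {
      this.counts[id] = Math.max(this.counts[id] || 0, count);
    }
  }
}
```

Two replicas that increment independently and then exchange state both arrive at the combined total, with no coordination and no conflict to resolve.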

There are different types of CRDTs, each following different strategies. Summary of CRDTs provides a good overview of some of the implementation details of the different types.

CRDTs can be used in backend or frontend apps, or combinations of both. These structures normalize data between the different apps regardless of whether the app’s code runs in a browser or on a server. In a decentralized network, nodes can be of either type as well; it doesn’t really matter if they’re frontend or backend apps. Nodes in a decentralized network can use CRDTs to sync data structures, receive updates and keep shared data normalized.

One possible solution to work with IPFS is YJS. It can help IPFS swarms share a data structure with different data types (YJS custom types). There are other tools as well, like gun and IPFS log, each with its own level of maturity, community and ongoing development.

Privacy & Authentication

Federated identity hit a wall

Drummond Reed @ Blockchain Identity

In a decentralized system with no central authority, how can you trust that someone is who they say they are? And if you cannot verify a node’s identity, how can you trust the operations that node has executed?

The identity systems we currently use rely on centralized servers, and these provide central points of attack. Even worse, these centralized systems can, if they choose to, impersonate any of the users they provide identity for.

Who owns such identities? Who owns the reputation associated with such identities?

The way current systems work, users cannot retrieve their identities and reputations from the identity providers and use them elsewhere.

The aim of decentralized identity is to come up with a solution that moves all “identity” stored with external parties (which hold the trust associated with it) to a system where the “identity” is stored with the user.

Is a global solution good enough? Can there be a big enough solution for these problems? Or is it only going to create another identity silo? The Identity Foundation and Sovrin aim at creating such a system.

We came up with a layer that can be used today with current centralized identity systems, and that could later be integrated with a fully decentralized identity system: private decentralized swarms.

When a node starts, it reaches out to several signaling servers. These servers help the node find and connect to other nodes. By removing the default IPFS signaling server configuration, a node will not be able to connect to any other node. Then, by configuring the node to reach out to a custom signaling server, we can control which nodes it connects to. This way we can achieve a private swarm of IPFS nodes that still includes all the IPFS protocols and tools. Any type of authentication layer can be integrated into the signaling server (SSH keys, JWT tokens, HTTP credentials, etc.) to control which nodes can connect to it. This last step does not provide a decentralized identity layer, but it can work out of the box with SSO (Single Sign-On) services.
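As a rough illustration, a private-swarm setup along these lines amounts to a configuration like the following. The shape mirrors the js-ipfs config options, but the signaling address is a placeholder for your own server, and the exact option names may differ between IPFS versions, so treat this as a sketch rather than copy-paste configuration.

```javascript
// Sketch of an IPFS node configuration for a private swarm: the
// default signaling servers and bootstrap nodes are removed and
// replaced with a single custom signaling server we control.
// `signal.example.com` is a placeholder address, not a real service.
const privateSwarmConfig = {
  config: {
    Addresses: {
      // Only our own signaling server is listed. Without the defaults,
      // the node can only discover peers that use this same server —
      // which is exactly what makes the swarm private.
      Swarm: [
        '/dns4/signal.example.com/tcp/443/wss/p2p-webrtc-star'
      ]
    },
    // Do not connect to the public IPFS bootstrap nodes.
    Bootstrap: []
  }
};
```

Authentication (SSH keys, JWTs, HTTP credentials) then lives on the signaling server itself: a node that cannot authenticate never learns about any peers, so it can never join the swarm.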

Next steps

Now that you’ve read through the key concepts for decentralized apps, go build some, then come back and tell us about it. We are eager to learn what systems can be built with distributed protocols and what other problems they will solve.