Elastos In A Nutshell: Carrier Network Part 3 (of 3)

What is a Peer-to-Peer Network

Part 1 provided a general overview of Networking, the Internet, and its most popular application — the World Wide Web. It then outlined some of the major flaws inherent in the design of the internet, which led to the ubiquitous use of the client-server model, centralizing the internet and introducing new security risks. Fortunately, there is another networking architecture called a Peer-to-Peer network that offers an alternative to the problematic client-server model.

A Peer-to-Peer (P2P) network is a group of nodes that are linked together in a manner where the permissions and responsibilities for processing data are equal among all nodes. In a P2P network, there are no special nodes that are mandated the exclusive role of a server; this is the fundamental difference between a Client-Server (C-S) Network Model and a Peer-To-Peer Network Model. In general, each connected machine in a P2P Network has the same rights as its peers, and can be used for the same purposes.

Benefits of Peer-to-Peer (P2P) Networks

Every P2P network is implemented differently, but each provides the same general advantages. Let’s briefly discuss the benefits of the P2P Network Model.

Fault Tolerance – Because all nodes are equal to one another in a Peer-to-Peer Network, when one node goes down, it is easy to utilize another node. This makes P2P networks tolerant to faults within the network, as there are many redundancies that prevent interruptions when a few nodes become faulty. In other words, there are no central points of failure in a P2P network. In a Client-Server Network Model, when the server goes down, the entire network is no longer able to function.

Traffic Distribution – Since Internet traffic runs through many paths instead of funneling through a few servers, if one route becomes bogged down, the network can easily redistribute the traffic load to nodes that are less congested. In the Client-Server model, all traffic runs through a few servers, which can quickly become congested during times of high demand. For clients, no matter how fast their Internet connection, speed is limited by the servers from which they are requesting data, which can become bottlenecks in the network.

Faster Downloads – Because any node within a P2P network can act as a server, it is possible to have faster and more efficient downloads, especially as it relates to Content Distribution Networks (CDNs). Simultaneously downloading data from many nodes within a P2P network provides advantages in certain scenarios when compared to all nodes downloading data from one massive server.

Redistribution of Power – Those who own the servers command all the power in a C-S model. This is not the case in a P2P network, where power is typically distributed evenly between nodes, and each node follows the rules of the network. No peer can override its privileges, and there is no central authority dictating who can or cannot gain access to requested data. This makes P2P networks censorship resistant and privacy friendly, and also prevents data hoarding among a few entities.

Cons of Peer-to-Peer (P2P) Networks

P2P networks also have some negative aspects. We will explain later how the Elastos Carrier remedies these issues, which are in large part what hold back P2P networks from more widespread use.

Illegal Activity – Because there is no central authority, no one is able to prevent the transmission and sharing of illegal materials. This lack of oversight has led many black market practitioners to utilize these sorts of “dark web” P2P networks to conduct illegal activities. Of course, this is an unintended attribute of the system design. In contrast, if there is a central monitoring authority, it has the power to censor and invade the privacy of participants as well.

Backing Up Data – With no central server, each node in the network is responsible for backing up its own files and data.

Spread of Malware – In a P2P network, files are often transferred between two non-trusting peers. This presents an opportunity for hackers to spread malware, as there is no central monitoring authority.

Now that you have a general understanding of what a P2P network is, and an idea of the benefits of such networks, we can move on to talk about the specific implementation used by Elastos Carrier.

Tox Protocol

Elastos Carrier builds upon the Tox communications protocol. In this article, we will take a dive into the Tox protocol, and subsequently learn how Elastos Carrier differs from and improves upon Tox.

Hash Tables and Distributed Hash Tables

At the core of Tox and Elastos Carrier sits what is called a Distributed Hash Table (DHT). These structures have achieved widespread use among P2P networks, including BitTorrent, the YaCy search engine, and many content distribution networks. To understand how a DHT works, it is best to first learn about regular Hash Tables.

Hash Table

A hash table is essentially a database where each entry is a (Key, Value) pair. Each value stored in the hash table has an associated key which is used to retrieve the value. However, instead of just storing the actual keys with their associated values, a hash table stores the hash of each key and maps it to the value, thus earning its name. Hashing the keys makes searching for values more efficient, as hashes can be used as indexes. Anyone with a key can query the hash table, which will return the associated value.

Take the above figure as an example. Here, a phone book database is implemented as a hash table where the names are the keys and the associated phone numbers are the values. For example, in order to search for John Smith’s phone number, a user must input his name into the hash function, which then gives the index (in this case, “02”) needed to retrieve his phone number. This is relatively simple to implement in a centralized manner, as you can store the entire hash table in one server. Anyone needing to retrieve a value from a certain key can simply query a single static server.
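The phone book example can be sketched as a miniature hash table in Python. The names and numbers below are illustrative stand-ins, not values taken from the figure:

```python
import hashlib

class PhoneBook:
    """A tiny hash table: each key is hashed to a bucket index,
    and (key, value) pairs are stored in that bucket."""

    def __init__(self, num_buckets=16):
        self.buckets = [[] for _ in range(num_buckets)]

    def _index(self, key):
        # Hash the key, then reduce the hash to a bucket index.
        digest = hashlib.sha256(key.encode()).digest()
        return digest[0] % len(self.buckets)

    def put(self, key, value):
        bucket = self.buckets[self._index(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:                 # overwrite an existing entry
                bucket[i] = (key, value)
                return
        bucket.append((key, value))      # chaining handles collisions

    def get(self, key):
        for k, v in self.buckets[self._index(key)]:
            if k == key:
                return v
        return None

book = PhoneBook()
book.put("John Smith", "521-8976")
book.put("Lisa Smith", "521-1234")
print(book.get("John Smith"))  # -> 521-8976
```

Note that the caller never sees the bucket index; hashing is purely an internal detail that makes lookups fast.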

Distributed Hash Table

A Distributed Hash Table (DHT) aims to achieve the same functionality, but in a decentralized manner. The aim is to give every participant the ability to store (Key, Value) pairs, and query the DHT for these values without using any servers. If there are no central servers, then where is the DHT stored? As you might have guessed, the DHT is distributed throughout the network’s population of nodes. However, it is not feasible for each node to store and continuously update the entire DHT. This would require way too much space for nodes that are mostly end users, rather than large, dedicated servers. Instead, each node is responsible for a smaller subset of (Key, Value) pairs. Again, this is why it is called a distributed hash table, as the entries are spread across many different nodes.

Since there is no single central server containing the whole DHT, nodes need a reliable method for finding which peers contain the (Key, Value) pairs they are searching for. This means DHTs need routing algorithms to help nodes connect with relevant peers. To facilitate this connection, nodes in the DHT store a subset of node contact information in a routing table. Usually, this contact information comprises a port number and IP address.

Since nodes are constantly joining and exiting the network, routing tables change continually and a DHT must be able to self-update in an efficient manner. As such, a DHT can be likened to a self-organizing body of nodes which constantly update the information they have about the network [1]. Because DHTs provide a serverless hash table with P2P routing, they are most often used in P2P applications. As it relates to Tox, the main use of a DHT is for the routing algorithm, which allows peers to securely establish connections between one another without the need for a central server.
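As a toy illustration of this partitioning, the sketch below assigns each key to the node whose ID is “closest” to the key’s hash, compressed into an 8-bit ID space with hypothetical node IDs. Real DHTs use much larger ID spaces, but the assignment principle is the same:

```python
import hashlib

def node_for_key(key, node_ids):
    """Return the node responsible for `key`: the one whose NodeID is
    'closest' to the key's hash. Closeness here is XOR distance, as
    Kademlia uses. Toy 8-bit ID space for illustration only."""
    key_id = hashlib.sha256(key.encode()).digest()[0]  # reduce hash to 8 bits
    return min(node_ids, key=lambda n: n ^ key_id)

# Four hypothetical node IDs; each ends up owning a slice of the key space.
nodes = [0x12, 0x47, 0x9C, 0xE3]
for k in ["alice", "bob", "carol"]:
    print(k, "->", hex(node_for_key(k, nodes)))
```

Because every participant computes the same deterministic assignment, any node can work out who *should* hold a given (Key, Value) pair without consulting a server.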

Kademlia DHT

Tox is based on the Kademlia DHT implementation. To appreciate Tox and Carrier, it is necessary to understand Kademlia.

Kademlia – Layman’s Definition: The protocol defines a notion of “distance” that gives nodes a means by which to measure how closely connected two NodeIDs are. Each node stores a subset of node information in its own routing table, much like a contact list. When a node wants to get the contact information of a node that it does not directly store in its contact list, it asks the nodes in its routing table which are “closest” to the target NodeID if they have information about the node in question. If none do, they will return information of the 8 nodes in their respective contact lists that are “closest” to the target NodeID.

Thus, the process is iterative, and designed in such a way as to quickly converge on the target NodeID without requiring individual nodes to store too much information about others. Each node’s routing table self-updates in a way that gives it a greater density of information about nodes closer to its own NodeID. This allows a search to rapidly converge on nodes closer and closer to the target NodeID. Kademlia is mathematically proven to be highly efficient, which has led to its popular use in peer-to-peer (P2P) applications. For instance, the BitTorrent DHT is based on Kademlia.

Kademlia – In-Depth Explanation

Terminology:

DHT node – Anyone or anything assigned a NodeID in the network.

Peer – Any node other than the current node under discussion.

NodeID – Each node is assigned a temporary public/private key pair and a long-term public/private key pair. Each time a node initializes the Tox client, it is assigned a new, temporary DHT key pair. This temporary DHT public key is called the NodeID. The long term public key is essentially the ToxID – much like a username – while the long term private key is essentially the password. As it relates to Tox and Elastos Carrier, long term public/private key pairs can be issued by the DID sidechain.

NodeID (DHT Key Pair) VS. Long Term Key Pair

The first thing to note is that the Tox DHT is public. Each NodeID is publicly associated with an IP address and UDP port number. For reasons of anonymity, censorship resistance, and anti-tracking, individuals’ long term internet identities should not be publicly associated with their IP addresses. This is one of the main reasons why there is a temporary key pair in addition to a long term key pair. The long term key pair is basically a username and password which can be associated with a user’s real identity if he or she so chooses. Tox aims to prevent non-friends from being able to associate a user’s long term key with his/her IP address.

This is where the NodeID comes in: non-friends can only associate an IP address with a temporary NodeID. An eavesdropper might track a particular NodeID, but it would have no idea who is associated with that NodeID. Furthermore, nodes are assigned new NodeIDs each time they reopen a Tox client. This makes it very difficult to track with whom a particular person is connecting, and therefore to draw any inferences that might interest a malicious entity.

The second major reason Tox uses temporary DHT key pairs is for something called Forward Secrecy. Forward Secrecy means that if a user’s keys are compromised, it will not compromise the encryption of previous communications. To demonstrate, pretend that there is only one key pair, which is the user’s long term key pair. If this key pair is compromised at any point, a malicious entity can decrypt all previous messages that it may have intercepted and stored at a past time. Using a temporary DHT key to encrypt packets prevents this. If a user’s long term key is compromised, a malicious entity is not able to decrypt previously sent packets, as they would have been encrypted using a temporary session key. Even if a session key is compromised, the malicious entity would only be able to decrypt messages sent during that particular session. All other sessions would have different associated session keys.
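The session-key property can be demonstrated with a toy sketch. Real Tox derives session keys via ephemeral Diffie-Hellman key exchange (through the NaCl library); the hash-based stream cipher below is a stand-in for illustration only, not real cryptography:

```python
import os
import hashlib

def keystream(key, n):
    """Derive n pseudo-random bytes from a key (toy stream cipher,
    NOT real cryptography -- for illustration only)."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key, data):
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

decrypt = encrypt  # XOR stream ciphers are their own inverse

long_term_key = os.urandom(32)   # stays the same across sessions
session1_key = os.urandom(32)    # fresh ephemeral key for session 1
session2_key = os.urandom(32)    # fresh ephemeral key for session 2

msg = b"hello from session 1"
ct = encrypt(session1_key, msg)

# Compromising the long-term key (or session 2's key) does not help
# an attacker decrypt session 1's recorded traffic:
assert decrypt(session1_key, ct) == msg
assert decrypt(long_term_key, ct) != msg
assert decrypt(session2_key, ct) != msg
```

Each session's traffic is bound only to that session's ephemeral key, which is exactly the forward secrecy property described above.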

Distance Metric

Before we dig any deeper, the notion of distance in a DHT needs explaining. Crucial to the Kademlia DHT is the concept of “closeness” between two different NodeIDs. For some context, remember that DHT nodes only store a small subset of the total node info. So if a node does not store the information associated with the peer they are trying to connect with, how do they obtain that information? This is where the routing algorithm becomes important, which varies based on the implementation of the DHT.

The Kademlia DHT implements an iterative lookup algorithm wherein a node essentially plays the childhood game of “warmer-colder” to find a target NodeID. A node will ask some of the peers in its routing table if they have information about the target NodeID they are looking for. If they don’t, these peers will return the NodeIDs of peers which are “closer” to the target NodeID. The details of the routing algorithm are left out of this article but just know that there is a way to measure how “close” two NodeIDs are from each other, and it is not by geographical distance.
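Although the details are left out of the article, standard Kademlia defines the distance between two NodeIDs as their bitwise XOR interpreted as an integer. A small sketch (real NodeIDs are 256-bit keys; small ints stand in here):

```python
def xor_distance(a, b):
    """Kademlia distance between two NodeIDs: their bitwise XOR,
    read as an unsigned integer."""
    return a ^ b

# The metric is symmetric, and zero only between identical IDs:
assert xor_distance(0b1010, 0b1010) == 0
assert xor_distance(0b1010, 0b0010) == xor_distance(0b0010, 0b1010)

# "Closest" peers to a target are found by sorting on XOR distance:
peers = [0b0001, 0b0111, 0b1000, 0b1110]
target = 0b1010
closest = sorted(peers, key=lambda p: xor_distance(p, target))
print([bin(p) for p in closest])
```

Note that XOR distance has nothing to do with geography: two NodeIDs on opposite sides of the world can be “close,” which is what makes the metric resistant to targeted partitioning.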

Routing Algorithm

Now, it is important to explain the DHT’s main role in the Tox protocol: routing between peers.

To start, let’s see what information is associated with each node in the network. Every DHT node has the following state:

DHT Key Pair: Each node is assigned a temporary DHT key pair upon opening the Tox client. This key pair is used to communicate with other DHT nodes, and it is constant throughout the lifetime of the DHT node – from opening to closing the client. The public DHT key represents the NodeID. Each time a user opens the Tox client, it is assigned a new DHT key pair.

Node Info: The data publicly associated with each node is (NodeID, IP Address, UDP port).

DHT “Close” List: Each node stores a routing table that contains information about peers that are “close” to the node’s own base key (NodeID). The information stored about each node on the list is (NodeID, IP Address, UDP port). The Close List is represented as an array of “k-bucket” data structures.

DHT Friends List: Each node stores a list containing the current NodeIDs of all of its online friends.

DHT “Close” to Friend’s NodeID List: Each node stores a list containing the (IP Address, UDP port, NodeID) of the 8 “closest” peers to each of its friends’ NodeIDs. This allows friends to quickly locate one another.

The purpose of the DHT in Tox is to facilitate direct connections between whitelisted ToxIDs. The IP address and UDP port number serve as sufficient information to make a direct connection between any two peers. Instead of using servers to store all the IP addresses and UDP ports associated with each NodeID, each node in the DHT stores a routing table that contains a subset of the total node information. Let’s see what these routing tables look like.

Each node creates a routing table where it stores the IP Address, UDP port, and NodeID for each peer in the table. Each node’s routing table stores this information in 256 different “k-bucket” data structures. For the purpose of this writing, each k-bucket can be considered a sub-table that contains up to a maximum of 8 entries. Each bucket stores the information of NodeIDs at various distances to the NodeID of the routing table owner.

Since there are 256 buckets, with a maximum of 8 NodeIDs in each bucket, each node can store a maximum of (8*256)=2,048 entries. Since routing tables are reinitialized after each restart of the Tox client, and nodes are constantly entering and leaving the network, routing tables are rarely full.
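Under the standard Kademlia scheme, the bucket a peer lands in is determined by the position of the highest bit at which its NodeID differs from the table owner’s (equivalently, the length of their shared prefix). A sketch with toy 8-bit IDs (real NodeIDs are 256-bit, hence 256 buckets):

```python
def bucket_index(self_id, peer_id, id_bits=256):
    """Which k-bucket a peer belongs in: decided by the highest bit at
    which the two NodeIDs differ. Peers sharing a longer prefix with
    us land in higher-numbered buckets."""
    distance = self_id ^ peer_id
    if distance == 0:
        raise ValueError("a node does not store itself")
    # bit_length() - 1 is the index of the highest set bit of the XOR
    # distance; subtracting from id_bits gives the shared prefix length.
    return id_bits - distance.bit_length()

# With 8-bit IDs there are 8 buckets:
assert bucket_index(0b10110100, 0b00110100, id_bits=8) == 0  # differ at the top bit
assert bucket_index(0b10110100, 0b10110101, id_bits=8) == 7  # differ only at the last bit
```

Each bucket is then capped at k = 8 entries, which is how the 8*256 limit above arises: one bucket per possible prefix length, at most 8 peers per bucket.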

Now that we know how the routing table is structured, we can move onto the routing algorithm of the Tox DHT. The routing algorithm determines how nodes find other peers which they do not store in their own routing table. The FIND_NODE request and subsequent responses make up the majority of the routing algorithm. The input of the FIND_NODE request is the NodeID being searched for. The response to a FIND_NODE request contains the NodeIDs closest to the target NodeID.

The FIND_NODE request works as follows:

1. When a node wants to connect with another node in the DHT, it will search its routing table for the k-bucket with the longest matching prefix of the target NodeID.
2. The node will then send three FIND_NODE requests to the nodes in that k-bucket.
3. The nodes receiving these requests will return to the requester the 8 “closest” NodeIDs to the target NodeID that each has in its respective routing table.
4. The requester will then update its routing table with the information of up to 24 new nodes gathered from the FIND_NODE responses.
5. Subsequently, the requester will select from among the 24 nodes returned in the FIND_NODE responses the NodeIDs “closest” to the target NodeID, and then send three more FIND_NODE requests to these nodes.
6. This iterative process continues until a node has the target NodeID in its routing table, in which case the contact information for the target NodeID will be returned, and the process terminates.

With the way k-buckets are populated and updated, nodes have more information about the nodes “closest” to them. As a result, FIND_NODE requests naturally converge on a given NodeID, as after each iteration the FIND_NODE requests are sent to nodes with more information about nodes closer to the target NodeID.
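The iterative convergence can be simulated in miniature. The sketch below uses plain integers as NodeIDs and a dict as the “network”; a real implementation sends UDP packets, handles timeouts, and returns (IP, port) contact info rather than the bare ID:

```python
def find_node(start_peers, target, routing_tables, k=8, alpha=3):
    """Toy iterative FIND_NODE lookup over an in-memory 'network'.
    routing_tables maps each NodeID (an int) to the set of NodeIDs it
    knows about. Returns the target NodeID once some queried peer
    knows it, or None if the lookup cannot converge."""
    known = set(start_peers)
    queried = set()
    while True:
        # Query the alpha closest not-yet-queried peers to the target
        # (Kademlia measures closeness by XOR distance).
        candidates = sorted(known - queried, key=lambda n: n ^ target)[:alpha]
        if not candidates:
            return None
        for peer in candidates:
            queried.add(peer)
            if peer == target or target in routing_tables.get(peer, set()):
                return target
            # The peer responds with the k closest NodeIDs it knows.
            closest = sorted(routing_tables.get(peer, set()),
                             key=lambda n: n ^ target)[:k]
            known.update(closest)

# A hypothetical 16-node network where each node knows only 3 peers.
nodes = list(range(16))
routing_tables = {n: {(n + 1) % 16, (n + 2) % 16, (n + 5) % 16} for n in nodes}
print(find_node([0], 11, routing_tables))  # -> 11
```

Even though node 0 starts with only three contacts, each round of responses pulls the search closer to the target, mirroring the “warmer-colder” behavior described above.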

FIND_NODE Summary: The net result is that the input to the FIND_NODE request is a NodeID, and the return is the IP Address and UDP port of the inputted target NodeID. The entire process takes place peer to peer – that is, without any servers. If not for peer-to-peer routing, centralized servers would be needed to facilitate this type of lookup.

Populating and updating the routing table

When a node is first initialized, it has no contacts in its routing table. The first thing a node will do upon initialization is ping a bootstrap node. A bootstrap node is a well known Supernode with a known static IP address. After successfully pinging a bootstrap node, the node will enter the bootstrap NodeID into its routing table. The node will then send a FIND_NODE request to the bootstrap node in search of its own NodeID.

The bootstrap node will return the 8 “closest” nodes to the newly initialized node’s own NodeID, and the new node will then enter these 8 nodes into its routing table. Subsequently, the new node will contact these peers and receive the 8 “closest” nodes to its own NodeID from each of them. When this process terminates by converging on the node’s own NodeID, it will have enough entries in its routing table to function properly.

Throughout the lifetime of a node, its routing table will continue to fill up. Every FIND_NODE request initiated by a node will return the information of many nodes which will be entered into its routing table. Tox requires nodes to send out FIND_NODE requests to random nodes every 20 seconds in search of their respective NodeIDs so as to continuously update and populate their routing tables.

Friends And The ToxID

Tox follows a friend-to-friend (F2F) communications paradigm, meaning that only friends can make connections with each other. The term “friend” is used loosely, referring to whitelisted ToxIDs. All routing and encryption mentioned thus far has used the temporary DHT key pair. The FIND_NODE request simply allows users to find certain NodeIDs in the DHT. However, Tox users know one another by their ToxIDs, which serve as usernames. The majority of a ToxID is made up of a user’s long term public key. For ease of explanation, the ToxID and the long term public key can be considered one and the same.

Recall that the DHT is public, which makes anonymity difficult to realize. As stated earlier, Tox decided to use temporary keys for each session to counter this. The data that is stored publicly for each node is (NodeID, IP, UDP Port). To connect with a node in the DHT, one simply needs to know its NodeID to find its IP address and UDP port.

However, friends identify one another by their ToxIDs, not temporary DHT public keys. You might ask, why not just store the (ToxID, IP address, UDP port) in the public DHT? If this were the case, it would be easier to track, surveil, or attack a certain ToxID because the associated IP address and UDP port would be public. With the temporary NodeID, nobody knows who it belongs to, so associating it with an IP address and UDP port is harmless. To account for all this, Tox devised a way to allow friends to anonymously exchange NodeIDs by knowing one another’s ToxIDs. Thus, the matter in question becomes: when a node wants to connect with a friend, how does it determine the ephemeral NodeID of that friend, given that it changes each time the client is restarted? This is accomplished through a process called “announcing.”

Announcing

When a node first joins the DHT, it announces its ToxID to the network. Specifically, a node will announce its ToxID to the peers whose NodeIDs are “closest” to its own ToxID. The announcing node will then ask these peers to store its ToxID so that friends can find it through them. The key is that this process must be conducted anonymously. The peers storing the node’s ToxID cannot know which NodeID the announce store packet came from; otherwise, they could easily associate the ToxID with an IP address and port (remember, knowing a NodeID enables a user to find the associated IP/port). To allow a node to announce its public key to peers anonymously, Tox uses what are called Onion Paths. Let’s see how Onion Paths can help achieve anonymity in the Tox network.

Onion Routing

When a node announces its real public key to non-friend peers, those peers must not be able to learn which NodeID the announce packet is coming from; otherwise they could associate an IP address/port with a ToxID (long term public key), which is likely to be tied to a user’s real identity. Such an association would defeat anti-surveillance measures and allow tracking within the Tox network.

Tox uses Onion Routing to ensure that peers who are not friends cannot associate a DHT public key of a user with his/her ToxID. It also allows nodes to announce their ToxIDs to peers in the network without these peers knowing which NodeID made the announcement. The goal is to allow friends to tell each other what their current DHT public keys are so that a direct connection can be made.

Onion Routing’s name stems from the fact that it uses several layers of encryption to hide a payload. Nodes can send packets anonymously to a destination peer using Onion Paths, each of which consists of 3 intermediate hops. At each hop, a layer of encryption is “peeled off,” much like the layers of an onion. Let’s call the sending node A, the three peer node hops B, C, and D, and the destination node E.
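The layered structure can be modeled with plain nesting. Real Tox encrypts each layer with per-path ephemeral keys so relays literally cannot read the inner layers; the nesting below only illustrates who learns what at each hop:

```python
def build_onion(payload, hops, dest):
    """A -> B -> C -> D -> E: wrap the payload so each relay only
    learns the next hop. `hops` are the three intermediate nodes;
    in real Tox each layer would additionally be encrypted."""
    packet = {"next": dest, "inner": payload}     # innermost layer, for D
    for hop in reversed(hops[1:]):                # wrap for D, then for C
        packet = {"next": hop, "inner": packet}
    return hops[0], packet                        # A sends the onion to B

def relay(packet):
    """Each hop 'peels' one layer: it learns only the next hop and a
    still-wrapped inner packet."""
    return packet["next"], packet["inner"]

first_hop, onion = build_onion("announce-store-request", ["B", "C", "D"], "E")
assert first_hop == "B"          # A hands the full onion to B
assert onion["next"] == "C"      # B learns only that C comes next

node, pkt = first_hop, onion
while isinstance(pkt, dict):
    node, pkt = relay(pkt)       # B peels -> C, C peels -> D, D peels -> E
print(node, pkt)                 # the payload surfaces only at E
```

No single relay ever sees both the sender and the payload, which is precisely the property the announce process relies on.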

Announce Store Request/Response

To review: friends know one another by their ToxIDs, but make direct connections based on their DHT NodeIDs. In order to find the NodeID, Node A picks a peer from its own routing table whose NodeID is “closest” to its own long term public key, creates an onion path, and then sends an “Announce Store Request” to this peer. The peer receives this packet, finds the NodeIDs it has in its routing table that are “closer” to Node A’s long term public key, and sends this information back through the same onion path. Node A then sends additional “Announce Store Requests” through onion paths to the NodeIDs returned in the previous step. This iterative process continues until the response does not contain any NodeIDs that are “closer” to the long term public key of Node A. At this point, Node A will ask the “closest” NodeIDs from the last response to store its long term public key and some other contact information in memory.

Let’s see how this works step by step.

Announce Store Request/Response:

1. Node A chooses 3 random nodes from its routing table, which we will call nodes B, C, and D.
2. The “Announce Store Request” packet contains the long term public key of Node A along with some additional information. The packet is encrypted using the long term private key of Node A and the DHT public key of Node E.
3. Node A nests the Announce Store Request packet in three separate layers, each of which is encrypted using one of three temporary key pairs generated just for this specific onion path. Layer 1 is encrypted using one of the temporary private keys and the temporary DHT public key of Node D, Layer 2 is encrypted using another temporary private key and the temporary DHT public key of Node C, and Layer 3 is encrypted using the last of the temporary private keys and the temporary DHT public key of Node B.
4. Nodes B, C, and D can each decrypt only a single layer, which allows them to see the IP/Port of the next hop in the path.
5. At each hop, the node decrypting the layer also encrypts the IP/Port of the previous hop and adds it to the “send back” data. The send back data allows the destination node to send its response back through the same onion path.
6. When Node E receives the payload, it finds the NodeIDs (max of 4) that it has in its routing table that are “closest” to the provided long term public key of Node A. It then includes these NodeIDs in the Announce Store Response packet. The response uses the same onion path in reverse.
7. When Node A receives the response, the process repeats, this time using the returned NodeIDs as the destination nodes. Node A then sends Announce Store Request packets through onion paths to these nodes.
8. This iterative process continues until Node A finds the NodeIDs closest to its own long term public key. At this point, Node A sends Announce Store Requests to these nodes with a special ping_id, signifying that these nodes should store the received long term public key in memory. These nodes also store some information that allows them to route data packets to Node A when friends come searching.

Announce Search Request/Response Overview

The Announce Search Request/Response packets are used by nodes to search for the peers in the network that are storing the long term public keys of friends they are trying to connect with. These packets work almost identically to the Announce Store Request/Response packets. Instead of a node searching for NodeIDs near its own long term public key, it searches for the NodeIDs near the long term public key of the friend it is trying to connect with.

In this iterative process, steps 1-8 are practically identical for the search packets. Each receiving node of an Announce Search Request will return the “closest” NodeIDs to the long term public key of the friend being searched for. This is conducted iteratively until one of the peers storing the long term public key of the friend being searched for is found. The peer is then sent a “Data to Route” Request packet.

Data to Route Request/Response packet

For instance, say Alice is trying to connect to her friend Bob. Upon joining the network, Bob already used Announce Store Request packets to announce his long term public key in addition to some routing information to a few peers. Alice then uses Announce Search Request packets to locate these peers. At this point, Alice wants to send Bob her NodeID, which she can do by using Data to Route Request packets. These packets are sent to the peers she found using the Announce Search Request packets, each of which contains Alice’s own NodeID along with a few other NodeIDs “close” to her own. Thanks to this mechanism, Bob can find her faster in the DHT.

The Data to Route Request packet is sent through an Onion Path to the peer storing Bob’s long term key. The peer receiving this packet then repackages it into a Data to Route Response packet. Since this node has stored the necessary information to send packets to Bob, it can forward the Data to Route Response packet to Bob through an onion path. Bob then receives the Data to Route Response packet, decrypts it, and obtains Alice’s NodeID (along with some other NodeIDs close to Alice’s). Bob can then issue a FIND_NODE request using Alice’s NodeID, which will return Alice’s IP Address and UDP port number. Finally, Bob can connect directly to Alice, and the friend connection process is complete.

Process: