Development

GitHub metrics

The last commits to Shift's public GitHub were made on April 16th, 2019, in the shift-js repository (a JavaScript library for sending Shift transactions from the client or server).

A lot of the information is private at present; Shift also maintains private GitHub repositories.

Developer activity (from Coinlib.io)

By Shift Team in Newsletter on May 10th, 2019

After analyzing the network, it soon became apparent that the way data relaying was set to function in Phoenix 0.5.6 did not sufficiently account for the variety of conditions under which cluster peers can operate. Each peer uses its own peer list to relay messages and is designed not to relay messages to peers that appear offline. The system maintains this peer list, updating the online/offline status of each peer based on health checks carried out by pinging at frequent intervals. However, the team found that if a peer went offline and the health check did not register the change within a sufficient time window, relays were disrupted. Because their devnet cluster had never experienced delays anywhere near those necessary to render it unstable, they had not anticipated the testnet health checks getting out of sync beyond their generous margins for error. It thus became clear that Phoenix-core, which is responsible for data relaying, needed to be made less sensitive to out-of-sync health checks.

Once the root of the issue was established, the team immediately began to design a new method of data relaying, while maintaining their goal of an adaptable Phoenix-core (p2p library). For their own use case, Phoenix Cluster, they will implement a relaying method that uses a gossip protocol and does not depend on the online/offline status of the peer list. In addition to refactoring the gossip code into Phoenix-core, the team has decided to implement several other improvements that will help mitigate the online/offline issue and improve the collection of cluster statistics. All these changes are described below and, once finalized, will be released together as part of Phoenix v0.6.

Phoenix v0.6: Completed Subtasks

Add persistent public peer keys

With Phoenix 0.5.6, the public key of a peer was not persistent and would change each time the peer rejoined the cluster. This made relaying less efficient, because a restarted peer's new key had to be sent to all other peers. It also compounded the pre-existing relaying issue: when the new public key failed to reach every peer, even more peers could no longer see each other. With some peers able to communicate and others not, each reconnection left the network further out of sync, and the number of peers able to communicate gradually decreased.

Within Phoenix v0.6 peers have persistent keys.

Go data race prevention/locking

A principal factor in peers getting out of sync may have been a Go data race. The peer list was modified in multiple places within Phoenix-core without any locking, so changes made by one goroutine were not reliably visible to the other goroutines.

Phoenix v0.6 now takes a lock on the peer list whenever a peer reads or modifies it, preventing concurrent access from leaving the list in an inconsistent state.

Phoenix v0.6: To-Do List

Refactoring the message relaying code (gossip implementation)

With the gossip protocol a peer can relay a message to all peers in the network without the need to either have all the peers included in its peer list or know the online/offline status of all other peers. The flow of the relaying therefore becomes:

A peer relays a message to various peers on its peer list based on both proximity and randomness. A receiving peer checks if it knows the message ID and, depending on whether or not it recognizes the message ID, either ignores the message or relays the message to other peers on its own peer list that are not already part of the message chain.

As a result, each peer will receive any message that is relayed through the entire network, assuming each peer is known by at least one other peer.
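The flow above hinges on message-ID deduplication: a peer relays a new message to a small subset of its peer list and silently drops messages it has already seen. A minimal sketch of that decision in Go, assuming nothing about Phoenix-core's actual types (all names here are invented for illustration):

```go
package main

import (
	"fmt"
	"sync"
)

// Gossiper remembers message IDs it has already handled, so a duplicate
// arriving via another path is ignored instead of being re-relayed forever.
type Gossiper struct {
	mu     sync.Mutex
	seen   map[string]bool
	fanout int // how many peers to forward each new message to
}

func NewGossiper(fanout int) *Gossiper {
	return &Gossiper{seen: make(map[string]bool), fanout: fanout}
}

// Receive returns the subset of peers the message should be relayed to:
// up to fanout peers for a new message, none for a duplicate.
func (g *Gossiper) Receive(msgID string, peers []string) []string {
	g.mu.Lock()
	defer g.mu.Unlock()
	if g.seen[msgID] {
		return nil // already handled: break the relay chain here
	}
	g.seen[msgID] = true
	if len(peers) <= g.fanout {
		return peers
	}
	return peers[:g.fanout] // a real implementation would pick randomly
}

func main() {
	g := NewGossiper(2)
	peers := []string{"a", "b", "c", "d"}
	fmt.Println(g.Receive("msg-1", peers)) // [a b] -- new message: relay to 2 peers
	fmt.Println(g.Receive("msg-1", peers)) // []   -- duplicate: ignored
}
```

Because every recipient repeats this step, a message fans out across the whole network even though no single peer knows every other peer.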

Pushing cluster statistics at an interval

The way cluster statistics are collected will be optimized in Phoenix v0.6. Previously, a /stats call from a blockchain peer caused the relevant storage peer to initiate the entire process and call out to all other peers, making the result dependent on the state of the peer list. Under the new procedure, each Phoenix peer pushes its storage state at a fixed interval. Assuming the gossip protocol works correctly, every peer then receives the storage status of all peers and sums them into a single cluster-size value kept in memory. /stats calls become extremely fast, because storage peers can reply immediately with the latest cluster-size value. Each peer records the time of the last storage state received from every other peer; if a peer fails to relay its storage status within the set time window, the others stop including it in the sum. Since the new method is a push operation, peers simply collate the data they receive, with no need to query the entire network on each /stats request.

Making subsequent relays parallel

All initial relays within Phoenix 0.5.6 are made in parallel. However, after going through the Phoenix-core code again, the team discovered that subsequent relays following that initial relay message were happening in serial.

Within Phoenix v0.6, subsequent relays will also be made in parallel, markedly improving relaying speed.

Adding an IPFS daemon auto-restarter

There appears to be an issue on the IPFS side, where the IPFS daemon sometimes crashes for an unknown reason. Since the issue can be resolved simply by restarting the daemon, the team will equip Phoenix v0.6 with an auto-restarter: Phoenix-cluster will check at a predetermined interval whether the IPFS daemon is running and restart it if it is not.

As the team plans on updating their custom IPFS daemon to include components of the latest ipfs version shortly, this issue may well soon be fundamentally resolved. Nevertheless, they believe an auto-restarter will be a useful addition in ensuring network integrity.

When?

The team has very much appreciated the patience of you, the community, during these past two weeks. They would like to assure you that they are making good progress, having already completed two significant tasks and now moved on to implementing the new gossip protocol. Next week, they plan to launch Phoenix v0.6 on the devnet, with the possibility of then moving quickly to the public testnet cluster release phase should no issues emerge. That said, as they learned during their last public release, transitioning from a small, closed network to a large public network can surface unanticipated issues. The team therefore asks for your continued understanding as they carry out this vital testing process together.


From Official Discord channel:

Ralf S (Shift President and Lead Developer) on February 1st, 2019:

“In response to the market downturn, we began minimizing team expenses a good deal of time ago. This is perhaps something that has been evident in our decision to focus on development rather than marketing. In light of that decision, we have been able to sustain progress on the Shift Project, and have the funds necessary to continue in this manner long term, if necessary.”

From Official Ryver chat:

Ralf S (Shift President and Lead Developer) on December 20th, 2018:

“Still making progress on the backend every day. But I admit it’s an awful lot of work. We’ve received various request in DM to show something to the public. But it’s mainly code work we’re doing now, no eye candy. But I guess I can share a bit of what we’re working on right now. I will paste some screens in our Discord channel.”