In the final article of our index synchronization series, we'll look at the implementation of the new sync mechanism introduced in the last article and share initial test results.

Following consensus between developers on the suggestion shared by COZ developer Ixje, an updated proposal for the new sync mechanism was opened for discussion. The proposal provided a draft design for the solution, which has now been successfully implemented. The new sync process will be merged into the Neo3 codebase once the pending node health and security mechanisms are complete.

SyncManager

The new synchronization logic is built around a new SyncManager class, responsible for managing tasks related to block synchronization, such as monitoring other nodes for block height changes. If a difference in block height is observed, a number of sync tasks are assigned to begin the synchronization process.

Each task can process up to 50 blocks and carries a start and end block index, which specify the block data to request from other nodes. Once the new block data has been received, the node can verify and persist it, informing the SyncManager of the changes and completing the current synchronization tasks.
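As a rough illustration of the flow described above, the sketch below watches a peer's reported height and splits any missing range into tasks of at most 50 blocks each. All class and method names here are assumptions for illustration; the actual Neo3 implementation differs.

```python
# Illustrative sketch only: names (SyncManager, on_peer_height, etc.)
# are assumptions, not the actual Neo3 API.

MAX_BLOCKS_PER_TASK = 50  # per the article, each task covers up to 50 blocks


def chunk_into_tasks(start_index, end_index):
    """Split the missing range [start_index, end_index] into
    (start, end) tasks of at most MAX_BLOCKS_PER_TASK blocks."""
    tasks, current = [], start_index
    while current <= end_index:
        task_end = min(current + MAX_BLOCKS_PER_TASK - 1, end_index)
        tasks.append((current, task_end))
        current = task_end + 1
    return tasks


class SyncManager:
    def __init__(self, local_height):
        self.local_height = local_height
        self.pending_tasks = []

    def on_peer_height(self, peer_height):
        """If a peer reports a greater height, queue tasks for the gap."""
        if peer_height > self.local_height:
            self.pending_tasks.extend(
                chunk_into_tasks(self.local_height + 1, peer_height))
```

For example, a node at height 21,000 hearing of a peer at 21,075 would queue two tasks: one for blocks 21,001–21,050 and one for 21,051–21,075.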

If an invalid block is detected, the SyncManager is notified of its index, which it uses to identify the node that sent the invalid block. A new task is then created to re-request the block data, and the faulty node is replaced by a healthy one.
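The recovery path might look something like the following sketch, which maps a block index back to the peer it was requested from, drops that peer, and re-requests the block elsewhere. The names and data structures are hypothetical.

```python
# Hypothetical sketch of the invalid-block recovery path; names and
# structure are assumptions, not the actual Neo3 code.

class SyncManager:
    def __init__(self, healthy_nodes):
        self.healthy_nodes = list(healthy_nodes)
        self.requested_from = {}  # block index -> node it was requested from

    def on_invalid_block(self, block_index):
        """Use the block index to find the faulty sender, drop it,
        and re-request the block from a healthy node."""
        faulty = self.requested_from.get(block_index)
        if faulty in self.healthy_nodes:
            self.healthy_nodes.remove(faulty)  # replace the faulty node
        replacement = self.healthy_nodes[0] if self.healthy_nodes else None
        if replacement is not None:
            self.requested_from[block_index] = replacement  # new task
        return replacement
```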

This mechanism ensures that nodes do not persist invalid blocks, and is one of several node health assessment functions that will be included as part of the implementation.

In addition to faulty block data, other attributes will be monitored to help assess node health. Examples of these attributes provided by neo-python maintainer Ixje include the time it takes for a node to deliver data (calculated by payload size), the node’s average request time (assists with load balancing), and a timeout counter that will disconnect a node that takes too long to reply.
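The attributes listed above could be tracked per peer in a structure like the one below. The field names, the normalization of delivery time by payload size, and the timeout threshold are all assumptions for illustration.

```python
# Illustrative per-peer health record for the attributes mentioned above:
# delivery time relative to payload size, average request time, and a
# timeout counter. Names and thresholds are assumptions.

from dataclasses import dataclass, field


@dataclass
class NodeHealth:
    request_times: list = field(default_factory=list)
    timeouts: int = 0
    MAX_TIMEOUTS: int = 3  # assumed disconnect threshold

    def record_delivery(self, seconds, payload_bytes):
        # normalize delivery time by payload size (seconds per KiB)
        self.request_times.append(seconds / max(payload_bytes / 1024, 1))

    @property
    def average_request_time(self):
        # used to balance load across peers
        if not self.request_times:
            return 0.0
        return sum(self.request_times) / len(self.request_times)

    def record_timeout(self):
        # returns True once the peer should be disconnected
        self.timeouts += 1
        return self.timeouts >= self.MAX_TIMEOUTS
```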

Test results

Following the implementation, Xiaoyun Yang, senior QA engineer for NGD Shanghai, shared test results comparing the new synchronization mechanism against the header-first approach used by Neo2.

In a single consensus node network with a little over 21,000 blocks, a Neo2 node without the StatesDumper plugin installed took approximately 23 ms to synchronize each block. With the plugin enabled, which dumps status data such as storage modifications for debugging, this increased to approximately 36 ms per block.

For the Neo3 node with the new index sync implementation, a substantial decrease in synchronization time was observed. Without StatesDumper, the Neo3 node spent approximately 4.2 ms per block, completing synchronization in a little under one fifth of the time. With the plugin enabled, around 4.7 ms were spent on each block, making synchronization over 7x faster than Neo2.
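The per-block figures above translate directly into the reported speedups:

```python
# Quick check of the per-block timings reported above.

neo2_ms, neo2_dumper_ms = 23.0, 36.0  # Neo2, without / with StatesDumper
neo3_ms, neo3_dumper_ms = 4.2, 4.7    # Neo3 index sync, without / with

speedup_plain = neo2_ms / neo3_ms              # ~5.5x (just under 1/5 the time)
speedup_dumper = neo2_dumper_ms / neo3_dumper_ms  # ~7.7x (over 7x faster)

print(f"Without StatesDumper: {speedup_plain:.1f}x faster")
print(f"With StatesDumper:    {speedup_dumper:.1f}x faster")
```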

These tests show the notable improvements to network efficiency provided by the changes. Coupled with the improved handling of peers through node health metrics, these changes are expected to greatly improve the speed and UX of block synchronization on the Neo3 blockchain.