by Yin Wei

Foreword

The NEO node is a complex project that combines a P2P network, an RPC interface, database reads and writes, and smart contract execution. These functions work together to keep the node's data synchronized in time and to serve accurate data to users. The node requests and receives block data through the P2P network, then verifies, parses, and stores it in the LevelDB database. Once a node has synchronized to the latest block height on the blockchain, it holds a full copy of the on-chain data for users to query.

As developers, our first need from a node is to send transactions through it; the second is to read its data, or simulate contract execution, through the RPC service. In practice, the demand for reading data and simulating contracts is considerably larger, and a single node sometimes cannot withstand the request load. Because the RPC service and the database live in one process, there is no way to run multiple databases behind a load balancer to feed one interface server, nor to run multiple interface servers against one shared database.

When we develop dApps, we run into bottlenecks on node RPC requests. If we deploy multiple nodes behind a load balancer, each node takes a different amount of time to fetch the latest block and write it to LevelDB, so at a given moment different nodes can easily return different answers for the latest transaction data. This led us to an idea: separate the database from the node and turn it into a network database (if multiple databases are deployed, they synchronize from the same source to keep their data consistent), and build the RPC service, plus the virtual machine that executes smart contracts, into a lightweight node that provides the interface service. Put simply, the storage part of the NEO node becomes network storage, and the lightweight node can execute InvokeScript directly against the network database.

Project Introduction

Because data processing is transactional, the database we use must support snapshots. The LevelDB used by the NEO node is an LSM store and provides read snapshots at very low cost. Relational databases offer weaker support for this kind of snapshot read, and MongoDB, which we originally wanted to use, was also ruled out because of limitations in its read-snapshot feature. In the end we chose to build a local database on top of RocksDB (Facebook's improved fork of LevelDB, with markedly better read and write performance) and to add a network layer on top of it to form the network database.
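To make the snapshot requirement concrete, here is a minimal sketch of the read-snapshot idea in Python. This is illustrative only, not the RocksDB or LevelDB API: a snapshot pins the state at the moment it is taken, so writes made afterwards do not affect reads made through it.

```python
# Minimal sketch of a read snapshot (illustrative; not the RocksDB API).
class SnapshotStore:
    def __init__(self):
        self._data = {}

    def put(self, key, value):
        self._data[key] = value

    def snapshot(self):
        # A real LSM store shares immutable on-disk files instead of
        # copying, which is what makes snapshots cheap; a dict copy
        # stands in for that here.
        frozen = dict(self._data)
        return lambda key: frozen.get(key)

store = SnapshotStore()
store.put(b"height", 100)
snap = store.snapshot()
store.put(b"height", 101)      # write after the snapshot was taken

print(snap(b"height"))         # the snapshot still sees 100
print(store._data[b"height"])  # the live store sees 101
```

In an LSM tree the immutable sorted files make this nearly free, which is why LevelDB and RocksDB fit this design better than the alternatives the text mentions.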

When a node stores data in LevelDB, it classifies it: different prefixes are added to different kinds of data, such as block data, UTXO data, and contract data, to tell them apart. This is a natural consequence of using LevelDB: a key-value database is one big dictionary, while the data we store logically falls into several dictionaries. We now meet this need directly at the database level by adding the concept of a table, so that a read or write can address the value of a key in a specific table more precisely. In addition, the storage layout mimics the structure of the blockchain: every write operation is stamped with a height that simulates the block concept and corresponds to the block height on the chain. This way the interface server can easily index by block height and read data against the corresponding snapshot.
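The table-plus-height idea described above can be sketched as follows. The names here (`TableStore`, `write_batch`) are invented for illustration and are not from the project: each logical table replaces a key prefix, and every write is stamped with a height so a reader can ask for the value as of a given block.

```python
# Hypothetical sketch of "table + height" storage (names are invented).
class TableStore:
    def __init__(self):
        self._kv = {}      # (table, key) -> list of (height, value)
        self.height = 0

    def write_batch(self, height, batch):
        # batch: list of (table, key, value) tuples written for one block
        for table, key, value in batch:
            self._kv.setdefault((table, key), []).append((height, value))
        self.height = height

    def get(self, table, key, height=None):
        # Return the newest value at or below the requested height,
        # i.e. a read against the snapshot for that block.
        height = self.height if height is None else height
        for h, v in reversed(self._kv.get((table, key), [])):
            if h <= height:
                return v
        return None

db = TableStore()
db.write_batch(1, [("utxo", b"tx0", b"unspent")])
db.write_batch(2, [("utxo", b"tx0", b"spent")])

print(db.get("utxo", b"tx0", height=1))  # b'unspent' at height 1
print(db.get("utxo", b"tx0"))            # b'spent' at the latest height
```

The interface server can then serve a request pinned to any block height simply by passing that height to its reads.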

We treat the node's work of parsing one block and storing it into LevelDB as a set of operations. It is easy to confirm that any LevelDB instance that executes this set will store the same data. Based on this, we separate reads and writes in the network database: the master node records the set of operations it performs when storing each block, and a network database only needs to fetch these per-block operation sets and execute each of them once to synchronize the data. With these network databases in place, our lightweight nodes can read data from them and return it to callers.
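The replication scheme above can be sketched in a few lines. This is an illustrative model, not the project's code: the master records, per block, the exact writes it applied, and a replica that replays those sets in order ends up with identical data.

```python
# Sketch of op-set replication (illustrative; names are invented).
def apply_block_ops(db, height, ops):
    """Apply one block's recorded operations to a plain dict store."""
    for op, key, value in ops:
        if op == "put":
            db[key] = value
        elif op == "delete":
            db.pop(key, None)
    db["__height__"] = height

# The master produces an operation log while processing blocks...
oplog = [
    (1, [("put", b"block:1", b"<header>"), ("put", b"utxo:a", b"1")]),
    (2, [("put", b"block:2", b"<header>"), ("delete", b"utxo:a", None)]),
]

master, replica = {}, {}
for height, ops in oplog:
    apply_block_ops(master, height, ops)

# ...and a replica replays the same log to reach the same state.
for height, ops in oplog:
    apply_block_ops(replica, height, ops)

print(master == replica)  # True: identical data from identical op sets
```

Because the operation sets are deterministic, any number of network databases can be brought to the same state from the same source, which is what keeps the load-balanced replicas consistent.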

This project is currently still in the exploration and development phase; its GitHub address is: https://github.com/NewEconoLab/NEL.LightDB

The following picture shows all the projects included in the solution: NEL.Peer.* is the network layer; SDK is a wrapper library for convenient client access; SimpleDB is the local database without the network layer; API is a lightweight node implementing a simple RPC interface; Server is the network database.

Process demo

We first compile the Server project and open its config.json for configuration.

Port is the port used to access the database; bindAddress is the address allowed to access it; server_storage_path is the local path where the database stores its data. Conn_Track, DataBase_Track, and Coll_Track are the MongoDB connection string, database name, and collection name: for now we store each block's set of database operations in our NEL MongoDB for management, and this part is not yet very open. We may later provide these operation sets in another way for ease of use.
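A config.json for the Server might look like the sketch below. The field names are the ones described above; every value is a placeholder, and the exact casing and shape of the real file may differ.

```json
{
  "port": 2018,
  "bindAddress": "0.0.0.0",
  "server_storage_path": "./lightdb_data",
  "Conn_Track": "mongodb://user:password@example-host:27017",
  "DataBase_Track": "NEL",
  "Coll_Track": "block_ops"
}
```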

After the configuration is complete, start the Server.

The network database recovers its data quickly.

Next, compile the API project and configure its config.json.

Port is the port used to access the API; bindAddress is the IP the API allows access from; dbServerPort is the port for accessing the database; dbServerAddress is the IP where the database is located; dbServerPath is the path of the actor started by the database. (Make sure the linked database has synchronized to the latest data height.)
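A config.json for the API project might look like the sketch below. Again, the field names come from the description above, while every value (and the dbServerPath string in particular) is a placeholder.

```json
{
  "port": 8080,
  "bindAddress": "0.0.0.0",
  "dbServerPort": 2018,
  "dbServerAddress": "127.0.0.1",
  "dbServerPath": "lightdb"
}
```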

Start the API service.

At this point, we use the Postman tool to fetch data through the API project (examples of obtaining block data and simulating contract execution are also attached).
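The request bodies such a tool would send are ordinary JSON-RPC payloads. The sketch below builds the two kinds of requests mentioned above; getblock and invokescript are standard NEO RPC method names, while the script hex string and parameter values are placeholders.

```python
# Build JSON-RPC 2.0 request bodies for the lightweight node's API.
import json

def rpc_request(method, params, req_id=1):
    """Serialize one JSON-RPC 2.0 request (method names per NEO RPC)."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": req_id,
    })

# Fetch block 1; the second parameter (1) asks for the verbose JSON form.
get_block = rpc_request("getblock", [1, 1])

# Simulate contract execution without submitting a transaction;
# the hex script below is a placeholder.
invoke = rpc_request("invokescript", ["00046e616d65"])

print(get_block)
print(invoke)
```

Either body can be POSTed to the API port configured earlier, and the lightweight node resolves the read (or the InvokeScript execution) against the network database.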