Author: Evgeniy Kulikov, kim@nspcc.ru

PrivateNet is a private NEO blockchain network in which you can freely test smart contracts and dApps and study the blockchain. My task was to deploy a NEO PrivateNet environment for a decentralized distributed storage platform project. As a result, I wanted to have:

a docker-compose setup with the ability to run a single service or all services at once

auto-import of a smart contract, resulting in an environment fully prepared for work

the ability to watch the logs of the NEO blockchain consensus nodes

the ability to edit settings quickly

There is a ready-made environment from CityOfZion in which the following components are launched:

four NEO consensus nodes (C#) together with a NEO Python environment

PostgreSQL as storage for NEO Scan

NEO Scan for browsing the blockchain

This is a convenient way to launch a NEO PrivateNet. The PrivateNet comes with several test wallets holding a large amount of assets (NEO and GAS), which are necessary for testing smart contracts and/or dApps. In addition, a ready-made solution lets you skip the steps of creating your own blockchain.

Thus, you can bring the environment up locally for simple experiments:

Eight ports are exported:

four p2p ports (20333–20336), which provide interaction between consensus nodes and allow the blockchain to synchronize

four ports of the RPC server (30333–30336)
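Once the RPC ports are up, any of the nodes can be queried over JSON-RPC. As an illustration, this is the shape of a `getblockcount` request body you could POST to one of the mapped ports, e.g. `http://localhost:30333` (the port choice follows the mapping above; the helper function is our own, not part of any NEO tooling):

```python
import json

def rpc_body(method, params=None, request_id=1):
    """Build a JSON-RPC 2.0 request body as expected by a NEO node."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params or [],
        "id": request_id,
    })

# ask the node for the current block height
body = rpc_body("getblockcount")
print(body)
```

Sending this body to each of the four RPC ports is a quick way to check that all nodes are alive and at the same height.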

Access to the wallets is granted with these passwords:

node1: one

node2: two

node3: three

node4: four

np-prompt: coz

With these simple actions you can deploy a full-fledged environment for testing, but that still leaves another problem unsolved: preparing a PrivateNet environment that is fully ready for work. We already have:

four NEO consensus nodes (C#) together with a NEO Python environment

PostgreSQL as storage for NEO Scan

NEO Scan for browsing the blockchain

four NEO blockchain wallets

It remains to figure out how to auto-import the smart contract and how to watch the logs of the NEO blockchain consensus nodes.

That would let us launch a fully working environment and debug it right at the PrivateNet launch step.

According to the official documentation, there are several ways to import a smart contract:

- NEO GUI

- NEO Python Prompt (cli)

NEO GUI is not suitable for auto-import because the actions would have to be repeated manually every time. Let’s try NEO Python Prompt.

Below is the algorithm for importing a smart contract using NEO Python Prompt step by step.

First, write a simple smart contract:
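For example, a minimal contract in the Python dialect that neo-boa compiles to NEO VM bytecode (the `hello` and `echo` operations here are our own invention for illustration, not anything mandated by NEO):

```python
def Main(operation, args):
    """Entry point: neo-boa compiles this plain-Python function for the NEO VM."""
    if operation == 'hello':
        return 'world'
    if operation == 'echo':
        # return the first argument back to the caller
        return args[0]
    return False
```

Since this is plain Python, `Main('hello', [])` returns `'world'` even before compilation; compiled and deployed, the same dispatch runs inside the NEO VM.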

Now that we have the smart contract, copy it into the container:

Connect to the container using the docker CLI:

After this operation we are inside the container and can interact with the CLI utility.

Now launch the CLI utility and try to import the smart contract:

We have figured out how to import a smart contract using NEO Python Prompt, but our task is to get a ready environment to interact with, and repeating these operations every time is inconvenient. We need to understand how the np-prompt utility works; its source code is here.

After a brief study of the code, the parts responsible for compiling and importing a contract become clear:

the BuildAndRun function is responsible for compilation; it accepts a path (to a smart contract) and an open wallet as arguments

LoadContract, generate_deploy_script, test_invoke and InvokeContract are responsible for the import

Naive code for importing a smart contract might look something like this:

This code is simple enough, but it does not work as expected, because:

the local blockchain must be synchronized before the import

the wallet must be synchronized before the import

For these purposes, the np-prompt utility uses the twisted package and the synchronization task runs in the background. You can see the final import script in our repository.
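np-prompt does this with twisted, but the shape of the solution is generic: run synchronization in the background and only import once it reports completion. A sketch with the standard library's threading as a stand-in for the twisted task (the function names and the delay are illustrative only):

```python
import threading
import time

synced = threading.Event()

def sync_task():
    # stand-in for the background twisted task that syncs the chain and wallet
    time.sleep(0.1)  # pretend synchronization takes a moment
    synced.set()

threading.Thread(target=sync_task, daemon=True).start()

# block until synchronization finishes, then import the contract
if synced.wait(timeout=5):
    print("synced, importing contract")
else:
    print("sync timed out")
```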

Thus we have solved the problem of smart contract auto-import, but this is only part of the task. To solve the next part, let’s look at how the container and its internal services are launched.

The run.sh script looks like this:

It turns out that each consensus node of the NEO blockchain runs in a separate screen session as a background process, which is why we know nothing about what is happening inside the blockchain.

According to the documentation of the screen utility, we can redirect its output to a log file. Among other things, we also want to run the auto-import, so we change the script as follows:

Let me describe the script and sum up what we have done:

[[ -p node1.log ]] || mkfifo node1.log — creates a named pipe if it does not already exist

( while read -r line; do echo "node1: $line"; done < node1.log ) & — as messages arrive in the pipe, reads them line by line and prepends the node prefix

screen -dmSL node1 -Logfile node1.log — runs screen detached in the background and redirects its output to the log file

The remaining commands remove the blocking session that we encountered at the beginning of the article and keep the container alive with an infinite loop.
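The named-pipe prefixing is easy to try outside the container. Below is a Python stand-in for the shell loop described above (assuming a POSIX system, since `os.mkfifo` is used; the "Persisted block" message imitates a consensus-node log line):

```python
import os
import tempfile
import threading

fifo = os.path.join(tempfile.mkdtemp(), "node1.log")
os.mkfifo(fifo)  # same as: [[ -p node1.log ]] || mkfifo node1.log

lines = []

def tail_with_prefix():
    # same as: while read -r line; do echo "node1: $line"; done < node1.log
    with open(fifo) as f:
        for line in f:
            lines.append("node1: " + line.rstrip("\n"))

reader = threading.Thread(target=tail_with_prefix)
reader.start()

# screen -Logfile would write the node's output here
with open(fifo, "w") as f:
    f.write("Persisted block 1\n")

reader.join()
print(lines)
```

The same pattern scales to four pipes, one per node, which is exactly what the modified run.sh does.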

To summarize, we have now worked through two of the tasks, and only small matters remain. Since this is a test and debug environment, there is no point in persisting PostgreSQL data inside the container, so the alpine-ram image (which keeps PostgreSQL data in RAM) is a good fit. Besides that, services need to know when the state of a container they depend on has changed, so they can start working in normal mode.

Let’s look at the Dockerfile of NEO Scan:

The sleep 3 is a poor solution because Docker has an excellent health-check mechanism, so let’s change that. We also have a large number of settings initially scattered across docker-compose.yml and the Dockerfile, which makes quick changes inconvenient. I suggest using an env-file connected to each service, so that settings can be fixed quickly if something happens. After all the above changes, docker-compose.yml looks like this:
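The full file is in the repository; the health-check portion, which replaces the sleep, follows this pattern (service and image names here are illustrative, and `condition: service_healthy` requires a compose file version that supports it, such as 2.1):

```yaml
services:
  postgres:
    image: postgres:alpine          # illustrative; the project uses an alpine-ram image
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 5

  neo-scan:
    depends_on:
      postgres:
        condition: service_healthy  # start only once PostgreSQL answers
```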

In addition to docker-compose.yml, we need to fix the Dockerfile to run the NEO Scan:

Thus, we have:

configured the timezone

got rid of the sleep, which we no longer need (remember the health-check)

Do not forget about the env-file, which simplifies the configuration of our environment:
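The exact variables depend on the services; an illustrative sketch (the names and values below are placeholders, not the project's real settings):

```
# .env (illustrative values)
TZ=UTC
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres
POSTGRES_DB=neoscan
```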

We are almost done; it only remains to tie all of this together with a Makefile:
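A sketch of the idea, one target per routine action (the target names and flags are our own; the real Makefile lives in the repository — and remember that make recipes must be indented with tabs):

```makefile
.PHONY: up logs down

# build images and start the whole PrivateNet in the background
up:
	docker-compose up -d --build

# follow the prefixed consensus-node logs
logs:
	docker-compose logs -f

# stop and remove everything, including volumes
down:
	docker-compose down -v
```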

Thank you for your time. Together with colleagues, we have prepared a full-fledged environment that uses the base image from CityOfZion and the developments from this article. Here is what we managed to achieve:

first of all, automatic deployment of a smart contract with the specified deployment parameters

we learned more about the NEO Python utility np-prompt

we made a minimalistic environment that is easy to manage

we set up a fully working environment

PS: To wrap up the article, I would like to describe some peculiarities of NEO Python that can sometimes throw you off track.

always enable the sc-events setting (config sc-events on) to debug your application and/or smart contract

It lets you see all the events occurring in the blockchain and debug your code much faster.

wait for blockchain synchronization (approximately 15 seconds) before calling the smart contract again

A peculiarity of np-prompt is that it connects to the other existing consensus nodes and synchronizes with them, so if you invoke repeated actions on a smart contract without waiting for synchronization, you will most likely get an error message.

after your smart contract has been deployed, all calls require a full set of arguments

It may come as a surprise, but every time you call a smart contract method you need to pass all the arguments, even if you do not need some of them at the moment. You just have to get used to it.

For example, when calling a smart contract with the second parameter empty (see the code above), you have to write the following:

All arguments that you pass to the smart contract using the np-prompt are treated as strings

This is a feature of the np-prompt implementation. The point is that a user’s arguments are passed to the CLI as strings and are then gradually sorted into commands and arguments. If you need an exact correspondence of the passed types, use RPC.
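With RPC, parameter types are explicit: invoke-style calls take a list of parameter objects, each carrying a type tag. A sketch of the shape of such a request (the contract hash is a placeholder, and the operation and parameter set depend on your contract):

```python
import json

# np-prompt would pass "42" as a string; over RPC the type is stated explicitly
params = [
    {"type": "String", "value": "key"},
    {"type": "Integer", "value": "42"},
]

request = {
    "jsonrpc": "2.0",
    "method": "invokefunction",
    "params": ["<contract-script-hash>", "put", params],
    "id": 1,
}
print(json.dumps(request))
```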