It’s been a busy couple of months for DEXON, and in this dev update we want to focus on our first simulation results.

DEXON simulation results

Environment parameters:

We implemented the DEXON consensus algorithm in the Go programming language and ran the simulations on an Intel Core i7 2.9 GHz (7th generation) CPU with 16 GB of 2133 MHz DDR3 RAM.

Note: If you haven’t read DEXON’s consensus algorithm paper, you may not understand the experiments in this article. Please read the paper first.

Experiment with Byzantine nodes:

The following figures show the influence of Byzantine nodes. When κ = 0, even a single Byzantine node can affect the latency of every node, because normal delivery operates most of the time. When κ = 1 or 2, the figures show that the latency remains nearly constant, which means a Byzantine node cannot affect the latency of other nodes.

With 0.2s proposing time


|N| = 19, the proposing time is 0.2 seconds and the network latency is 0.25 seconds

With 0.5s proposing time


|N| = 19, the proposing time is 0.5 seconds and the network latency is 0.25 seconds

Experiment on network latency and proposing time

According to the design of DEXON’s consensus algorithm, we can anticipate the transaction latency formula as follows:

latency = 2 × T_transmit + T_proposing + T_total_ordering

since the total ordering time is quite low, say, less than 0.1 second when |N| ≤ 30.

We can see that the transaction latency is affected mainly by the network latency and the proposing time, since T_total_ordering is negligible at this point. We simulated different network latencies and proposing times to see how they affect the transaction latency. The results confirm our expectation and are shown in the following figures.
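The latency formula above can be sketched numerically. This is a minimal back-of-the-envelope check, assuming the parameter values from the experiments (network transmit latency 0.25 s, proposing time 0.5 s) and taking the stated 0.1 s upper bound for the total ordering time:

```go
package main

import "fmt"

// expectedLatency implements the formula from the text:
// latency = 2*T_transmit + T_proposing + T_total_ordering.
func expectedLatency(tTransmit, tProposing, tTotalOrdering float64) float64 {
	return 2*tTransmit + tProposing + tTotalOrdering
}

func main() {
	// Experiment parameters: 0.25 s network latency, 0.5 s proposing time,
	// and the assumed < 0.1 s total ordering bound for |N| <= 30.
	latency := expectedLatency(0.25, 0.5, 0.1)
	fmt.Printf("expected transaction latency ≈ %.2f s\n", latency)
}
```

With these numbers the transaction latency comes out to roughly 1.1 seconds, dominated by the two network round-trip legs and the proposing interval, which matches the intuition that T_total_ordering is negligible.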

With proposing time 0.5s


Experiment on network latency with Byzantine nodes, |N| = 19, and a proposing time of 0.5 seconds

With network delay 0.25s


Experiment on proposing latency with Byzantine nodes, |N| = 19, and a network latency of 0.25 seconds

Without Byzantine nodes

The throughput is proportional to the number of nodes in the system, since the total ordering is non-blocking and each node can operate in parallel. The results of our simulator, shown in the following figures, confirm this estimation.
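The scaling argument above can be sketched as a simple estimate. This is illustrative only, assuming each node proposes one block per proposing interval and that the non-blocking total ordering keeps up with all proposers:

```go
package main

import "fmt"

// estimatedBPS gives a rough aggregate throughput estimate: with every
// node proposing in parallel, throughput scales as N / T_proposing.
func estimatedBPS(numNodes int, tProposing float64) float64 {
	return float64(numNodes) / tProposing
}

func main() {
	// Experiment parameters: |N| = 19 nodes, 0.5 s proposing time.
	fmt.Printf("estimated throughput ≈ %.0f blocks/s\n", estimatedBPS(19, 0.5))
}
```

Under these assumptions, doubling the number of nodes doubles the estimated block throughput, which is the linear relationship the figures are meant to demonstrate.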

Experiment on network latency


Experiment on network latency without Byzantine nodes, with a proposing time of 0.5 seconds and a network latency of 0.25 seconds

Experiment on BPS


Experiment on throughput without Byzantine nodes, with a proposing time of 0.5 seconds and a network latency of 0.25 seconds

Experiment on fail-stop with NACK

Experiment on fail-stop with NACK with Byzantine nodes, |N| = 19, a proposing time of 0.5 seconds and a network latency of 0.25 seconds. Six fail-stop nodes stop at the 15th second.


This demonstrates the influence of fail-stop nodes. The x-axis is time and the y-axis is the average number of blocks confirmed by each node, that is, the total number of blocks output by the total ordering algorithm divided by the number of correct nodes.

We set six fail-stop nodes among the 19 nodes, and the fail-stop nodes stopped at the 15th second. On average, the system could be expected to output two blocks per node per second. When the fail-stop happened, all cases produced output either through the NACK mechanism or through the early delivery condition. The κ = 0 case has an early delivery rate of almost zero, so it produced output only through the NACK mechanism. On the other hand, the κ = 1 and κ = 2 cases had high early delivery rates and maintained their output rates.
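The expected two-blocks-per-node-per-second figure quoted above can be sanity-checked from the experiment parameters. A minimal sketch, assuming each node proposes one block per proposing interval so that the total-ordering output divided by the number of nodes reduces to 1 / T_proposing:

```go
package main

import "fmt"

// blocksPerNodePerSecond: with each node proposing one block every
// tProposing seconds, the per-node confirmed-block rate is 1/tProposing.
func blocksPerNodePerSecond(tProposing float64) float64 {
	return 1.0 / tProposing
}

func main() {
	// Experiment parameter: 0.5 s proposing time.
	fmt.Printf("expected rate ≈ %.0f blocks/node/s\n", blocksPerNodePerSecond(0.5))
}
```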

Conclusion

The network latency and proposing time are the two main parameters that affect the latency of blocks.

The throughput is proportional to the number of nodes in the system and inversely proportional to the proposing time interval of each node, because the total ordering is non-blocking.

The simulation experiments confirm our theoretical estimation.