Introduction

In my previous article, Stress Testing the RaiBlocks Network, I reported that I was able to broadcast precomputed transactions at a sustained 33 transactions per second (TPS), peaking at 120 TPS, from a $5/mo Digital Ocean node. This time around I made the following improvements:

Tweaked my stress-test code to create more interesting patterns on rai.watch by precomputing both send and receive transactions

Broadcast from a more powerful computer

Built better TPS measurement tools
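To illustrate the kind of send/receive precomputation pattern described above, here is a minimal sketch (this is not my actual test code; the account names and the round-robin pairing scheme are assumptions for illustration):

```python
from itertools import cycle

def plan_transactions(accounts, count):
    """Round-robin pairing: each transaction is a (sender, receiver)
    pair, so every send can be matched with a precomputed receive."""
    rotation = cycle(accounts)
    plan = []
    for _ in range(count):
        sender = next(rotation)
        receiver = next(rotation)  # the next account in the rotation
        plan.append((sender, receiver))
    return plan

# e.g. the 20-account, 500-transaction test below
accounts = [f"xrb_account_{i}" for i in range(20)]
plan = plan_transactions(accounts, 500)
```

Pairing every send with a known receiver up front is what lets both halves of each transaction be signed and proof-of-worked before the broadcast starts.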

This article will be brief, primarily just reporting results. For a more in-depth article, please read part 1. In these tests, I report the TPS presented on RaiBlocks.Club as well as the peak TPS observed by a $40/mo (8GB, 4-Core) Digital Ocean Droplet; this Droplet is not broadcasting the initial transactions. These tests were performed on the RaiBlocks Main-Net in a real-world environment.
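A one-second peak TPS like the "Remote Peak TPS (1S Average)" figures below can be computed from block arrival timestamps with a sliding window. This is a sketch of the measurement idea, not my actual tooling:

```python
def peak_tps(timestamps, window=1.0):
    """Maximum number of transactions observed in any sliding
    `window`-second interval, given arrival timestamps in seconds."""
    timestamps = sorted(timestamps)
    best, start = 0, 0
    for end, t in enumerate(timestamps):
        # advance the window start until it spans at most `window` seconds
        while t - timestamps[start] > window:
            start += 1
        best = max(best, end - start + 1)
    return best

# e.g. 5 transactions, 4 of which arrive within one second
print(peak_tps([0.0, 0.1, 0.5, 0.9, 2.0]))  # -> 4
```

Note that a 1-second window will naturally report higher peaks than a 5-second average, which is why the remote peak figures exceed the RaiBlocks.Club figures.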

20 Account, 500 Transaction Test

Real-time capture of 500 transactions on rai.watch; final svg

In this initial test, a total of 500 transactions were sent between 20 accounts. This was done on a $40/mo (8GB, 4-Core) Digital Ocean Droplet.

The test results are as follows:

Number of Transactions: 500

Broadcast Length: 5.76 Seconds

Average Broadcast TPS: 86.8

Remote Peak TPS (1S Average): 263 TPS

RaiBlocks.Club Peak TPS (5S Average): 187.2 TPS

100 Account, 5000 Transaction Test

4x speed-up (effectively ~30% speed-up because of lag) capture of 5000 transactions on rai.watch; final svg

In this primary test, a total of 5000 transactions were sent between 100 accounts. This was done on a $160/mo (32GB, 8-Core) Digital Ocean Droplet.

The test results are as follows:

Number of Transactions: 5000

Broadcast Length: 47.28 Seconds

Average Broadcast TPS: 105.75 TPS

Remote Peak TPS (1S Average): 306 TPS

RaiBlocks.Club Peak TPS (5S Average): 172.8 TPS

Conclusions

In these experiments, we showed that the network was able to sustain 105.75 TPS, and some nodes experienced a peak of 306 TPS. During the testing period, transactions processed normally on the network; the network was not saturated. Now that the processing bottleneck for broadcasting has been removed, the next step is to submit transactions simultaneously from multiple nodes. A single node can broadcast faster than presented in this article, but that would require significantly more complex stress-test code.

Like posts like this? Support me at:

xrb_1y6fjssau9mhmnprwfxemfnahz759tx7qrdrfz7kbzd4jbkd4mgrurq7tfmf