Orax was the first mining pool created for PegNet. The software was built from scratch and tailored to the specific features of that network. This article presents the overall architecture of our system and some improvement plans for the future.

This article assumes a basic understanding of how PegNet mining works; if you are not up to date, you can read the official wiki.

Note that this article does not cover the security aspects of our architecture.

Architecture overview

The complete Orax system architecture

Here is an overview of the different components of the system:

orax-orchestrator: this is the central piece of the system; it builds Oracle Price Records (OPRs), broadcasts mining jobs to all the miners, collects their mining work and submits entries for a chance to earn PEG tokens. It was written in Golang for performance and concurrency handling reasons. We dedicate a section to the orchestrator further down this article.

orax-cli: this is the application run by miners. It connects to the orchestrator to receive mining jobs and send back its work. The bi-directional communication relies on a simple protocol we developed, based on FlatBuffers messages over WebSocket. The client is written in Golang (easier to integrate with the orchestrator, and it yields good performance). The client is also the entry point to register new accounts and miners, and it allows running performance benchmarks.

REST API: we have a classic Node.js REST API backend that handles requests from our website and some non-critical commands of the miner client. This is the gateway to our data stored in our database.

Website: our website serves public information and tools, as well as our users area where users can track their rewards, miners and payments. It is built on Vue.js + Vuetify 2.

Database: we opted for a MongoDB replica set.

Factom nodes: we maintain a fleet of Factom nodes connected to the decentralized Factom Protocol network (the underlying blockchain of PegNet). It is critical for the pool to be able to read and write data on the blockchain. Because distributed systems can be capricious, it is important for us to have access to multiple nodes for redundancy.

Monitoring and alerting: we use Prometheus to instrument the orchestrator and Grafana for visualization and alerting. We are very satisfied with this great open-source combo.

The orax-orchestrator, the backend API and the website are all fronted by Nginx proxies.

Zoom in on the Orax orchestrator

Orax orchestrator internals

Contrary to most PoW coins, PegNet miners don't compete to be the first to find a new block. They use their hashing power to "vote" for the oracle prices of different assets. The mining time is fixed by the block time of Factom, the underlying blockchain. As a consequence, the state of the Orax orchestrator changes based on blockchain events (new block, end of block…) and follows a sequence of predetermined steps (build OPR, send mining job, collect shares…). It was therefore natural to model the core of the application around a workflow component.
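The event-driven sequence above can be sketched as a small state machine. The state names and the event-to-state mapping below are simplified for illustration; the real orchestrator reacts to Factom block events with more granularity.

```go
package main

import "fmt"

// A minimal sketch of the orchestrator's block-driven workflow.
// States and events are illustrative, not the actual implementation.

type State int

const (
	Idle State = iota
	BuildingOPR
	Mining
	Submitting
)

func (s State) String() string {
	return [...]string{"Idle", "BuildingOPR", "Mining", "Submitting"}[s]
}

// Event is a blockchain-driven trigger (new block, end of mining window…).
type Event string

// step advances the workflow through its predetermined sequence of steps.
func step(s State, e Event) State {
	switch {
	case s == Idle && e == "new-block":
		return BuildingOPR // fetch prices, build the Oracle Price Record
	case s == BuildingOPR && e == "opr-ready":
		return Mining // broadcast the mining job to all connected miners
	case s == Mining && e == "end-of-mining":
		return Submitting // pick the best shares, write entries to Factom
	case s == Submitting && e == "entries-confirmed":
		return Idle
	}
	return s // ignore out-of-order events
}

func main() {
	s := Idle
	for _, e := range []Event{"new-block", "opr-ready", "end-of-mining", "entries-confirmed"} {
		s = step(s, e)
		fmt.Println(s)
	}
}
```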

The second critical component of the orchestrator is what we call the "share collector", which is constantly connected to all the pool miners over WebSockets. It is responsible for continually receiving and validating all the work sent back by every single miner. The verification involves computing the LXR hash of the data, which is a very slow operation and prompted us to spend extra time optimizing this code path. Golang goroutines were invaluable here to easily process shares concurrently.
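The goroutine-based validation path can be sketched as a worker pool draining a channel of shares. The real collector recomputes the expensive LXR hash of each share; sha256 stands in below so the example stays self-contained, and the `share` fields are hypothetical.

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
	"runtime"
	"sync"
)

// Sketch of concurrent share validation: one worker goroutine per CPU
// drains a shared channel and re-verifies each miner's claimed difficulty.
// sha256 is a stand-in for the (much slower) LXR hash.

type share struct {
	nonce   uint64
	claimed uint64 // difficulty claimed by the miner
}

// verify recomputes the hash and checks the miner's claim.
func verify(s share) bool {
	var buf [8]byte
	binary.BigEndian.PutUint64(buf[:], s.nonce)
	h := sha256.Sum256(buf[:])
	return binary.BigEndian.Uint64(h[:8]) >= s.claimed
}

// countValid fans shares out to a pool of goroutines and tallies the valid ones.
func countValid(shares []share) int {
	in := make(chan share)
	var wg sync.WaitGroup
	var mu sync.Mutex
	valid := 0
	for i := 0; i < runtime.NumCPU(); i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for s := range in {
				if verify(s) {
					mu.Lock()
					valid++
					mu.Unlock()
				}
			}
		}()
	}
	for _, s := range shares {
		in <- s
	}
	close(in)
	wg.Wait()
	return valid
}

func main() {
	// claimed 0 always passes; claimed max uint64 essentially never does.
	shares := []share{{nonce: 1, claimed: 0}, {nonce: 2, claimed: ^uint64(0)}}
	fmt.Println(countValid(shares))
}
```

Because each verification is CPU-bound and independent, this shape scales near-linearly with core count, which is why it pairs well with vertical scaling on a bigger server.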

Scaling the orchestrator horizontally and geographically

At our present scale the current design is working very well, and we could scale vertically (migrate to a server with more CPUs) to accommodate significant growth. Yet we are already planning the next iteration of our design: we want to extract the logical "share collector" component into an independent micro-service that can be deployed in multiple regions of the world.

Multi share collector architecture

That refactoring would kill two birds with one stone:

Geographical proximity: the Orax pool has miners literally all around the world, and many of them are individuals mining from home. That means their home connection to our server, located in eastern Canada, can be slow and unreliable. By placing share collectors in strategic regions of the world, we can get closer to our miners and guarantee better service. The share collectors then leverage higher-quality networking to communicate with the central orchestrator.

Horizontal scaling and redundancy: this micro-service architecture enables horizontal scaling, as we can spin up as many share collectors as we wish. Share collectors communicate with our miners and are the bottleneck when taking on more load. By also adding a fail-over mechanism (not described here), we would be able to increase the overall resiliency of our system by redirecting miners to another share collector in case one of them fails.
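One way to picture the fail-over idea: a miner selects the lowest-latency collector that is currently healthy, and re-runs the selection if its collector goes down. The regions, latencies and selection rule below are made up for illustration; the article deliberately leaves the real mechanism undescribed.

```go
package main

import "fmt"

// Hedged sketch of collector selection with fail-over. All regions
// and numbers are hypothetical, not Orax's actual deployment.

type collector struct {
	region    string
	latencyMs int
	healthy   bool
}

// pick returns the lowest-latency healthy collector, or "" if none are up.
func pick(cs []collector) string {
	best, bestLat := "", int(^uint(0)>>1)
	for _, c := range cs {
		if c.healthy && c.latencyMs < bestLat {
			best, bestLat = c.region, c.latencyMs
		}
	}
	return best
}

func main() {
	cs := []collector{
		{"ca-east", 120, true},
		{"eu-west", 35, true},
		{"ap-south", 80, true},
	}
	fmt.Println(pick(cs)) // eu-west

	cs[1].healthy = false // eu-west fails; the miner redirects
	fmt.Println(pick(cs)) // ap-south
}
```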

The main drawbacks of this new design are the increased overall complexity of the system and the higher infrastructure cost.

Conclusion

Designing and implementing the whole Orax stack has been an extremely rich and exciting experience. We have laid out a strong foundation and have anticipated future growth in our current design. To date we have gained the trust of hundreds of miners around the world who regularly praise how user friendly the whole mining experience with Orax is. We will continue to strive to provide the best mining service to all PegNet miners.