iExec V3 is a major breakthrough for iExec, marked by significant advancements in technical features and new services for doing business. But with these major upgrades came more stress for the original ‘legacy’ middleware supporting the iExec ecosystem. After our first public infrastructure tests in 2018, the decision was made to rebuild the middleware, which was no easy task. Read on for the full story on why this middleware upgrade was so important and how we achieved this major swap.

In case you missed it, read our post about what is to come for the next version, iExec V3:

Recap

As you may know, our product allows the trading of cloud computing resources such as applications, datasets and computing power to meet requests on the decentralized iExec Marketplace. This post focuses on the middleware, which is tightly related to the computation process. This process is powered by off-chain computation, while trust, correctness and auditability are guaranteed by the blockchain Proof-of-Contribution consensus protocol (‘PoCo’ for iExec lovers).

In general, the off-chain computation is divided into multiple worker pools, each of which is composed of two components:

- The Workers: the machines in the pool, waiting for task requests coming from the blockchain (hungry to earn RLC tokens!)
- The Scheduler: manages the tasks in the worker pool in exchange for RLC tokens

Thank you XtremWeb!

The off-chain computing part of iExec was originally made possible thanks to the XtremWeb middleware. XtremWeb is an open-source platform for Desktop Grid Computing released in 2005 by the current iExec CEO (Gilles Fedak) and CTO (Oleg Lodygensky) while they were working as researchers at INRIA and CNRS.

The original idea was to offer a platform allowing the voluntary sharing of computers with the aim of solving research problems.

http://xtremweb.gforge.inria.fr/introduction.html

Since 2013, Gilles, Oleg and Haiwu have studied the monetization of volunteer computing resources aggregated by XtremWeb, in a project funded by the French National Research Agency (http://www.agence-nationale-recherche.fr/en/anr-funded-project/?tx_lwmsuivibilan_pi2%5BCODE%5D=ANR-12-EMMA-0038). For several reasons, it appeared to be unrealistic at that stage, until they discovered the existence and advantages of cryptocurrencies.

And so, iExec was born! Since then, V1 and V2 iExec releases have been based on this middleware. In these two versions, the original XtremWeb was extended in order to deal with task requests triggered by the blockchain and achieve crypto payments securely. In other words, we adapted it by adding Web3 connectors which allowed it to interact with the Ethereum blockchain.

XtremWeb: Imagine an old car with super tires, a spoiler and nitro engine

Despite being well suited for distributed computation, XtremWeb was not blockchain-friendly by design. This created new and complex edge cases whenever new features were introduced to the iExec stack (such as Proof-of-Contribution or order management) and interfered with existing ones. All of this led to rapidly increasing technical debt.

iExec Core is born!

For these reasons, in mid-2018, the decision was made to rebuild the middleware from scratch. The key was to keep the essence of the original workflow, which had been running like a charm for many years in the volunteer-computing era, while rebuilding the iExec-specific features within a new middleware: ‘iExec Core’. This included a new architecture, a standard protocol, a dedicated task workflow, logic separated into services and components, as well as a clean DevOps pipeline.

iExec Core: meet the brand new and improved middleware!

(Credit to Maxxar on Slack for the Blender rendering powered by iExec)

Features

Similarities with the old middleware?

Considering the wide spectrum of languages and applications in the software world, it was key to choose a generic computation abstraction for apps, for two reasons. On one hand, developers willing to monetize their app need a standard for packaging it (think of .apk files, the ‘Android Package Kit’, on the Android platform); on the other hand, the app needs to be compatible with many workers running on different architectures and operating systems. At that point, we had our best candidate: Docker containers.
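To illustrate why Docker containers make a good packaging standard, here is a minimal, purely illustrative Dockerfile (the file names are hypothetical): once an app is described this way, any worker with a Docker engine can run it, regardless of the host OS or distribution.

```dockerfile
# Minimal illustrative Dockerfile -- file names are hypothetical.
# The same image runs identically on any worker with a Docker engine.
FROM python:3-slim
COPY app.py /app.py
ENTRYPOINT ["python", "/app.py"]
```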

An additional requirement is a way of executing tasks on the iExec platform. One way is to directly target Worker resources by sending them incoming tasks — this is called Push Mode. The other way is to give a task (if there is one) to a Worker when it asks for one — this is Pull Mode. Past research and experience from the team have proved that the latter, ‘Pull Mode’, is the most efficient, mainly because it makes it possible to bypass firewalls and NATs, and it lets each worker decide when and how it wants to work.
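The Pull Mode described above can be sketched in a few lines of Java. This is a toy in-memory model, not the actual iExec Core API: the class and method names are illustrative assumptions. The point is that the worker initiates every exchange, so no inbound connection to the worker is ever needed.

```java
import java.util.ArrayDeque;
import java.util.Optional;
import java.util.Queue;

// Toy sketch of 'Pull Mode': the scheduler never contacts workers directly;
// each worker asks for a task when it decides it is ready.
// Names are illustrative, not the real iExec Core interfaces.
class PullScheduler {
    private final Queue<String> pendingTasks = new ArrayDeque<>();

    // Tasks coming from the blockchain are queued on the scheduler side.
    synchronized void submit(String taskId) {
        pendingTasks.add(taskId);
    }

    // A worker calls this when it is ready to work: it receives a task
    // if one is pending, otherwise nothing.
    synchronized Optional<String> pull() {
        return Optional.ofNullable(pendingTasks.poll());
    }
}
```

Because the worker always opens the connection, the pattern works behind firewalls and NATs without any port forwarding.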

Finally, another strong requirement, inherited from our experience with the previous middleware, is not to require any privileges or network configuration tweaks on the machines of worker pool managers and workers. We want to keep your computer clean!

Worker pool infrastructure

What does the worker pool organization with the new iExec Core middleware look like?

So what’s new?

Protocol

Whether you are a Worker or a web front-end, it is important to be able to interact with the iExec Core Scheduler easily. The chosen way to talk to the Scheduler is through its REST API, accessible over HTTP. Any call results in a JSON response. Using the HTTP+JSON standard makes the Scheduler cross-client compatible, meaning you could build your own Worker in any language you want.

In addition to HTTP requests from Workers to the Scheduler, a publish/subscribe mechanism is now implemented so that each Worker can subscribe to the topic of a task it is involved in. This publish/subscribe feature speeds up task completion by sparing the Worker from constantly polling for task changes and avoiding overload on the Scheduler. Workers still pull the Scheduler for new tasks, but the Scheduler can also send them requests for specific actions, such as aborting a task or uploading a result. This accelerates the whole process.
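The per-task topic idea can be sketched as a tiny in-process bus. This is only a conceptual model (in the real middleware a proper messaging layer plays this role, and all names below are assumptions): a worker subscribes to the topic of its task, and only subscribers of that topic receive the Scheduler's updates.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.function.Consumer;

// Toy publish/subscribe bus keyed by task topic.
// Class and method names are illustrative only.
class TaskTopicBus {
    private final Map<String, List<Consumer<String>>> subscribers = new ConcurrentHashMap<>();

    // A worker subscribes to the topic of a task it contributes to.
    void subscribe(String taskId, Consumer<String> onMessage) {
        subscribers.computeIfAbsent(taskId, k -> new CopyOnWriteArrayList<>()).add(onMessage);
    }

    // The scheduler publishes an update; only workers involved in that
    // task are notified -- no polling needed.
    void publish(String taskId, String message) {
        subscribers.getOrDefault(taskId, List.of()).forEach(c -> c.accept(message));
    }
}
```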

Consistency

Handling network disconnections and Ethereum node syncing issues is a very difficult task. Network disconnections are monitored inside the iExec Core protocol with dedicated statuses, and dedicated ‘blockchain’ detectors prevent the middleware from getting stuck in edge cases caused by Ethereum node syncing issues. In the same way, state-validator components assert data consistency between on-chain and off-chain states for tasks, replications and workers.

Crypto & Blockchain

Access controls on the middleware are now fully compliant with Ethereum identities. The only way to interact with the Scheduler is by solving a crypto challenge based on signatures and Ethereum key pairs. This enables any worker pool manager to filter requests and protect the worker pool from malicious external ones. We also solved Ethereum transaction issues to ensure all transactions are properly processed by the middleware. Like several other actors in the community, we had nonce issues with Ethereum transactions sent by different threads in the previous middleware. The iExec Core Scheduler and Worker now have a simple blockchain transaction manager, making it impossible to send two transactions with the same nonce and guaranteeing that transactions are properly executed.
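The core of such a transaction manager can be sketched very simply: nonces are reserved atomically, so concurrent threads can never pick the same one. This is a minimal sketch under that assumption; the real iExec Core manager does more (tracking pending and confirmed transactions), and the class name is hypothetical.

```java
import java.util.concurrent.atomic.AtomicLong;

// Minimal sketch of a per-account nonce manager: threads reserve nonces
// atomically, so no two transactions can ever share one.
// Illustrative only; the real manager also tracks transaction status.
class NonceManager {
    private final AtomicLong nextNonce;

    // The starting nonce would typically be fetched once from the
    // Ethereum node for the account's address.
    NonceManager(long startingNonce) {
        this.nextNonce = new AtomicLong(startingNonce);
    }

    // Each caller gets a unique, strictly increasing nonce.
    long reserveNonce() {
        return nextNonce.getAndIncrement();
    }
}
```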

The new middleware also brings improvements related to ETH gas costs. Each smart-contract function is now called with a dedicated gas limit, instead of one high ‘default’ gas value that wastes allowance on functions requiring only small amounts of gas and can get your transaction rejected. This makes it possible to choose between transaction speed and price.
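Per-function gas limits boil down to a simple lookup with a fallback. The sketch below is illustrative only: the function names echo PoCo operations, but the gas values and the class name are made-up assumptions, not the limits actually configured in iExec Core.

```java
import java.util.Map;

// Sketch of per-function gas limits with a default fallback.
// Values are invented for illustration, not the real configured limits.
class GasPolicy {
    private static final long DEFAULT_GAS_LIMIT = 500_000L;
    private static final Map<String, Long> LIMITS = Map.of(
        "contribute", 150_000L,
        "reveal",      80_000L,
        "finalize",   200_000L);

    // Known functions get a tight, dedicated limit; anything else
    // falls back to the default.
    static long gasLimitFor(String function) {
        return LIMITS.getOrDefault(function, DEFAULT_GAS_LIMIT);
    }
}
```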

Deployment & Management

Deploying a Worker or a Scheduler is now much easier, using lightweight Docker images. By default, you can now use a ‘docker run’ one-liner for the Worker, and a convenient Docker Compose setup to run a Scheduler together with its attached MongoDB and a Grafana instance for monitoring.
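The Scheduler-side Compose setup could look roughly like the sketch below. This is purely illustrative: the service layout and the `iexechub/iexec-core` image name are assumptions, not the exact published configuration, so refer to the repositories for the real files.

```yaml
# Illustrative sketch only -- service and image names are assumptions.
version: "3"
services:
  scheduler:
    image: iexechub/iexec-core   # hypothetical image name
    depends_on: [mongo]
  mongo:
    image: mongo                 # the Scheduler's attached database
  grafana:
    image: grafana/grafana       # monitoring dashboard
```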

Many changes have been made to the iExec stack smart contracts (Proof-of-Contribution, iExecHub, iExecClerk, etc.), enabling more flexibility for those monetizing their dataset, app or worker pool. The iExec Core Scheduler is fully compatible with them, and its API offers live metrics of the worker pool for order management. These metrics are ready to be consumed by any kind of order management tool you want to build (e.g. automatic scripts) for publishing or canceling workerpool orders according to the activity of live workers.

Development process changes

Along with the changes made in development, this dev letter introduces the tools that help us develop better and faster every day. Most of them are simply good practices or simple tools that are standard across the industry and might suit some of your own business cases.

Frameworks

The previous middleware had few dependencies on frameworks. It was very stable, but it implied a lot of maintenance for code that is now available in popular frameworks. Here is a small list of the main ones we use: Spring (https://github.com/spring-projects/spring-framework) and Spring Boot (https://github.com/spring-projects/spring-boot) help us build the necessary components. Querying an Ethereum node from our Java components is done with the excellent Web3J library (https://github.com/web3j/web3j). This abstraction helps us a lot in producing clean and simple code. On another note, Lombok (https://github.com/rzwitserloot/lombok) helps us reduce boilerplate code, along with libraries such as the Docker client for Java (https://github.com/spotify/docker-client) and multiformats and IPFS libraries.

Database

Rather than MySQL, used in the previous middleware for the database layer, we decided to switch to MongoDB. It is scalable by design (and can be natively enhanced with shards), and it integrates perfectly with Java and Spring.

Continuous Integration and Delivery Pipeline

A continuous integration and delivery pipeline to ensure the software is properly tested and released is a key point of the development process. Defining and implementing such a pipeline is also one of the top priorities for iExec V3.

Gradle is used to build and unit-test our different modules at each commit. Every successful build on the master branch generates a Docker image. This Docker image is then tested against a set of multiple scenarios. This set of integration tests is performed daily, where all ‘standard’ failure cases are checked. Each night, a workerdrop, similar to the previous iExec workerdrop infrastructure tests, is automatically triggered to simulate real-world scenarios where many workers are involved in incoming tasks. These tests ensure there is no regression when adding new features to the code. Once a Docker image has passed all those steps, we are pretty confident about it and it can be deployed on public testnets.

Private chains

Custom tools are needed to test iExec Core at a higher level than simple unit tests. To do so, an Ethereum private chain is built for each commit made to the PoCo consensus algorithm. Those chains — running over a PoA consensus — are built with different block-time periods, making sure any potential delay in the blockchain wouldn’t impact the middleware. These same Ethereum private chains are used to perform the integration tests described above.

Coding practices

Following standard industry practice, all code is reviewed by at least one other member of the team. This prevents a lot of issues before any merge and helps deliver smarter, cleaner code. Linters, such as SonarQube, are used in our IDEs to catch minor ‘code smells’ before committing. Performed automatically, these daily reviews detect potential bugs and security vulnerabilities.

Our GitHub repositories for iExec V3 have been made public. We use GitHub features to keep track of all pending issues and allow anyone in the community to raise an issue they have found or leave suggestions for improvements.

Keep an eye on the progress and contribute!

Please stay tuned for the next big features of the iExec Core until V3, such as IPFS support for developers & requesters and a full Trusted Execution Environment (TEE) workflow for dataset providers. Feel free to contribute! Open an issue on GitHub, we’re always looking to reward those who can contribute with feature requests, potential application ideas or even simple feedback!

Links

The main Github repositories for the iExec Core are:

Read the docs for iExec: