The 3DES team’s task is to optimize the processing of a large number of models using minimal resources. To this end, we developed a decentralized computing-capacity exchange for slicing tasks. While developing the system, we assumed that the exchange could be scaled to other workloads as well, beyond 3D models alone. This assumption was confirmed when we developed and tested the MVP version of the product.

Details about the MVP development:

The first product version is a centralized service for processing 3D models and preparing them for 3D printing. You can see it in action here: https://mvp.3des.network

Although the database and the controlling module are fully centralized, we also wanted to test decentralized computing capacity. That is why the worker module, which is responsible for completing the actual tasks, had to meet the following requirements:

Maximum utilization of the available computing capacity with minimal overhead.

Fast and easy delivery of the software code to the worker.

A universal solution: we must be able to launch the worker on different operating systems and different hardware.

A simple control interface.

We researched how to achieve this and identified three main options for solving the problem:

Development and distribution of installation scripts

Packing the images of virtual machines

Containerization

Installation scripts are the classic solution to this problem. We rejected this option right away because it is the most difficult and unstable one, with frequent problems arising during installation or updates of the software product.

Virtual machines create an isolated environment that avoids the installation problems, but they incur significant performance losses or require expensive solutions, which eliminates any commercial interest in the project. Moreover, virtual machine images are very large, which makes delivering the code to the worker slower.

That is how we arrived at containerization. What makes this approach unique is that virtualization happens at the level of processes and services, not at the level of a whole operating system. It creates a fully isolated environment for running the software product, while the work itself is executed by the host’s kernel. This approach minimizes virtualization overhead.

Several solutions currently exist in this area, but the most popular and stable is Docker. Our team chose it as our containerization tool.

Docker

You can find detailed information about Docker online: https://ru.wikipedia.org/wiki/Docker or on the official website: https://docs.docker.com/engine/docker-overview/.

Here are the reasons why it fits our goals:

Performance.

Several studies have measured Docker’s performance. Here are some examples:

http://domino.research.ibm.com/library/cyberdig.nsf/papers/0929052195DD819C85257D2300681E7B/$File/rc25482.pdf

https://www.percona.com/blog/2016/02/11/measuring-docker-io-overhead/

https://www.percona.com/blog/2016/02/05/measuring-docker-cpu-network-overhead/

They show that:

There are no measurable losses when reading or writing data;

CPU overhead is insignificant;

Network interface overhead is also insignificant under certain configurations, depending on the task.

Docker’s design also allows containers to be scaled in any way across one or several hosts. All of these factors allow us to say that this technology helps us achieve maximum computing capacity with minimal overhead.
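To illustrate the idea of scaling identical workers (this is a toy simulation, not our production code — the “containers” here are threads, and the slicing work is a stand-in):

```python
import queue
import threading

def run_worker(tasks: "queue.Queue[int]", results: list, lock: threading.Lock) -> None:
    """One simulated worker draining a shared task queue until it is empty."""
    while True:
        try:
            model_id = tasks.get_nowait()
        except queue.Empty:
            return
        sliced = f"model-{model_id}.gcode"  # stand-in for the real slicing work
        with lock:
            results.append(sliced)

def scale_out(n_workers: int, n_tasks: int) -> list:
    """Launch n_workers identical workers; adding workers drains the queue faster."""
    tasks: "queue.Queue[int]" = queue.Queue()
    for i in range(n_tasks):
        tasks.put(i)
    results: list = []
    lock = threading.Lock()
    workers = [
        threading.Thread(target=run_worker, args=(tasks, results, lock))
        for _ in range(n_workers)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return results

print(len(scale_out(4, 100)))  # all 100 tasks complete regardless of worker count
```

The key property is that the workers are interchangeable: the same image can be started in one copy or in dozens, on one host or many, with no change to the code.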

Images and registries.

The technologies Docker is built on allow images to be composed of «layers». Each layer is a snapshot of the system state after a change. Services like Docker Hub let us distribute our images quickly and easily, publicly or within a private group. The images themselves are lightweight compared to virtual machine images, and for a software update the workers only have to download the changed layers, not the whole image. At the moment this is the fastest and easiest way to deliver a software product to the user or, as in our case, to the worker.
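The saving on updates can be shown with a toy model of layered images (the digests and layer contents below are made up for illustration; this is not the real registry protocol): a client fetches only the layers whose digests it does not already hold.

```python
import hashlib

def layer_digest(content: str) -> str:
    """A content-addressed layer ID, as in Docker's registry model."""
    return hashlib.sha256(content.encode()).hexdigest()[:12]

def layers_to_pull(image_layers: list, local_cache: set) -> list:
    """Return only the layers missing from the local cache."""
    return [d for d in image_layers if d not in local_cache]

# Version 1 of the image: base OS, runtime, application code.
v1 = [layer_digest(c) for c in ("debian:slim", "python-runtime", "worker-code-v1")]
cache = set(v1)  # the worker already pulled version 1

# Version 2 changes only the application layer.
v2 = [layer_digest(c) for c in ("debian:slim", "python-runtime", "worker-code-v2")]
print(layers_to_pull(v2, cache))  # only the single changed layer is downloaded
```

Because layers are addressed by the hash of their content, unchanged layers always resolve to the same digest and are never re-downloaded.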

Cross-platform support.

Initially, Docker ran only on Linux, which was considered a limitation. Over the last few years of active development, however, it has become widespread among Windows and macOS users as well. All of these platforms are now actively supported by the Docker developers.

Interface.

Docker is very popular and has strong community support. There are many documents, articles, and tools for working with it. The system can be controlled from the console, via web interfaces, or with desktop applications. This wide range of choices allows us to say that we cover almost all of the workers’ needs.

Scaling prospects.

At the moment the worker module is built on the following linked technologies:

1. Docker

2. Celery

3. Slic3r

plus an implementation of the central module’s API.

The worker’s container can be downloaded and launched on any computer, in several copies if desired. All that is needed is to enter the authorization data for the system and a small set of configuration values.
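A minimal sketch of how such a worker could read its authorization data and configuration at startup (the variable names and the default URL here are illustrative assumptions, not the actual 3DES container interface):

```python
from dataclasses import dataclass

@dataclass
class WorkerConfig:
    """Settings a single worker copy needs in order to join the exchange."""
    api_url: str
    auth_token: str
    concurrency: int

def load_config(env: dict) -> WorkerConfig:
    """Read configuration from environment variables, as is typical for
    containers started with `docker run -e KEY=value ...`.
    All key names below are hypothetical."""
    token = env.get("WORKER_AUTH_TOKEN")
    if not token:
        raise ValueError("WORKER_AUTH_TOKEN is required to authorize the worker")
    return WorkerConfig(
        api_url=env.get("CENTRAL_API_URL", "https://mvp.3des.network/api"),
        auth_token=token,
        concurrency=int(env.get("WORKER_CONCURRENCY", "1")),
    )

cfg = load_config({"WORKER_AUTH_TOKEN": "secret", "WORKER_CONCURRENCY": "4"})
print(cfg.concurrency)  # 4
```

Keeping all configuration in environment variables means the same image can be launched in several copies on one machine, each with its own credentials and concurrency level.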

Given the platform’s capabilities, several tasks other than slicing could be built on this container: video processing, internet traffic exchange between the nodes of a private network, sending requests to a server, or executing other arbitrary software code. If a system for decentralized dynamic DNS management were developed, decentralized servers hosted on 3DES workers’ machines could replace classic solutions such as Amazon AWS or Digital Ocean. Such solutions lower costs and protect against the problems of centralization, such as regulators banning servers’ IP addresses.

Based on the research described above, we decided that the 3DES project should be scaled up. We will shortly be looking for partners to scale 3DES from a 3D-model processing service into a universal system for decentralized processing of arbitrary data. In other words, we are talking about a full version of a decentralized data center.

We are currently negotiating with international developer groups to expand the team for the project. We will keep you updated on our progress. Stay tuned!

Best regards,

3DES team