Every successful product goes through the same standard stages: it starts with the development and launch of the first version, then comes a testing stage, where the product is tried out on real customers, and after testing the product moves to the mass adoption stage.

SONM follows the same path. Initially, we developed the first version of the product and launched it at the end of July 2018. Then we tested the platform on real customers and got the first metrics that look very optimistic:

SUPPLY

6,000 enterprise servers;

5,000 GPUs;

34 countries.

DEMAND

117 unique customers;

440,000 computing hours.

You can find more information in the “SONM Platform — a half year checkpoint” blog post.

The first 100+ customers provided us with helpful feedback, which significantly influenced our plans for the further development of the platform. We clearly identified which components of the platform needed to be improved and which new components need to be developed to accelerate mass adoption.

Now, the testing period is officially completed, and the next line of action is the mass adoption & monetization stage.

In this publication, we would like to focus on the 3 most important points:

Short-term plans (precisely what we are going to develop in 2019, and why);

Long-term vision (how we are going to revolutionize the platform in the next 3 years);

A teaser of our monetization model.

But before moving on to the major points of SONM’s future, we need to clarify the main idea behind the platform.

What is the SONM platform?

As you all know, SONM is building an Uber-like platform that works much like other spot markets: people and firms with excess computing capacity can put it up for sale, and those with a need for that capacity can bid on it.

So, like many other companies utilizing the sharing economy model, we are going to build a two-sided platform:

The distributed infrastructure of consumer devices

For millions of device owners across the globe, this side of the SONM platform makes it possible to establish a passive income stream. Computer owners list available resources to a distributed network, turning them into computing nodes, and get paid for performing useful computations. This is similar to the Uber app for drivers, which allows both private drivers and taxi companies to join the platform.

Software tools on top of the distributed infrastructure

This side of the platform helps customers rent the most relevant resources and run computations automatically. This is very similar to the Uber app for riders, which finds the most relevant driver for your request.

Building such a marketplace is like building two products simultaneously, each one dependent on the other. Therefore, we have divided all the development plans described below into two sides: the side of suppliers and the side of consumers.

We will carefully balance features and offers on each side and constantly experiment to get the balance just right and find a path to growth.

1. Short-term Perspective (2019)

All services described in this section are currently under development. Some will be released in a few months, while others will arrive at the end of 2019. In the coming weeks, we are going to publish a series of posts telling more about each service, including estimates of development time.

1.1 Customer Side: turning Crypto-IaaS into a full-featured Cloud platform for startups and individuals

In the first half of 2019, we will introduce the first version of a true GPU cloud platform, designed for building, training and deploying machine learning models.

It is aimed at data scientists, novice AI specialists and nontechnical professionals who don’t have access to the expensive computing infrastructure necessary for machine learning. The idea is to provide them with an easy-to-use set of tools for exploring data, training neural networks, and running GPU compute tasks that can be paid for with a bank card.

This set of tools is built on top of the SONM infrastructure as an abstraction layer that hides the complexity involved in provisioning and managing containers for machine learning purposes.

This service will allow researchers to launch a GPU-backed Jupyter notebook and quickly start training a model using either a base algorithm from TensorFlow or another pre-configured popular framework, while also simplifying model training for non-professionals.

Another important point: today, customers can rent computing resources on the platform only with SNM tokens (Crypto-IaaS). This is convenient for crypto enthusiasts, but many companies are not ready to deal with tokens, so we plan to provide customers with an option to pay with fiat money.

A bit later, we will launch a similar ‘plug & play’ service for video rendering, designed to help freelancers and small studios render scenes in just a few clicks. The service is aimed at customers who want to reduce rendering costs or who need additional computing resources at moments when rendering farms are busy and can’t perform the required task.

For any other purposes, this service will provide a way to run custom containers on the required devices, with specified network bandwidth, in the country of the customer’s choice.



1.2 Supplier Side: securing the computations on consumer devices to get enterprise clients

The next stage of SONM platform development will include building a fully-managed enterprise GPU cloud platform with a TEE (trusted execution environment) for securing computations on consumer devices.

Data processing on any third-party infrastructure, whether Cloud, Edge or SONM, can have privacy, security, and legal implications. This is especially true for sensitive data that private companies cannot disclose for reasons of commercial interest, or that the public sector cannot disclose for reasons of security and privacy.

To deal with the data security issue and help enterprises adopt the SONM platform, we are developing the next version of SONM OS, designed not only to help suppliers deploy a worker but also to prevent the device owner from decrypting or copying customer data.

After installing the SONM OS, the supplier loses root access to the device (i.e., any console or remote access) and can’t monitor customer data or change the computation results.

SONM OS will require an explicit installation mode to modify the typical Linux distribution and revoke any possible access to the console from the machine owner.

During the entire computation process, SONM OS will perform checks aimed at device attestation: ensuring that the OS installation is genuine and unmodified, and that unauthorized access to the GPU, CPU, RAM or disks is denied.

Built-in integrity checks establish and maintain the root of trust (e.g., detect unauthorized changes to the operating system, prevent memory reading with memory curtaining, and so on).

Dynamic checks establish a secure remote session with the SONM command server and get instructions on how to prove the host trustworthy. This process is performed at the time of initial machine attestation and repeated at random during computation, thereby ensuring the machine remains untampered with.
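The two check types above can be sketched roughly as follows. Everything here — the image hash, the challenge-response scheme, and all function names — is an illustrative assumption, not the actual SONM OS protocol:

```python
import hashlib
import random

# Known-good hash of the SONM OS image (illustrative value).
EXPECTED_OS_HASH = hashlib.sha256(b"sonm-os-image-v2").hexdigest()

def integrity_check(os_image: bytes) -> bool:
    """Static root-of-trust check: the installed image must match the
    known-good hash, detecting unauthorized modifications."""
    return hashlib.sha256(os_image).hexdigest() == EXPECTED_OS_HASH

def dynamic_check(respond_to_challenge) -> bool:
    """Dynamic check: the command server sends a random challenge and
    verifies the host can produce the expected response."""
    nonce = random.getrandbits(64)
    expected = hashlib.sha256(f"{nonce}:{EXPECTED_OS_HASH}".encode()).hexdigest()
    return respond_to_challenge(nonce) == expected

def attest_host(os_image: bytes, respond_to_challenge) -> str:
    """A host passing both checks is marked trusted (increased reward);
    otherwise it falls back to the regular layer."""
    if integrity_check(os_image) and dynamic_check(respond_to_challenge):
        return "trusted"
    return "untrusted"

# A genuine host derives its response from its real, unmodified image:
def honest_responder(nonce):
    my_hash = hashlib.sha256(b"sonm-os-image-v2").hexdigest()
    return hashlib.sha256(f"{nonce}:{my_hash}".encode()).hexdigest()

print(attest_host(b"sonm-os-image-v2", honest_responder))  # trusted
print(attest_host(b"tampered-image", honest_responder))    # untrusted
```

Random repetition of the dynamic check during computation simply means calling `dynamic_check` again at unpredictable intervals.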

All suppliers who install the SONM OS in TEE mode and pass all checks are marked as working in trusted mode and get an increased reward. Customers can choose computations on regular nodes at a base price, or pay extra for computations on the trusted layer.

Most of the team has been working on TEE since the beginning of 2019, and we have made significant progress in this area, but there is still a lot of work ahead to make a large project like this happen.

We are getting ready to release a technical paper covering the TEE design and the feasibility of the proposed approach. We hope to share it with our technical community within a month.

2. Long-term perspective (next 2-3 years)

Let’s take a deeper look at what will drive both customer and supplier sides of the SONM platform in the coming years.

2.1 Customer side

We see the further development of the customer side as creating a next-gen cloud computing service, focused on lowering the price of GPU-related computations and optimizing the transmission of “raw” data.

Having hundreds of thousands of computing devices everywhere, SONM can dynamically change the data processing location and distribute the computations on nodes located nearby; e.g., within the local network of the same Internet provider as the data source.

You can think of it as a traffic orchestration for scaling, cost-optimization, optimizing location to better serve the demand, or reducing latency and bandwidth strain for high-traffic tech like IoT/AR/VR, which generate “heavy” data (non-entertainment images, raw video streams from sensors and cameras, embedded data, and so on).

Orchestration tool for managing computations on clusters of unreliable consumer devices

Having launched the initial version of the Resource Allocation System (RAS) – middleware for intelligent management of equipment connected to the platform – we are now entering the next development stage, aimed at turning this simple infrastructure management tool into a planet-scale orchestration system.

As mentioned in our previous update, RAS includes a set of components that help customers automatically rent the required number of instances and run computations.

In other words, today SONM customers can easily rent computing resources in just a few clicks but have to manually deploy, manage and monitor a pool of running containers on rented devices.

If the customer has ten containers and one application, it will not be that difficult to manage the deployment and maintenance. If, on the other hand, the customer has 1,000 containers and 400 services, management gets much more complicated.

Moreover, each device connected to the SONM platform is controlled by a person or organization whose actions can’t be predicted or fully controlled by the platform. As a result, no single node can be considered reliable, since it may shut down, reboot or lose its state at the moment of data processing.

SONM plans to overcome the issues described above with a new release of the Resource Allocation System that includes an additional infrastructure management module: the container orchestration system. This is a special tool that automates the deployment, management, scaling, networking, and availability of containers.

How does container orchestration work? When you use a container orchestration tool, you typically describe in a configuration file where to gather container images, how to establish networking between containers, how to mount storage volumes, and where to store logs for each container. Containers are deployed onto hosts, usually in replicated groups. When it’s time to deploy a new container into a cluster, the orchestration tool schedules the deployment and looks for the most appropriate host to place the container based on predefined constraints (for example, GPU or memory availability). Once the container is running on the host, the orchestration tool manages its lifecycle according to the specifications you laid out in the container’s definition file. (Source: https://blog.newrelic.com/engineering/container-orchestration-explained/)

It also means you don’t need hardware fault-tolerance: the orchestration system automatically runs a copy of the container on a new node if one of the performing nodes fails for any reason.
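A toy sketch of the scheduling step just described — choosing the most appropriate host under GPU/memory constraints and rescheduling on failure. All types, fields, and figures are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    free_gpus: int
    free_ram_gb: int
    alive: bool = True

@dataclass
class Container:
    name: str
    gpus: int
    ram_gb: int

def schedule(container, hosts):
    """Return the name of the best live host satisfying the container's
    constraints, preferring the host with the most free resources."""
    candidates = [h for h in hosts
                  if h.alive
                  and h.free_gpus >= container.gpus
                  and h.free_ram_gb >= container.ram_gb]
    if not candidates:
        return None
    best = max(candidates, key=lambda h: (h.free_gpus, h.free_ram_gb))
    best.free_gpus -= container.gpus      # reserve the resources
    best.free_ram_gb -= container.ram_gb
    return best.name

hosts = [Host("node-a", free_gpus=4, free_ram_gb=64),
         Host("node-b", free_gpus=2, free_ram_gb=32)]
job = Container("train-job", gpus=2, ram_gb=32)

placed_on = schedule(job, hosts)       # "node-a" (most free GPUs)

# If the chosen node fails, the orchestrator reschedules the same
# container elsewhere -- the fault-tolerance property quoted above:
next(h for h in hosts if h.name == placed_on).alive = False
rescheduled_on = schedule(job, hosts)  # "node-b"
```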

In the last couple of years, the tech industry seems to have adopted Kubernetes as the de facto standard for container orchestration. It’s the flagship project of the Cloud Native Computing Foundation, which is backed by such key players as Google, Amazon Web Services (AWS), Microsoft, IBM, Intel, Cisco, and RedHat.

However, Kubernetes, as well as every other orchestration software, is mainly designed to operate in high-performance Local Area Networks of traditional data centers and is poorly adapted for geographically distributed infrastructure.

Therefore, we are going to adapt the Kubernetes orchestration core to operate in a distributed environment and teach it to select compute nodes while taking into account possible network constraints or noticeable delays between nodes.

We plan to be the first to implement orchestration with the intelligent allocation of resources on distributed nodes. There are similarities to Google Kubernetes Engine in some aspects, but with additional options designed to deal with the infrastructure of unreliable consumer devices.

Because of the underlying technology (Kubernetes + Docker), consumers with existing containerized services will be able to switch to SONM without additional costs and keep using existing standards – the SONM platform will use the same technology stack as existing cloud computing providers.

Geo-targeted computing to meet compliance requirements

By mid-2018, 34 countries (and counting) had locked data behind their borders. In Russia, for instance, a new law requires that “the accumulation, storage, and processing of personal data of Russian citizens, must be held on data centers located in the territory of the Russian Federation.”

LinkedIn, for example, considered it economically unwise to comply with this law and was banned in Russia, losing a market of 140 million users.

140 countries already regulate data processing and storage with data privacy/protection laws

To help customers meet compliance requirements, SONM will introduce an option to choose the jurisdiction for data processing. The distributed nature of SONM infrastructure enables “building” a virtual data center at any point on the globe and performing computations in the specific country, region, city or even specific geographic location required by the customer.

Here are just a few examples to illustrate the point:

Rent computing resources in advance (the futures) around the stadium for live broadcast on match day.

Process and store data in a specific country to comply with local data protection laws.

Protect a service from DDoS attacks or a single point of failure (spread functions and content among thousands of nodes in various countries).

Hyperlocal computing for high-traffic tasks

For instance, if Europeans start watching YouTube videos directly from servers located in the USA, the amount of transferred traffic will exceed the bandwidth of all cables between Europe and America. In other words, just one website (YouTube) can overload all transatlantic networks.

This doesn’t happen thanks to CDNs – videos are cached on European YouTube servers, and the Germans watch them from servers in Berlin, the Dutch from servers in Amsterdam, etc.

Now imagine that home smart security cameras become popular and are installed in every European household. Cameras monitor the house all day, analyze the video, and automatically decide to call the police if a robbery is suspected, or firefighters if a fire is detected.

Such real-time video analysis can’t be cached on local servers, and if the video stream from every European home is sent to US data centers for processing, the networks will be completely overloaded.

We plan to introduce a hyperlocal computing feature that will make it possible to allocate high-traffic tasks to the closest nodes (in the same region, city or even in the same provider network).

SONM will regularly ping all devices connected to the platform and build a network graph that indicates the network bandwidth, the number of hops and the latency between all nodes.

Based on this data, the SONM orchestration tool will deploy computations to nodes matching the following parameters:

closest geographical location;

maximum network bandwidth between nodes;

minimum number of hops;

lowest ping.
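As a rough illustration, these four criteria could be combined into a single score used to rank candidate nodes. The weights and the node data below are purely hypothetical:

```python
def node_score(node):
    """Lower is better: penalize distance, hops and ping; reward bandwidth.
    The weights are arbitrary illustration values."""
    return (node["distance_km"] * 0.5
            + node["hops"] * 10
            + node["ping_ms"] * 2
            - node["bandwidth_mbps"] * 0.1)

# Candidate nodes as seen from a data source in Berlin (made-up data):
nodes = [
    {"name": "berlin-1",    "distance_km": 5,    "hops": 2,  "ping_ms": 3,  "bandwidth_mbps": 900},
    {"name": "amsterdam-2", "distance_km": 650,  "hops": 7,  "ping_ms": 18, "bandwidth_mbps": 400},
    {"name": "us-east-9",   "distance_km": 6400, "hops": 15, "ping_ms": 95, "bandwidth_mbps": 1000},
]

best = min(nodes, key=node_score)
print(best["name"])   # berlin-1: closest node in the same provider network
```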

It is expected that the demand for computing resources located closer to the data source will increase not only for launching services in low connectivity countries or rural areas but also for real-time processing of the flow of data generated by smart sensors, security cameras, and other connected devices.

2.2 Supplier side

Scaling the supplier side of the platform requires onboarding three groups: individuals renting out GPU rigs, mining farms, and traditional data centers contributing their idle capacity.

There is always some churn in supply as new suppliers come in and old ones leave, but good retention of existing suppliers leads to competitive prices and improved resource availability. With these opportunities at stake, we will focus our main efforts on motivating suppliers to connect their devices to the platform (or to buy new devices specifically to connect to SONM, as many did in the days of the mining boom), on retaining existing suppliers, and on growing their lifetime on the platform.

Subsequently, we plan to launch a reputation system and some related tools aimed at cutting down the percentage of failed computations by rewarding reliable suppliers and fining unreliable ones.

All-in-one tool to manage and monitor rigs remotely

We expect hundreds of thousands of devices to be connected to the platform over the next 3 years, and a significant part of these devices will be provided by professional miners and farms with a bunch of rigs.

Based on feedback from mining farms, we realized that one of the most important requirements for joining the SONM platform would be the implementation of a device management system.

This led us to the development of a web-based system connected to the SONM OS and capable of monitoring rigs, tracking online statuses, GPU errors, temperature, power consumption and so on from a single dashboard.

In essence, this will be a web interface for remote access to all your devices with the SONM OS installed, where you can boost, reboot, stop or troubleshoot GPUs, manage and configure each rig individually or even change the wallet in use.

Resilience system with elements of curation

Successful marketplaces need to meet a certain standard of service, but since anyone can join the SONM platform, we can’t control the reliability of each node. What we can do is to provide some curation in terms of minimum standards, quality checks before joining the platform and guidelines or non-negotiable rules.

At this point, we move on to the implementation of the resilience system where suppliers are constantly rated, checked and motivated to be reliable.



SLA tiering

Since SONM suppliers may vary not only by hardware and network characteristics, but also by reliability, availability, and responsiveness, we plan to oblige new suppliers to declare the Service Level Agreement (SLA) they are ready to maintain.

We will offer suppliers a choice of predefined SLA tiers. The first tier requires maintaining a high SLA (99.95% uptime, no more than 1 reboot per quarter, and so on) but brings a high reward to the supplier. Tiers 2 and 3 describe reliable hardware of various grades, while tiers 4 and 5 describe lower reliability, mainly suitable for grid computing, micro-services, batch processing, and so on. Accordingly, the supplier reward at the lower tiers will be significantly lower.



Deposits and penalties

That raises the next question in the journey: what will be the steps taken in case the supplier violates the SLA conditions?

The idea is that SLA tiers will contain not only conditions but also penalties for their violation, in the form of a price per unit:

Price per minute of downtime in excess of the specified conditions;

Price per fact of reboot/shutdown/state loss over tier conditions.

And so on.

Depending on the tier, suppliers will have to deposit a certain amount of time-locked SNM tokens. In case of violation of the SLA conditions, the supplier will be fined from this deposit and will also receive penalty points in the reputation system, which will reduce the number of tasks that the orchestrator allocates to this supplier.

Below are a few examples of possible tiers to illustrate the point:

Tier 1 – Cloud-grade reliability

Target tasks:

Computing tasks which must not be interrupted by their nature (e.g., an ML training cycle);

Software that is not “cloud native” (i.e., not adapted for single-instance failures);

OLTP databases;

Mission-critical software.

Normal operation:

99.95% uptime (21 minutes of downtime per month);

No more than 1 unwarned execution state loss (reboot, shutdown) per month;

No more than 1 warned execution state loss (reboot, shutdown) per month;

No data partition loss per quarter.

Penalties:

Uptime over 99% → 10% of monthly earnings; over 95% → 30% of monthly earnings; below 95% → 100% of monthly earnings;

Monthly unwarned execution state losses: 2 → 25% of monthly earnings; more than 2 → 100% of monthly earnings;

Monthly warned execution state losses: 2nd → 5% of monthly earnings; 3rd → 10% of monthly earnings; 4th and subsequent → 15% of monthly earnings;

Partition data loss → 100% of monthly earnings.

Tier 4 – Computing nodes that are available most of the time

Target tasks:

Orchestrated micro-services (single-instance restart/shutdown is acceptable);

Grid computing;

Video rendering.

Normal operation:

80% uptime (6 days of downtime per month);

A single online session is required to last at least 24 hours;

No execution state loss in the first 24 hours of the session;

No data partition loss per month.

Penalties:

Uptime over 80% → 1% of monthly earnings per 1% of downtime; over 40% → 2% of monthly earnings per 1% of downtime below 80%; below 40% → 100% of monthly earnings;

Execution state loss in the first 24 hours is counted and fined as downtime, and the session is effectively void;

Execution state loss after the first 24 hours is counted and fined as 12-hour downtime;

Partition data loss → 100% of monthly earnings.

Tier 5 – Seasonally available computing nodes (mining-style, with no obligations)

Target tasks:

Orchestrated micro-services (single-instance restart/shutdown is acceptable);

Grid computing;

Video rendering.

Normal operation:

No specific uptime requirements;

A single online session is required to last at least 8 hours;

No execution state loss in the first 8 hours of the session;

No data loss in the first 8 hours of the session;

No more than 2 execution state losses per 24 hours;

No data partition loss in the first 24 hours of the session.

Penalties:

Uptime: no penalties (pay-per-actual-work-time model, but only after the session time exceeds 8 hours);

Execution state loss in the first 8 hours is counted and fined as downtime, and the session is effectively void;

More than 2 execution state losses after the first 8 hours are counted and fined as 24-hour downtime;

Partition data loss → loss of payment for a 24-hour session.
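To make the Tier 1 uptime penalties concrete, here is a small sketch of how a fine might be computed from a month’s metrics. The function and the example earnings figure are illustrative; only the thresholds come from the example tier above:

```python
def tier1_uptime_penalty(uptime_pct: float) -> float:
    """Fraction of monthly earnings forfeited for missing the 99.95%
    uptime target, per the Tier 1 penalty schedule."""
    if uptime_pct >= 99.95:
        return 0.0   # SLA met, no penalty
    if uptime_pct > 99.0:
        return 0.10
    if uptime_pct > 95.0:
        return 0.30
    return 1.0

earnings = 500.0  # SNM earned this month (made-up figure)
fine = earnings * tier1_uptime_penalty(98.7)  # 98.7% uptime -> 30% fine
print(round(fine, 2))   # 150.0
```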

Since the connection speed, electricity supply interruptions and some other factors may vary depending on the country or even region, we plan to introduce region-dependent SLA tiers.

Tiers help marketplaces classify suppliers, creating smaller crowds within the big crowd for better supplier-buyer matching, while enabling reliable suppliers to increase profits.

If rare failures are not critical for the task (Rendering, for example), our intelligent orchestration system will run computations on devices from lower tiers to save costs. Accordingly, if the customer requires zero downtime conditions (ML or general purpose containers, for example), the task will be allocated to suppliers from the first tier.



Reputation system

To build an algorithm that encourages good suppliers and automatically excludes bad ones from the platform, we will introduce a supplier reputation system, which in some ways will be similar to the Uber driver rating.

The platform will assign a reputation to each node. You can think of the reputation score as a rating of trust. The higher your score, the more tasks will be allocated to your device, and the higher the reward you’ll get. Once a reputation drops below the minimum level, your account will be deactivated if you fail to improve after multiple notifications. Suppliers will be able to reactivate their account after taking a “short educational exercise.”

The reputation score will be based on a combination of factors. Some factors will decay daily and reach zero in a month, so past records won’t keep you in the game – providing top-notch service every day will. It also means that any new node can quickly earn a high reputation.
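A minimal sketch of this decay idea, assuming linear decay to zero over 30 days (the actual decay curve is not specified in the text):

```python
def decayed_score(events, today):
    """events: list of (day_earned, points) pairs.
    Each factor's points decay linearly to zero over 30 days, so only
    recent behavior contributes to the current reputation."""
    total = 0.0
    for day, points in events:
        age = today - day
        if 0 <= age < 30:
            total += points * (1 - age / 30)
    return total

# Two identical good-behavior events, one old and one recent:
old_only = decayed_score([(0, 30.0)], today=25)   # almost fully decayed
new_only = decayed_score([(20, 30.0)], today=25)  # still mostly intact
print(old_only < new_only)   # True: recent behavior dominates
```

Under this model a node that stops performing well simply watches its score drain to zero within a month, matching the “past records won’t keep you in the game” property.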

Some examples of potential factors that can increase or decrease the supplier reputation:

SLA compliance – every SLA violation lowers the supplier’s reputation while working without failures gives extra points.

Penalty deposit size – the more SNM frozen in the supplier’s “penalty deposit”, the higher the reputation score. You can think of the “penalty deposit” as reserve capital that guarantees a refund to customers if something goes wrong – the higher your reserves, the more the platform trusts you. At least 10,000 SNM will need to be frozen to get additional reputation points for this factor. It will also act as a protection mechanism: in the case of a violation or malicious behavior, the node loses the frozen tokens.

Identification – the better the KYC tier you reach, the higher your reputation.

Data protection – partner nodes, as well as nodes that have installed SONM OS, will receive additional reputation points.

And so on.

As the supply expands, the platform will add functionality that rewards suppliers who maintain a high reputation with better earnings.

Ultimately, we see this as a broader effort to motivate suppliers to keep their devices online 24/7, never shut down or reboot a device while a task is running, upgrade outdated hardware, improve connection speed and stability, regularly update their SONM OS version, and perform other useful actions.



Resource utilization system

The idea is to provide miners with an algorithm that automatically switches a supplier’s device connected to the SONM platform between performing useful computations and mining cryptocurrencies, depending on the profitability of each activity at the moment.

This algorithm will help users manage resources to maximize profitability by dynamically switching between tasks so that mining will be performed only when higher paying tasks are unavailable.

Miners will not have to worry about the idle time of their devices connected to the SONM platform, as it will be utilized for mining, and the owner of the device will in any case get a reward comparable to, or higher than, the income from mining popular cryptocurrencies.

Additionally, we will take into account all the supplier’s costs and will not start mining or allocate any useful computations to a device if the estimated profit is lower than the cost of electricity (set by the supplier according to local prices).
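The switching rule could look roughly like this; all rewards and prices are made-up hourly figures:

```python
def choose_activity(task_reward, mining_reward, electricity_cost):
    """Pick the most profitable activity for the coming hour.
    Returns 'task', 'mine', or 'idle' (when neither activity covers
    the supplier's electricity cost)."""
    best_name, best_profit = "idle", 0.0
    for name, reward in (("task", task_reward), ("mine", mining_reward)):
        profit = reward - electricity_cost
        if profit > best_profit:
            best_name, best_profit = name, profit
    return best_name

# Platform task pays best -> perform useful computations:
print(choose_activity(task_reward=0.30, mining_reward=0.12, electricity_cost=0.05))  # task
# No platform demand -> fall back to mining:
print(choose_activity(task_reward=0.0,  mining_reward=0.12, electricity_cost=0.05))  # mine
# Nothing covers electricity -> stay idle:
print(choose_activity(task_reward=0.02, mining_reward=0.03, electricity_cost=0.05))  # idle
```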

3. Monetization model

Every spot market has to strike a balance between suppliers and customers. On the one hand, suppliers should be satisfied with their earnings and not disconnect their devices from the platform. On the other hand, customers should get the lowest possible prices so that they choose our platform.

This means an important part of our three-year platform development strategy will be a pricing system that keeps everyone happy. Moreover, we are going to make performing useful computations on the SONM platform a more profitable use of GPU rigs than mining popular cryptocurrencies.

Fares and dynamic pricing

For each SLA tier, the platform automatically sets the minimum price per computing hour (base fare). This fare will dynamically change to keep computations on the SONM platform more profitable for suppliers than cryptocurrencies mining.

The base fare can be increased with an individual multiplier. This multiplier will be based on the current reputation of the supplier and can significantly increase their earnings. At the same time, the price for the customer will not change, since the additional supplier reward will be paid from the platform fees.

Additionally, when demand for resources outstrips the supply of devices, our dynamic pricing kicks in, increasing the price of a computing hour; hence increasing the earnings of all suppliers.

Dynamic pricing can be activated in certain countries, regions or specific locations where the platform is running out of computing resources. Temporary dynamic pricing can also be activated globally for certain types of devices that are in high demand (for example, the Nvidia 1080 Ti).

Dynamic pricing has two effects: customers who can wait for computations often decide to wait until the price falls; and suppliers who currently mine some cryptocurrency, for example, switch their devices to the SONM platform to get a higher reward.

As a result, the number of customers requesting computations and the number of available devices come closer together, bringing wait times and prices back down.
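A toy model of this surge mechanic; the demand/supply-ratio multiplier and the 3x cap are assumptions made purely for illustration:

```python
def surge_price(base_fare, demand, supply):
    """Scale the hourly price by the demand/supply ratio, never below
    the base fare and capped at 3x (both limits are invented here)."""
    if supply == 0:
        return base_fare * 3.0
    multiplier = min(max(demand / supply, 1.0), 3.0)
    return base_fare * multiplier

print(surge_price(0.10, demand=80, supply=100))    # 0.1  (no surge)
print(surge_price(0.10, demand=150, supply=100))   # ~0.15 (1.5x surge)
```

As demand and supply converge again, the multiplier falls back toward 1.0 and the price returns to the base fare, mirroring the feedback loop described above.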

Rental types: auction, on-demand, pay-as-you-use

We are going to introduce a system of auctions that will complement our existing on-demand renting model. To run the computations, the customer will need to identify the task objective and place a bid (i.e., specify how much they are willing to pay per computing hour).

The more customers compete for computing resources in a specific location or for a specific hardware type, the more expensive it will become. So the person willing to pay the most will get the best resources.

What factors determine the winner of a SONM auction?

1. Your Bid

Just like any other auction, the more you are willing to pay for the computations, the more likely your task will get the best resources.



2. Requirements

The number of additional requirements will directly affect the cost of your computations.



How is the bid calculated?

The task objective will determine which optimization and bidding options you’ll have. If your goal is to perform computations as soon as possible, the SONM algorithm will offer you a higher bid, and the best instances will immediately start performing your task. If your goal is to find the cheapest resources, the proposed bid may be low, but the computations will take much longer to complete.



How do requirements affect cost?

A traditional auction works on a “highest bid wins” basis, but SONM will also take into account data privacy requirements, compliance settings, geographic remoteness of relevant nodes, and many other factors. The more requirements you specify, the harder it is for the Resource Allocation System to find devices for running your task; hence, the cost of your computations will be higher.
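One way to picture this is an “effective bid” that shrinks with each extra requirement, since every requirement narrows the pool of eligible devices. The 5% discount per requirement is invented purely to illustrate the trade-off:

```python
def effective_bid(bid_per_hour, num_requirements):
    """Hypothetical ranking value: each extra requirement (geo, SLA,
    privacy, ...) discounts the bid's strength by 5%."""
    return bid_per_hour * (0.95 ** num_requirements)

bids = [
    ("simple-task", 0.20, 1),   # modest bid, almost no requirements
    ("strict-task", 0.22, 6),   # higher bid, but many requirements
]

winner = max(bids, key=lambda b: effective_bid(b[1], b[2]))
print(winner[0])   # simple-task: fewer constraints beat a slightly higher bid
```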

Rental Types

There are two ways in which you can bid/rent computing resources from SONM:

“Auction” – normally used by customers with variable workloads that need automatic scaling, or high-traffic tasks like IoT, AR, VR, AI and so on;

“On-demand” – used more for renting bare devices for a certain period.

There is also a third option, available only to enterprises willing to create a private cloud without huge capital expenses.

What’s the difference?

1. On-demand

On-demand instances have a fixed price set by the supplier and run for as long as you want. This means you will run computations with a predictable price and a controlled number of instances. You can choose the renting period and pay on an hourly or monthly basis.

2. Auction

With an auction, you specify the desired price per computing hour and then let our systems automatically find optimal devices close to the specified price (if possible), run computations and optimize/scale the number of instances, if necessary.

The auction takes into account: bid, required SLA, data privacy, computation security, geo-targeting, the need for traffic optimization (i.e., allocation of computations to the nearest nodes), estimated time to complete the task, and so on.

What are the advantages of an auction system?

In this way, customers adjust the limits for dynamic resource scaling – the cost of rented instances will always tend toward the placed bid. By changing the bid, customers can balance the task execution time against its cost.

Imagine a customer places a task to train a deep learning model:

Our Resource Allocation System can rent tens of enterprise-grade instances with a first-tier SLA from suppliers with the highest reputation, so the computations finish quickly, but at a prohibitive cost.

In the opposite case, the system can rent several cheap devices, but the computations will take a very long time to complete.

The key idea is to provide computing resources for everyone. We want even the task with the lowest bid to receive a small number of low-tier instances and run its computations, rather than wait for available resources to appear.
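One way such a fairness rule could look (a toy sketch under assumed bids and capacity, not SONM's real scheduler): every task first receives one guaranteed instance, and the remaining capacity is split roughly in proportion to the bids.

```python
def allocate(bids, free_instances):
    """Toy allocator. `bids` maps task id -> bid per computing hour.
    Every task is guaranteed one instance (assumes free_instances >= len(bids));
    the remaining capacity is divided roughly in proportion to the bids."""
    alloc = {task: 1 for task in bids}            # the guaranteed minimum
    remaining = free_instances - len(bids)
    total_bid = sum(bids.values())
    for task, bid in bids.items():
        alloc[task] += int(remaining * bid / total_bid)  # higher bid -> bigger share
    return alloc

# Even the lowest bidder ("c") gets an instance instead of waiting in a queue.
alloc = allocate({"a": 0.30, "b": 0.10, "c": 0.05}, free_instances=10)
```

High bidders still get most of the capacity, but no task is starved — which is the behavior the paragraph above describes.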

Pay-as-you-use model

Additionally, customers can use our “pay as you use” model, in which they don’t buy capacity in advance but are charged based on actual consumption. This billing approach became possible with the introduction of containers on the SONM platform, which adds flexibility based on how large your load is at any given moment.

Each container can be scaled vertically on the fly in response to load changes. So you pay for actual consumption and don’t need to make complex reconfigurations to keep up with project growth.
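A minimal sketch of how such metered billing could be computed (the unit prices and metering samples are illustrative assumptions, not real SONM rates): each metering interval is charged only for the resources the container actually held during it.

```python
PRICE_PER_CORE_HOUR = 0.02   # assumed unit price, not a real SONM rate
PRICE_PER_GB_HOUR = 0.005    # assumed unit price, not a real SONM rate

def metered_cost(samples):
    """samples: (hours, cpu_cores, ram_gb) tuples, one per metering interval.
    The bill covers only what the container held in each interval."""
    return sum(
        hours * (cores * PRICE_PER_CORE_HOUR + ram_gb * PRICE_PER_GB_HOUR)
        for hours, cores, ram_gb in samples
    )

# Load spiked mid-day: the container scaled from 2 to 8 cores for 3 hours,
# then scaled back down -- no manual reconfiguration, no pre-bought capacity.
cost = metered_cost([(10, 2, 4), (3, 8, 16), (11, 2, 4)])
```

Compare this with on-demand billing, where the customer would pay for the peak configuration for the whole rental period.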

Bottom line

To summarize everything above: SONM developed the initial version of the platform and tested it on the first customers able to pay with SNM tokens.

Our further development strategy is aimed at the following key stages:

Stage 1

The first version of the true GPU Cloud with an effortless infrastructure for Machine Learning and Data Science. Customers will be able to pay with fiat money and get a simple interface for renting computing resources, with easy-to-use preconfigured containers for ML purposes.

Stage 2

The next version of the GPU Cloud will make computing on consumer devices more secure, accelerating adoption of the SONM platform by small and medium-sized organizations with ML, Big Data, or video/image rendering tasks.

Stage 3

Next, we plan to systematically release the features described above, so that the GPU Cloud meets the needs of customers from multiple industries:

Intelligent agriculture – automatic plant-growth analysis through field image processing requires computational resources closer to the data source, to reduce the cost of transmitting data from rural areas to a remote data center.

Intelligent manufacturing – the manufacturing industry is expected to increase demand for hyperlocal computing, used for on-the-fly decisions by sensors and industrial robots.

Industrial 3D modeling – the automotive industry is expected to increase its use of 3D modeling technology for constructing vehicle parts and running CFD simulations.

Connected devices – SONM servers can offload computing tasks from smart devices by processing on the fly, caching/storing information, and acting as a private cloud that can be accessed remotely.

And so on.

Stage 4

One of our last steps is to launch an enterprise solution designed for large entities that can’t adopt our platform for security or technical reasons. We will offer these customers two options:

Rent a cluster on the SONM platform, with a guarantee that these nodes will be isolated on a separate subnet and that other customers’ tasks will never be allocated to them (i.e., a distributed version of a Virtual Private Cloud);

Buy a license and deploy a full Private Cloud on the company’s internal resources (office computers, own servers, etc.). This option is aimed at companies that would like to run a Private Cloud without huge capital and operational expenses.

Soon we plan to publish a similar document with detailed information about our future business and monetization model, pricing system, and go-to-market strategy, including an action plan for attracting both suppliers and targeted customers.

Stay tuned!

Q&A

Does this mean that the Masternodes won’t be released?

Let’s clarify an issue that concerns many of our community members. Part of our team is now working hard on the Masternodes.

Unfortunately, the Masternodes will not be released in Q1; most likely this will happen in the first half of 2019. We understand that this is a critical feature for token holders, so we are not going to stop working on its development.

However, like any startup, our resources are limited, and part of the team is currently working on other projects, described above. For this reason, our progress in developing Masternodes is a bit slower than originally planned, and the release has been moved to a later date.

We hope you understand the importance of the other features that compete with the Masternodes for resources, and that you will be patient while waiting for this release.

What will happen to the token?

Imagine that after 3 years, millions of suppliers have joined the SONM platform. They are distributed around the world, and some of them are individuals, while others are entities. Any traditional company, in this case, would face some problems:

The company would have to create a complex fiat payment system with complicated accounting;

It would still be impossible to send money to some countries, and/or the transfer fees would be extremely high;

If a supplier did not receive a payment, the platform would need to spend time and resources finding out where the transaction was lost.

With an Ethereum sidechain that performs fast transactions with minimal fees, we can send supplier payouts in the form of SNM tokens to any country in just 15 seconds.

The transaction cost is close to zero, and suppliers can easily exchange earned SNM for fiat money on any of the popular exchanges.

This also leads us to the next point: the buyer pays for the deal with fiat money, we exchange the fiat for SNM tokens, and we send them to suppliers. So the more customers and deals SONM has, the higher the demand for the token and its trading volume.
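A back-of-the-envelope sketch of that flow (the fee and price figures here are purely hypothetical):

```python
def payout_in_snm(fiat_amount, snm_price_fiat, platform_fee=0.05):
    """The buyer pays `fiat_amount`; the platform keeps `platform_fee`
    (a hypothetical 5% here) and exchanges the rest into SNM for the supplier."""
    net = fiat_amount * (1 - platform_fee)
    return net / snm_price_fiat

# A $100 deal at an assumed SNM price of $0.02 pays the supplier 4,750 SNM.
tokens = payout_in_snm(100.0, snm_price_fiat=0.02)
```

Every deal settled this way requires buying SNM on the market, which is the mechanism linking deal volume to token demand.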

Does this document replace the previous roadmap?

Everything described above is not a fantasy, a roadmap or even a whitepaper. We see SONM as a traditional startup that should generate revenue and try to reach breakeven. So, starting from this point, the features we work on will be dictated only by the market, metrics and customer needs.

Any startup in a fundamentally new market is faced with new data, which sometimes changes the development plans. Just one of the examples: it’s impossible to predict all the customer requirements and needs until they try the first version of the product and give feedback.

Thus, it’s normal that, after feedback from the first customers, the planned features are re-prioritized and the team focuses on the more important ones. As a good team, we will do this again and again in the future, as needed. We all need to be flexible so as not to miss important market trends.

We believe you don’t want us to implement old plans when new ones offer more benefit. Even if someone wanted that, it would be unwise of the management: a roadmap is not a monolith, and it must constantly be improved and updated.

If we talk about the previous roadmap, many of its features have already been released. Others, such as Masternodes, Rating & SLA, Kubernetes, and Trusted Platform, remain on the development priority list, while the rest have been removed.

To keep you informed of the development plans, we will publish quarterly updates, similar to this one.