Where is the Endor blockchain currently hosted?

ENDOR: Endor is a hybrid of on-chain and off-chain architecture. Computation and storage happen off-chain on AWS; payments go through an ERC20 Ethereum-based on-chain implementation.

Where is client data stored?

ENDOR: Client data is stored in AWS S3 buckets protected by multiple security measures. We are ISO 27001 certified and therefore follow very strict data policies.

Can a client determine the AWS region if they face regulatory requirements to keep data in a specific country?

ENDOR: The clients can decide which region should be used.

As per the last “tokenomics” report, tokens received by Endor will be locked for one year. What percentage of tokens is received by Endor when a client pays for services, and what percentage goes to the data providers?

ENDOR: 10% will be converted to single-use tokens for academia.

45% for data providers.

45% for Artificial Intelligence + AWS.
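
The split above is simple arithmetic; the following is a minimal sketch of it. The function name and structure are invented for illustration; only the 10/45/45 percentages come from the answer.

```python
# Hypothetical sketch of the payment split described above.
# Only the percentages (10/45/45) are from the interview; the
# function name and bucket labels are assumptions.

def split_payment(edr_amount: float) -> dict:
    """Split a client payment into the three buckets Endor describes."""
    return {
        "academia_single_use": edr_amount * 0.10,  # converted to single-use tokens
        "data_providers": edr_amount * 0.45,
        "ai_plus_aws": edr_amount * 0.45,
    }

shares = split_payment(1000.0)
print(shares)
```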

How much of that is Endor’s and will be locked up?

ENDOR: Currently the 45% for data providers and 45% for AI + AWS go to Endor. Therefore, this portion will be locked up for 1 year.

And it’ll remain in this split for the foreseeable future?

ENDOR: Yes.

Is there any kind of Endor wallet built into the platform or do clients have to retain custody of their EDR?

ENDOR: Right now, EDR is supported by any ERC20 wallet. In the future, we plan to integrate with specific wallets to make payments at the application layer easier while a prediction is running.

Is the S3 bucket owned by Endor or the client — does the client just give a location and provide the right permissions? What is the largest data set that has been processed in this way so far?

ENDOR: At this stage, the customer can provide their AWS S3 bucket with the relevant credentials and Endor will take it from there. The current maximum amount of data processed this way stands at 1–5TB per customer; normally we need less data than that to generate valuable insights.
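
From the customer's side, "provide your S3 bucket with the relevant credentials" could take the form of a cross-account bucket policy granting read access, as in the sketch below. The account ID, role name, and bucket name are placeholders, not Endor's real identifiers, and the exact mechanism Endor uses is not specified in the interview.

```python
import json

# Hypothetical example of a cross-account S3 bucket policy a customer
# might attach to grant a processing vendor read-only access.
# BUCKET and ENDOR_PRINCIPAL are placeholders, not real identifiers.

BUCKET = "customer-data-bucket"
ENDOR_PRINCIPAL = "arn:aws:iam::111122223333:role/EndorIngest"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowEndorReadOnly",
            "Effect": "Allow",
            "Principal": {"AWS": ENDOR_PRINCIPAL},
            # ListBucket applies to the bucket ARN, GetObject to the objects.
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
        }
    ],
}

print(json.dumps(policy, indent=2))
```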

Does the performance profile/cost change for encrypted data?

ENDOR: No, the cost is the same. We strongly suggest encrypting the data. If the data is not encrypted, Endor’s pre-processing phase encrypts it before it is processed; this is possible because we base our signals on correlations, not on semantics. This way, we keep your data safe at all times, even if you decide to release it as is, without encrypting it.
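
One way to hide semantics while preserving correlations, in principle, is deterministic pseudonymization: equal plaintext values map to equal opaque tokens, so co-occurrence statistics survive while the meaning is gone. The sketch below uses a keyed HMAC for this; it is purely illustrative, since Endor's actual pre-processing scheme is not described in the interview.

```python
import hmac
import hashlib

# Illustrative only: deterministic pseudonymization with a keyed HMAC.
# Equal inputs always produce equal tokens, so correlation structure
# over categorical values is preserved, but semantics are hidden.
# The key and function name are invented; this is not Endor's scheme.

SECRET_KEY = b"per-customer-secret"  # placeholder key

def pseudonymize(value: str) -> str:
    """Deterministically map a value to an opaque 16-hex-char token."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

row = {"city": "Paris", "product": "A13"}
encoded = {field: pseudonymize(v) for field, v in row.items()}
print(encoded)
```

Because the mapping is deterministic per key, statistics such as "how often do these two tokens co-occur" can still be computed on the encoded data.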

Got it. So no matter what — the data is encrypted. Is that where Enigma comes in or not? Are you scaling the EC2 platforms on demand?

ENDOR: Enigma (ENG) only comes in later, i.e. at the stage where we are already processing customers’ encrypted data. Enigma’s capabilities will be used to keep predictions extra safe (“double” safe) for private customers such as banks, ensuring that encrypted data does not become accessible through the processing servers while predictions are generated.

Endor and Enigma will work together in the near future once Enigma is ready to run heavy processing algorithms on large amounts of data.

Endor plans to connect to the data marketplaces of Enigma (and other potential partners such as Ocean Protocol) in order to generate even richer predictions for data-heavier customers.

Does it only work on structured data?

ENDOR: Correct. It works on structured data, whether highly or poorly structured; the onboarding phase structures and cleanses the data.

Are there plans to integrate with data sources other than S3?

ENDOR: Sure, we are planning to connect to a variety of data platforms.

Is the data onboarding process automatic or is it more like a professional services SOW that you go through with each new client?

ENDOR: Data onboarding is an automatic process, currently monitored by Endor and planned to become self-service in the future. (This is part of our roadmap — see www.endor.com/roadmap)

How does a client determine how much EDR they need to purchase?

ENDOR: The price is determined by the data and the prediction types clients plan to run. The prediction cost has several components; each is priced according to the amount of data to be processed, the cost of the data being used, and so on.

There are additional aspects that are important when it comes to calculating the prediction cost. At this stage, we are working with our Beta partners to reach the optimal prediction cost evaluation, making it clear for the end-user.

An example of an aspect which can strongly affect cost is the complexity of the prediction generation due to data condensation.
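
The cost factors mentioned above can be illustrated with a toy model. Every name and weight below is invented for the example; the interview only says that cost depends on the amount of data processed, the cost of the data used, and the complexity of prediction generation (e.g. data condensation).

```python
# Toy cost model for illustration only. The factor names, the rate, and
# the multiplicative structure are assumptions; the interview names only
# the inputs (data volume, data cost, prediction complexity).

def prediction_cost_edr(data_gb: float,
                        data_license_cost: float,
                        complexity: float,
                        rate_per_gb: float = 0.5) -> float:
    """Hypothetical cost in EDR: processing plus data cost, scaled by complexity."""
    processing = data_gb * rate_per_gb
    return (processing + data_license_cost) * complexity

# 200 GB at 0.5 EDR/GB = 100, plus 50 for the data, times 1.5 complexity:
print(prediction_cost_edr(data_gb=200, data_license_cost=50, complexity=1.5))  # → 225.0
```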