Software Requirements:

Q: Will the software provided by the DBC team be Linux only, or will there be support for Windows? If Linux, will the distro be Ubuntu 16.04?

A: If your machine will act as a mining machine (a server), we only support Ubuntu Linux 16.04. If your machine is a client, we also support Windows and Mac.

Q: If Linux, are test-net compute node operators required to install any specific libraries and hardware drivers or will this be shipped as a pre-packaged solution for quick deployment?

A: DeepBrain Chain will deliver all of the related programs, including libraries and scripts. You don’t need to install additional libraries; simply follow the step-by-step instructions in the installation guide and user manual.

Q: If there is no Windows support, are node operators who face challenges with specific hardware drivers able to fully allocate the necessary resources (CPU, RAM, storage) to a virtual image? I’ve had personal success with this model in the past and have found it to be more flexible, with fewer compatibility issues than some specific Linux distros.

A: Yes, we will also deploy different container images for different AI training environments, which will be more flexible.

Hardware Efficiency Settings:

Q: Will the AI Training be memory intensive? This question was asked, as the user would prefer to underclock RAM to maximize efficiency, produce less heat, and consume less electricity, as well as ensure less stress on hardware.

A: Yes, AI training has high GPU, disk, and memory requirements; please confirm whether you can meet them. If you don’t have a large amount of memory but do have more than 4 GB, you could become one of our Storage Nodes. A neural network’s memory usage depends on its architecture: some, such as many RNNs, are chatty and tend to move data off the GPU, and most energy is expended when computation leaves the GPU. That is why total system memory should always be more than twice the aggregate GPU memory in the system to maintain performance. Underclocking RAM will not help training, and may hinder training performance.

RAM Requirements:

Q: We understand that the DBC AIMs leverage DDR4 RAM. Is DDR4 a minimum, or can DDR3 be used in our systems for test-net and/or main-net, if different? Is there a specific speed (MHz) that is advised?

A: We suggest that you use DDR4, because DDR3 will be too slow. DBC has a higher requirement for speed, network, and hardware configuration. DDR4 is our reference architecture.

Q: What is the RAM requirement for 2 GPU, 4 GPU, and 8 GPU compute nodes? Is it 2x the amount of GPU memory on the system?

A: This is a good question; you have hit the key point. Yes, with 2, 4, or 8 GPUs respectively, system memory should scale up accordingly. With larger memory, the GPUs can run at full speed. Although the listed minimum RAM requirement is 64 GB, we also welcome larger RAM configurations. Recommended DBC super node configurations are as follows:

8 GPU: 192 GB RAM

4 GPU: 128 GB RAM

System memory should always be more than twice the sum of all GPU memory in the node.
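As a small illustration of the sizing rule above (the function name and the per-card memory figures are this sketch’s own assumptions, not part of the DBC software), minimum system RAM can be estimated as twice the aggregate GPU memory:

```python
def min_system_ram_gb(gpu_count, gpu_mem_gb):
    """Rule of thumb from the answer above: system RAM should exceed
    twice the sum of all GPU memory in the node."""
    return 2 * gpu_count * gpu_mem_gb

# e.g. 8 GPUs with 11 GB each (a 1080 Ti-class card) -> at least 176 GB,
# consistent with the recommended 192 GB configuration above
print(min_system_ram_gb(8, 11))  # 176
print(min_system_ram_gb(4, 11))  # 88
```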

Storage Requirements:

Q: How is storage used during typical workloads? Is it sequential or random access load? Do we happen to know if it’s more read or write intensive?

A: For storage nodes, read/write access will be mostly sequential.

Follow-up question: This question was also intended for the AI compute nodes. In order to select a storage that doesn’t create performance bottlenecks, the type of IO pattern must be known. One possible answer would be that this will be investigated during the test period.

A: Sequential access also applies to AI compute nodes. This is mainly for uploading, downloading, and reading and writing documents.
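As a minimal sketch of the sequential access pattern described above (this is illustrative code, not DBC’s actual I/O layer; the chunk size is an arbitrary assumption), a data file can be streamed in fixed-size chunks:

```python
CHUNK_SIZE = 1 << 20  # 1 MiB per read; chosen arbitrarily for this sketch

def stream_file(path, chunk_size=CHUNK_SIZE):
    """Yield a file's contents sequentially in fixed-size chunks --
    the read pattern described for storage and compute nodes."""
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            yield chunk
```

Sequential streaming like this lets even spinning disks keep up, whereas random 4K access would make IOPS the bottleneck.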

CPU Scaling:

Q: It is normal for AI compute systems to have high bandwidth interconnects between GPUs. Unfortunately, this will generally not be possible for prosumer hardware. That said, will the workloads perform better with fast communication between GPUs? i.e. Will 2 GPUs in one computer perform better than 2 GPUs mounted in 2 separate computers?

A: Yes, the performance of 2 GPUs in one computer will be better than 2 GPUs in 2 separate computers.
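A back-of-the-envelope calculation suggests why (all numbers here are illustrative assumptions, not measured DBC figures): exchanging gradients for a 100M-parameter FP32 model moves roughly 0.4 GB per step, which is far slower over a 10 GbE link between machines than over PCIe 3.0 x16 inside one machine:

```python
# Assumed model: 100M parameters stored in FP32 (4 bytes each)
params = 100_000_000
bytes_per_param = 4
payload_gb = params * bytes_per_param / 1e9  # ~0.4 GB per gradient exchange

# Approximate usable bandwidths (rough public figures, not DBC specs)
pcie3_x16_gbs = 16.0   # GB/s inside one machine
ten_gbe_gbs = 1.25     # 10 Gb/s Ethernet ~= 1.25 GB/s between machines

print(payload_gb / pcie3_x16_gbs)  # transfer time within one machine
print(payload_gb / ten_gbe_gbs)    # transfer time between two machines
```

Under these assumptions the cross-machine exchange takes on the order of ten times longer, so co-locating GPUs avoids a communication bottleneck.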

Prioritized Mining Reward on Mainnet:

Q: Following the release of testnet software, is there a specific amount of time (days/hours) in which testnet compute node operators must be operational in order to participate in testnet and receive priority on the mainnet?

Q: If I join Skynet as a 2 GPU compute node on a qualified system (meeting technical specs) and am given priority for main net ‘mining’, can I upgrade this system to 8 GPU after testing is complete and receive the same mining priority on the upgraded system?

Q: Although not expected, will there be any type of compensation outside of mining priority on main net for qualified test net node operators? This could include anything from DBC points, DBC, electrical cost compensation, DBC bumper stickers :), etc?

General

Q: Will the mining reward of DIY compute node operators be based on the quantity of GPU’s running on a supported system or is it based on the efficiency of the system, using an algorithm or modeling monitor? (If based on an algorithm for system efficiency, we can expect that higher-end components may result in better ROI)

A: We will monitor the system efficiency and make assessments.

Q: To what servers or infrastructure will Skynet nodes communicate with for testnet purposes? Will this be a specific region, cloud platform, etc?

Q: Given next-gen hardware releases in the coming month(s), are we able to add a second, third or more compute nodes to Skynet before a specified timeline, if not the start date itself? Examples of important new hardware available soon but not now:

AMD socket TR4 Threadripper2 next gen CPUs and updated platform.

Release mid-August

NVIDIA GTX 1180 Next generation GPUs. Expected to have 12 GB memory, much higher bandwidth and tensor cores.

Release Q3

A: There will always be new hardware, and due to limited resources or timing (as in the case of the 1180) we may or may not be able to validate every new component against our net upon release. We will announce the supported components in our support list. The CPU will not factor heavily into the net, as the bulk of computation depends on the GPU, though the CPU is an important data pre-processing component.

For investors, you may want to consider ROI: while new NVIDIA GPUs promise better computing performance, the extra cost might not be fully compensated.

Q: Will Skynet participants be provided some type of standardized benchmark software and workloads that we’re able to use at will? Independent tuning will be necessary at some point. If provided at the early stages of testnet, compute node operators will have the ability to make configuration changes and/or upgrade hardware to better support the needs of Skynet. Otherwise, the necessary level of back-and-forth communication between the DBC team and Skynet participants will likely be very high.

A: You can refer to our DBC AIM configurations for reference. The only thing you need to ensure before joining our network is that Ubuntu 16.04 is ready; all the rest can be configured with our scripts. We will also release a user manual.

Q: Will there be instrumentation included in the deployed software that monitors various helpful statistics, such as throughput, timing, and data volumes? These statistics are critical for benchmarking various configurations, which can then be referenced prior to mainnet, providing more information to interested parties in the blockchain space and ensuring a more inclusive and competitive future for DBC in a shorter period of time.

A: We will provide an installation guide that explains how to monitor your machine’s status, including, amongst others: CPU, memory, GPU, and disk.

Q: What is going to be the most utilized floating-point precision for the DeepBrain Chain project? If FP16 or FP64 is going to be the norm, would it make more sense to build nodes with Titan Vs instead of a GeForce 1080 Ti-grade setup? I figured that with FP16 (half precision) providing such gains in memory savings, this may prove beneficial for this type of offering. With GTX 1080 Ti cards (GP102 chip), NVIDIA has limited FP16 throughput to 1/64 and FP64 throughput to 1/32 of the FP32 rate.

A: The GPU will support FP16 or FP32 (FP64 is not generally used for DL). Not all computation for DL training works effectively with FP16; in some portions of the computation, FP32 is mandatory to support the required level of accuracy. Thus, mixed precision is a powerful tool for training. It is supported on our platform and integrated with CUDA 8 or later. The 1080 Ti supports precisions from HP through DP.
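A small illustration of why pure FP16 can hurt accuracy, which is the motivation for mixed precision (keeping FP32 master weights so small gradient updates are not rounded away). This sketch uses Python’s standard `struct` half-precision codec to simulate FP16 storage; it is not DBC or CUDA code:

```python
import struct

def to_fp16(x):
    """Round-trip a Python float through IEEE 754 half precision
    (struct format 'e'), simulating storage in FP16."""
    return struct.unpack("e", struct.pack("e", x))[0]

# Half precision has ~3 decimal digits near 1.0, so a small
# gradient update vanishes entirely when the weight is kept in FP16.
w, g = 1.0, 1e-4
print(to_fp16(w + g))  # 1.0 -- the update is lost in FP16
print(w + g)           # 1.0001 -- preserved in FP32/FP64 arithmetic
```

This is why frameworks keep an FP32 copy of the weights and accumulate updates there, even when the forward/backward passes run in FP16.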

Q: Will floating point precision be dependent purely on the size or type of the data set? Or is this purely in the hands of the developers and the Docker templates being served/offered?

A: The floats are defined by the developers.

Q: Would a node with two to four Titan Vs be plausible? Since CUDA doesn’t support more than one Titan V in SLI or NVLink, there isn’t going to be a benefit to a single Docker instance running. As I understand it, the system would only run ONE container utilizing ONE Titan V at a time. Am I correct in this assessment?

A: Titan Vs will work, and our platform will have them work in aggregate. Using our libraries, the system will be topologically aware of the other Titan V/GPUs in the system and try to use them in conjunction with one another.

Q: Should I plan for a 40/100 Gbps environment now? I know the website stated 10 Gbps for equipment, but I could see a need for 40–100 Gbps edge equipment in the years to come.

A: Between nodes, we usually use 100 Gbps to ensure enough bandwidth, in line with our proposed 128-GPU environment. We will have efficient libraries that optimize them for productivity.

Q: What are the key requirements for the SSD SAN or storage miner?

A: Capacity and read/write IOPS.

Q: Will higher read/write IOPS and/or a RAID 10-type setup have more beneficial gains for the project’s overall success and mining output?

A: Read/write IOPS will be one factor.

Q: Is JBOD mostly used?

A: No. We don’t recommend JBOD.

Q: What filesystems do you prefer? I have always favored ZFS, but unsure for this offering specifically.

A: We recommend XFS, ext4, or ReiserFS.



Q&A On Token Unlock And AI Cloud Computing Package Campaign



What is AI Cloud Computing?



DBC’s cloud computing is a charge-per-use model that allows usable, convenient, and flexible network access according to individual needs: users can access an allocatable shared pool of computing resources (including network, servers, storage, applications, and services). DeepBrain Chain’s AI cloud computing is distributed and connected via blockchain, offering customers usable, flexible, and secure GPU computing resources. Customers can use this AI cloud computing power to conduct deep learning, machine learning, and other high-performance AI computing.

What is AI Training Net?



AI Training Net is a core function developed by the DeepBrain Chain R&D team: a platform that consolidates computing resources and matches computing power providers with computing power requesters. The demand side can find a suitable GPU power supplier on the AI Training Net according to their training needs, then pay the supplier DBC tokens to acquire usage rights. Computing power suppliers can download the installation package from our official website, deploy nodes on DeepBrain Chain’s AI Training Net, and share their idle GPU power with the entire network.

When Can I Start Using The Cloud Computing?



August 8th, 2018, when DeepBrain Chain’s AI Training Net opens to all enterprises and individuals in the AI industry, or anyone with training needs.

How Do I Buy A Cloud Computing Package?



At the moment, DeepBrain Chain’s AI Training Net only supports the AI Cloud Computing Package as the payment method. Users can use ETH to buy the package on the AI Cloud Computing Package purchase page on DeepBrain Chain’s official website. In order for AI enterprises to experience our AI Training Net, the Foundation has decided to release 150 million DBC tokens’ worth of cloud computing power to enterprises and individuals affiliated with the AI industry. The DBC bought in this round are to be used to rent GPU computing power; unused DBC can be released into personal wallets designated by the applicant after the chosen lock-up period has ended, and can then be traded into fiat on exchanges.

Who Can Buy A Cloud Computing Package?



AI enterprises, individuals working in the AI field, scientific researchers, and other parties that require computing power.

What Are The Prices Of DeepBrain Chain’s Cloud Computing?



On the DeepBrain Chain AI Training Net there are three types of configuration: 2 GPU, 4 GPU, and 8 GPU. The price for using these GPUs will be set by the computing power provider, within a reasonable range advised by the DeepBrain Chain Foundation based on the configuration. The Foundation will rank computing power providers according to how much they have been used on the AI Training Net; the ranking will decide in what order and when they can join DBC mining.

— — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — — —

1. Low Cost: DeepBrain Chain solves the core issue of AI enterprises having to invest heavily in hardware. With DeepBrain Chain’s unique model, each mining node receives 70% of its income from the DBC rewarded by the system and 30% from what the computing power requester pays. AI enterprises only have to cover the 30% fee paid for the training conducted.

2. Optimization Of Neural Network Computing Performance: DeepBrain Chain focuses on serving the AI industry. Currently, most AI products are developed with a deep neural network as the core algorithm, so DeepBrain Chain has optimized operation on top of CUDA GPUs, supporting mainstream deep learning frameworks such as TensorFlow, Caffe, CNTK, and so on.

3. High Concurrency: AI enterprises serve massive numbers of users, so DeepBrain Chain must achieve high-performance computing to support them. Through our unique load-balancing technology, all node containers cooperate to share the concurrency pressure.

4. Low Latency: Apart from neural network training, which takes a long time, all other user requests will be responded to within seconds online. This requires each DeepBrain Chain module to respond quickly and occupy as few resources as possible.

5. Privacy Protection: Protecting the privacy of all participants in the ecosystem is a must; participants can freely decide how much of their information is made public. We will use encryption algorithms and separation mechanisms to achieve this.



6. Flexible Supply: The demand of AI enterprises is not distributed evenly over time; at peak times, demand may be ten times higher than usual. This requires an efficient response to sudden flux through flexible scaling technology that automates container deployment, so that the contents of a container can be quickly copied and deployed onto other idle containers during peak times.



7. Automatic Operation: When a node container malfunctions, a notification will be issued, the malfunctioning node will be removed automatically, and a new node will be added to the network.