You can’t swing a dead cat six inches these days without hitting a new Kickstarter project. When it works, Kickstarter is a great way to tie niche ideas or new concepts directly to people interested in funding the exploration of those concepts. Not every type of product works with crowdfunding, however, and some of the ideas being marketed through the site attempt to hide underlying flaws that make the venture a dubious proposition.

Case in point: Adapteva. The fledgling company has designed a many-core floating point processor with extremely low power consumption. Current designs can scale up to 4096 cores and are built on a mainstream 65nm process to save money. The company’s website and introduction contain a great deal of information on topics we’ve also covered. It identifies the scaling issues facing multi-core/many-core devices and the challenges associated with scaling current architectures. So far, so good.

What Adapteva is campaigning for on Kickstarter is sufficient funding to build a completely open platform that provides both programming environments and commodity hardware for parallel programming. This platform, dubbed Parallela, eschews NDAs and licensing and is targeting a $100 price point. Hardware specs are as follows:

Dual-core ARM A9 CPU

Epiphany Multicore Accelerator (16 or 64 cores)

1GB RAM

MicroSD Card

USB 2.0 (two)

Two general purpose expansion connectors

Ethernet 10/100/1000

HDMI connection

Ships with Ubuntu OS

Ships with free open-source Epiphany development tools that include a C compiler, multicore debugger, Eclipse IDE, OpenCL SDK/compiler, and runtime libraries.

Dimensions are 3.4 x 2.1 inches

The dual A9 cores are necessary because Adapteva’s Epiphany IV processor is an FPU co-processor, not a full chip in and of itself. The company claims that “the Parallella computer should deliver up to 45GHz of equivalent CPU performance on a board the size of a credit card while consuming only 5 Watts under typical work loads. Counting GHz, this is more horsepower than a high end server costing thousands of dollars and consuming 400W.”
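That “45GHz of equivalent CPU performance” appears to be nothing more than core count multiplied by clock speed. A back-of-the-envelope sketch makes the point (the ~700MHz per-core clock is our assumption for illustration, not a figure from the campaign page):

```python
# "Equivalent GHz" as marketed: cores multiplied by per-core clock.
# This says nothing about how much of that aggregate a real workload
# can actually use -- it only materializes if the work splits into
# 64 fully independent streams.
CORES = 64        # top-end Epiphany configuration
CLOCK_GHZ = 0.7   # assumed per-core clock (~700MHz); illustrative only

aggregate_ghz = CORES * CLOCK_GHZ
print(f"Aggregate clock: {aggregate_ghz:.1f} GHz")  # ~44.8 GHz, i.e. the quoted "45GHz"
```

By the same yardstick, a quad-core 3.5GHz desktop chip offers “14GHz” — a number no one would quote seriously, which is exactly the problem with the comparison to a 400W server.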

Reality would like a word with you

The problem with what Adapteva is claiming is neatly summarized by a blog post on the company’s own website. On September 7, Andreas Olofsson published a list of parallel processing efforts by different companies. According to him, “There have been some bright spots for application specific parallel processors with limited programmability, but the success rate of general purpose parallel programmable processors is an approximate 0%. I compiled the following list to stay sober regarding our own chances to succeed as a parallel processor company.”

There are 84 separate initiatives listed.

The reason for this is pretty simple. The Epiphany IV architecture, like a number of many-core architectures, dumps most of the features that CPUs (both RISC and CISC) have relied on to boost performance over the past thirty years. There are no caches; each core is assigned its own slice of RAM. Cores can access data held by other cores, but the latency impact will inevitably be considerable. The goal is to create parallel workloads that can be split into independent chunks, each fitting in a core’s local memory. The specs above imply that each Parallela platform will have between 16MB and 64MB of RAM, depending on the final number of processors.
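To see why this constrains the programming model, consider a minimal partitioning sketch. The core count and per-core memory budget below are illustrative assumptions (1MB per core, consistent with the totals implied above); the key constraint is that cross-core accesses are slow, so each core must work almost entirely out of its own slice:

```python
# Sketch: carving a flat dataset into per-core slices that each fit in
# one core's private local memory. If the working set per core exceeds
# the local budget, the model breaks down -- there is no cache hierarchy
# to fall back on, only slow remote accesses.
CORES = 16
LOCAL_MEM_BYTES = 1 << 20   # hypothetical 1MB per core (illustrative)
ITEM_BYTES = 8              # e.g. one double-precision float per item

def partition(n_items, cores=CORES, budget=LOCAL_MEM_BYTES, item=ITEM_BYTES):
    """Split n_items into per-core (start, end) slices, checking fit."""
    per_core = (n_items + cores - 1) // cores   # ceiling division
    if per_core * item > budget:
        raise ValueError("working set does not fit in per-core local memory")
    return [(c * per_core, min((c + 1) * per_core, n_items))
            for c in range(cores)]

slices = partition(1_000_000)
print(len(slices), slices[0])   # 16 slices; core 0 handles items [0, 62500)
```

Workloads that decompose this cleanly — dense signal processing, for instance — map well. Anything with irregular, shared, or cache-friendly access patterns does not, which is precisely why this design is so specialized.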

The reason Intel, AMD, and Nvidia haven’t gone down this road is that the final product is extremely specialized. Adapteva has created an FPU co-processor that’s extremely good at a very narrow set of tasks. Unfortunately, this is completely out of step with the general goals of the computing industry. Over the past thirty years, controllers and co-processors that once required their own expansion cards or motherboard sockets have steadily moved away from separate hardware implementations and towards integration — first on the motherboard, and now on the processor.

The problem with using Kickstarter to fund a venture like this is that Adapteva is drastically overselling what the Epiphany IV can actually deliver. 16-64 tiny cores with small amounts of memory, no local caches, and a relatively low clock speed can still be useful in certain workloads, but contributors aren’t buying a supercomputer — they’re buying the real-world equivalent of a self-sealing stem bolt.

Now, if you happen to need a self-sealing stem bolt, that’s a fine thing. For the 99.9% of tasks that aren’t suited to the particular features of a stem bolt, it’s not very useful. Seeding a few thousand development kits to contributors is an interesting way to drive grassroots adoption, but there’s no reason to think that the real future of many-core computing will rely on thousands of slow, tiny cores with minimal features. If anything, current trends point in the opposite direction, and the company’s grandiose promises of supercomputing leave us even more suspicious.