For decades, computer scientists have struggled to design an artificial intelligence sophisticated enough to pass for a living being. The fruits of that labor so far are snarky chatbots and systems that can crunch large amounts of data and spit out factoids. Google recently began working on a new method of replicating neural networks using 1,000 computers tied together. Now, one of the researchers who helped Google do it has laid out the framework for an even better brain model that costs a fraction as much. The key to true AI might be the GPU.

Traditional artificial intelligence computing has relied on bundling as many processors together as possible. With increased throughput, researchers believed, the problem of machine thinking could eventually be brute-forced. It now seems that approach will only go so far, hence Google's so-called Deep Learning project. The modest goal was a system that learned what a cat looks like and could spot cats in YouTube videos. On this count, Google succeeded.

A Stanford researcher by the name of Andrew Ng worked with Google on the cat project but was dismayed at the cost of the system. Ng believed that if AI was to take off, it needed to come down in price. He recently published a paper laying out his vision for a cheaper AI test bed based on GPUs instead of CPUs. This isn't the first use of GPU computation, but it might be one of the most ambitious. While CPUs are easy to network and combine, GPUs are much more temperamental.

By utilizing GPUs as the muscle behind an AI program, Ng claims first-generation rigs could cost as little as $20,000. That’s definitely out of reach of the consumer market, but well within the budgets of many computer science researchers. The original Google Deep Learning system cost over $1 million. The goal here is to do for AI research what Apple and Microsoft did for the personal computer.

To test his hypothesis about GPU-driven Deep Learning, Ng and his team built a larger version of the proposed platform costing about $100,000. It utilized 64 Nvidia GTX 680 GPUs on 16 computers. It was able to accomplish the same cat-spotting tasks as the Google system, which needed 1,000 computers to operate.

Deep Learning might be the best route to a true AI system if scientists are able to harmonize GPU computing. Ng and his team are working on custom Nvidia CUDA code that makes the magic happen by efficiently combining resources and allowing for fast task switching among the connected graphics processors.
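The paper's actual implementation relies on MPI and hand-tuned CUDA kernels, but the basic idea of combining the resources of many processors can be illustrated with a toy data-parallel sketch: each "device" computes a partial gradient on its own slice of the batch, the partial gradients are averaged, and one shared weight update is applied. This is a conceptual illustration only, not Ng's code; `fake_gradient` is a made-up stand-in for the real per-GPU computation.

```python
# Conceptual sketch of data-parallel gradient averaging across devices.
# NOT the paper's implementation (which uses MPI + custom CUDA kernels);
# fake_gradient is an illustrative stand-in for real GPU work.

def fake_gradient(weights, shard):
    # Stand-in for a per-device gradient computed on one data shard.
    return [sum(shard) * w for w in weights]

def data_parallel_step(weights, batch, num_devices, lr=0.01):
    # Split the batch into one shard per "device".
    shards = [batch[i::num_devices] for i in range(num_devices)]
    # Each device computes a partial gradient on its shard.
    grads = [fake_gradient(weights, s) for s in shards]
    # Average the partial gradients (the communication step that the
    # custom multi-GPU code must make fast).
    avg = [sum(g[i] for g in grads) / num_devices
           for i in range(len(weights))]
    # Apply one shared update to the replicated weights.
    return [w - lr * g for w, g in zip(weights, avg)]

weights = [1.0, 2.0]
batch = [0.1, 0.2, 0.3, 0.4]
new_weights = data_parallel_step(weights, batch, num_devices=2)
```

In a real multi-GPU system, the averaging step is where most of the engineering effort goes, since moving gradients between graphics cards is far slower than computing them.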

Ng has not yet decided if the specialized software and hardware designed to test his hypothesis will be open source. Even if it isn’t, the paper explains some of the algorithms and techniques involved. Other AI researchers are sure to follow up if only to prove Ng wrong.

Now read: IBM takes a step towards building artificial semiconductor synapses

Research paper: Deep learning with COTS HPC systems (PDF)