The thing to realize is that recreating human intelligence is impossible. We can't even grasp how the brain works.

"Clearly, we can't build a brain," says Dharmendra Modha. "We don't have the organic technology, nor do we have the knowledge."

But Modha is trying to build one nonetheless, together with a vast team of researchers inside tech giant IBM and across various academic and government labs. The project began in 2006, when the India-born computer scientist founded the Almaden Institute of Cognitive Computing at an IBM research lab in Silicon Valley, and in the years since, he and his team have worked to recreate biological intelligence with computer hardware and software, first building a machine that mimicked a mouse brain and then a cat's and then a monkey's.

This week, Modha and his team tickled our collective imagination yet again with the news that they've developed a new kind of computer programming language – a language specifically suited to building applications that imitate the brain's ability to grasp the world around us, sift through the ambiguities, and immediately respond. The team envisions a headset for the blind that replaces seeing-eye dogs, and a solar-powered contraption that floats on the sea, looking for mines and checking for oil spills.

These are fascinating propositions – for many reasons – and predictably, the tech press is abuzz over IBM's "breakthrough" in cognitive computing. The only problem is that we won't see any of these applications any time soon. Even the new programming language is still in its infancy.

>'Clearly, we can't build a brain. We don't have the organic technology, nor do we have the knowledge.' Dharmendra Modha

In the short term, Modha's project won't change anything. It's more ambitious than that. After striving to clone the brain using everyday computer chips and good old-fashioned C programming code, the team is now building a new type of chip – as well as a new programming language – that more closely resembles the brain. Or at least the brain as we know it. They're breaking with 70 years of tradition to rethink the way we design computers.

The point is that Modha and his team have not cloned a mouse brain or a cat brain or a monkey brain. They've merely tried to replicate parts of these biological systems – and they've come to the realization that they can only go so far with existing hardware and software.

In the 1940s, a polymath named John von Neumann described a digital computer, laying out a basic architecture that included a central processor for spinning through a list of instructions, a memory system for juggling data on behalf of the processor, and a storage system for housing software that tells the processor what to do. Today's computers still rely on this "von Neumann architecture," but Modha and his team envision something entirely different.
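The von Neumann cycle described above – fetch an instruction, decode it, execute it, with program and data sharing one memory – can be illustrated with a toy interpreter. This is a teaching sketch, not anything from IBM; the three-instruction set and the `run` function are invented for illustration.

```python
# Toy von Neumann machine: a single processor loops over instructions
# stored in memory, one at a time, reading and writing a shared store.

def run(program, memory):
    """Fetch-decode-execute loop over a tiny three-instruction set."""
    pc = 0  # program counter: index of the next instruction to fetch
    acc = 0  # accumulator register
    while pc < len(program):
        op, arg = program[pc]      # fetch
        if op == "LOAD":           # decode + execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        pc += 1                    # advance to the next instruction
    return memory

# Add memory[0] and memory[1], storing the sum in memory[2].
mem = run([("LOAD", 0), ("ADD", 1), ("STORE", 2)], [3, 4, 0])
print(mem)  # [3, 4, 7]
```

The point of the sketch is the bottleneck Modha's team wants to escape: everything funnels through that one processor-memory channel, one step at a time.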

Known as a "neurosynaptic core," their new chip includes hardware that mimics neurons and synapses – the basic building blocks of our nervous system and the connections between them. And in recreating the basics of the brain, Modha says, the chip eschews traditional methods of computer design. "This tiny little neurosynaptic core really breaks from the von Neumann architecture," Modha says. "It integrates processor and memory and communication." The idea is that you could then piece multiple cores together – creating ever larger systems, spanning an ever larger number of fake neurons and synapses.

IBM's new programming language then provides the tools needed to map software onto this vast array of neurosynaptic cores. "It is not meaningful to adapt languages from the past era to this architecture," Modha says. "It is like forcing a square peg into a round hole." With the new language, coders can create a self-contained software module that executes a particular function across all cores, such as the ability to detect sound or identify color. These modules – or corelets – can then be combined to create larger applications.
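IBM has not published the details of its corelet language, so here is a speculative sketch of the composition idea Modha describes – self-contained modules, each performing one function, wired together into larger applications. The `Corelet` class, the `compose` helper, and the toy "detect loud samples" module are all invented names, not IBM's API.

```python
# Hypothetical illustration of corelet-style composition: each module
# wraps one function, and applications are built by chaining modules
# rather than by writing one monolithic program.

class Corelet:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn  # the single function this module performs

    def __call__(self, signal):
        return self.fn(signal)

def compose(*corelets):
    """Chain corelets so each one's output feeds the next one's input."""
    def pipeline(signal):
        for c in corelets:
            signal = c(signal)
        return signal
    return Corelet("+".join(c.name for c in corelets), pipeline)

# Two toy modules: one keeps "loud" samples, one counts them.
detect_loud = Corelet("detect_loud", lambda xs: [x for x in xs if x > 0.5])
count = Corelet("count", len)

app = compose(detect_loud, count)
print(app([0.1, 0.7, 0.9, 0.3]))  # 2
```

The design choice being mimicked: a corelet hides its internals behind inputs and outputs, so combining two of them requires no knowledge of how either one works inside.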

Modha compares the project to the creation of FORTRAN, the seminal programming language that taught the world how to build software for von Neumann machines. FORTRAN was also designed at IBM.

Dharmendra Modha. Image: Courtesy Dharmendra Modha

But he's quick to say that he can only hope that his cognitive computing work will have the lasting impact of the von Neumann architecture. And he's clear that he and his team haven't even begun to build real-world applications. They're building a foundation for the future – and they aren't that far along.1 They haven't released their programming framework to the public. Though they've built 150 "corelets," the language is still in development.

"There is not really much I can say," says Kunle Olukotun, a Stanford University professor who specializes in parallel computing and programming languages for "weird" architectures. "This is the first I've heard of IBM's new cognitive computer programming language, and there are no details on the web that distinguish the approach from conventional programming languages."

As Modha and his IBM team rebuild the very substrate of computing, others are taking a different route to artificial intelligence. A group centered around University of Toronto professor Geoffrey Hinton is creating new algorithms that seek to recreate brain behavior using existing computer hardware. Hinton now spends at least part of his time at Google, and his algorithmic research is already reflected in systems used to recognize speech on Android phones.

In other words, it's useful – right now. But IBM's Modha is looking further ahead, and he says that the new computing substrate being built by IBM could be used in tandem with Hinton's work or any other algorithmic system that seeks to clone the behavior of the brain. He's working toward the same goal as the Hinton crew, but he's leaping over the short-term benefit.

Is this worth doing? Of course it is. "Trying to mimic the way the brain does computation – in order to achieve the sorts of capabilities that biological systems have – is a worthy goal, and it's something that lots of people are interested in," says Olukotun. But the goal won't be realized for quite a while. If at all.

1Correction 1:20pm EST 08/09/13: An earlier version of this story said that IBM was only simulating its neurosynaptic cores on existing hardware. The cores have been demonstrated in silicon.