A cluster of computers on Carnegie Mellon’s campus named NELL, formally the Never-Ending Language Learning system, has attracted significant attention this week thanks to a NY Times article, “Aiming To Learn As We Do, A Machine Teaches Itself.”

Indeed, the eight-month-old computer system attempts to “teach” itself by perpetually scanning slices of the web, looking at thousands of sites simultaneously to find facts that fit into semantic buckets (like athletes, academic fields, emotions and companies) and details related to these nouns. The project, supported by federal grants, a $1 million check from Google and an M45 supercomputer cluster donated by Yahoo, is trying to break down the longstanding barrier between computers and semantics.
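To make the idea concrete, here is a minimal sketch of what filing web text into semantic buckets might look like. This is not CMU's actual code; the categories, the toy "X is a/an Y" extraction rule and the example sentences are all assumptions for illustration:

```python
import re

# Toy knowledge base: semantic buckets mapping to sets of known instances.
knowledge_base = {
    "athletes": {"Michael Jordan"},
    "companies": {"Google", "Yahoo"},
}

# Crude extraction patterns of the form "X is a/an CATEGORY-NOUN".
PATTERNS = {
    "companies": re.compile(r"([A-Z][\w ]+?) is a company"),
    "athletes": re.compile(r"([A-Z][\w ]+?) is an athlete"),
}

def extract_facts(text):
    """Scan a slice of text and file candidate facts into their buckets."""
    for category, pattern in PATTERNS.items():
        for match in pattern.findall(text):
            knowledge_base[category].add(match.strip())

extract_facts("Apple is a company. Serena Williams is an athlete.")
```

The real system runs vastly more sophisticated extractors over thousands of pages at once, and weighs statistical evidence rather than trusting a single sentence, but the core loop, scan text, match patterns, grow the buckets, is the same shape.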

This is not the first time researchers have tried to tackle one of the great, elusive white whales of the programming world, but as the NY Times points out, the way NELL proactively creates and continues to expand its knowledge base is unique.

And yet despite all of NELL’s initiative and innovation, she needs help.

She is accurate 80-90% of the time, according to Professor Tom Mitchell, the head of the research team (see our demo with Mitchell above). For the 10-20% where NELL misses the mark, the results can be somewhat comical: according to NELL, for example, AOL’s parent company is CarPhone and the Palm Treo is an Apple product. Mitchell and his small team are trying to clean up errors as they surface, but with nearly 400,000 facts and counting, it’s a gargantuan task.

That’s where the online community comes in.

Currently, you can access NELL’s knowledge base via the “Read The Web” project homepage. Here you can peer into NELL’s brain by searching for terms or download the entire database, if you so desire. The next step is turning readers into pseudo-editors. Starting sometime next month, Mitchell will open NELL’s database to anyone who wants to help edit and flag errors. “We’re soon going to be adding some buttons by these beliefs, as you browse, so if you see a mistake you’ll be able to click a button and say I don’t believe this… I think that will be very valuable to us,” Mitchell says.

While this may remind you of Wikipedia’s model, with its crowdsourced method of submission and editing, the NELL community will be tinkering with the content and, more importantly, the engine. Every correction helps NELL “learn” about facts, relationships and the mechanics of language, which will help it avoid future mistakes. By unleashing the power of the internet on NELL, the system’s intelligence has a chance to grow exponentially, which will help the CMU researchers achieve one of their ultimate goals: getting computers to read, fully understand and return complete sentences.
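A hypothetical sketch of why a single “I don’t believe this” click teaches the engine rather than just deleting one entry: the flagged belief is dropped, and the extraction patterns that produced it lose confidence, so similar mistakes become less likely in the future. The belief triples, the pattern string and the penalty amount here are all invented for illustration:

```python
# Each belief records which (hypothetical) extraction patterns asserted it.
beliefs = {
    ("AOL", "parent_company", "CarPhone"): ["X is owned by Y"],
}
pattern_confidence = {"X is owned by Y": 0.9}

def flag_error(belief):
    """Remove a flagged belief and penalize every pattern that asserted it."""
    for pattern in beliefs.pop(belief, []):
        pattern_confidence[pattern] = max(0.0, pattern_confidence[pattern] - 0.1)

# A reader clicks the "I don't believe this" button on the bogus fact.
flag_error(("AOL", "parent_company", "CarPhone"))
```

The correction thus propagates: the next time the weakened pattern fires on a new page, its candidate fact carries less weight.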

To help with this process, Mitchell is also looking at alternative avenues to raise NELL’s IQ, including game mechanics. He gave TechCrunch a first look at an upcoming game he plans to launch called “Polarity,” created by Edith Law, Burr Settles and Luis von Ahn. In this game, a user is randomly paired with another user on the web. Each player is given a keyword like “longtail salamander”; one player clicks on the words that describe the keyword, while the other clicks on the words that do not. All of these answers feed into NELL’s engine and augment the system’s understanding of relationships.
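The mechanic described above can be sketched in a few lines. This is a guess at how the two roles might be combined into training signal, not the actual Polarity implementation; the candidate words and the agreement rule are assumptions:

```python
def polarity_round(keyword, candidate_words, yes_clicks, no_clicks):
    """Combine both players' clicks into positive/negative evidence.

    yes_clicks: words the "describes it" player selected.
    no_clicks:  words the "does not describe it" player selected.
    """
    evidence = {}
    for word in candidate_words:
        if word in yes_clicks and word not in no_clicks:
            evidence[word] = +1   # players agree: the word fits the keyword
        elif word in no_clicks and word not in yes_clicks:
            evidence[word] = -1   # players agree: the word does not fit
        else:
            evidence[word] = 0    # conflicting or missing clicks; no signal
    return evidence

signal = polarity_round(
    "longtail salamander",
    ["amphibian", "spotted", "metallic"],
    yes_clicks={"amphibian", "spotted"},
    no_clicks={"metallic"},
)
```

The appeal of the two-sided design is that agreement between independent strangers is hard to fake, so the resulting labels are cheap but relatively trustworthy input for the learner.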

So why should we care about NELL, a computer system that is still riddled with errors and so far seems pretty useless compared to Wikipedia or Quora? Because the engine behind NELL, and similar computer systems, could dramatically alter our relationship to computers and the web: the way we search (Google’s participation is no coincidence), how we gather information, even how we get our morning news. Although NELL doesn’t exactly “learn as we do” (I don’t know many people who scan thousands of webpages simultaneously for statistically relevant information), this project is about helping machines comprehend the world the way we do by building a knowledge base that mimics the ones we, as individuals, spend decades building.