The robot Ava undergoes a version of the Turing Test in the 2015 film "Ex Machina."

One of the biggest misconceptions about artificial intelligence (AI) is thinking that it must pass the Turing Test to be truly intelligent.

But AI scientists say the test is basically worthless and distracts people from real AI research.

"Almost nobody in AI is working on passing the Turing Test, except maybe as a hobby," Stuart Russell, an AI researcher at University of California, Berkeley, told Tech Insider in an interview. "The people who do work on passing the Turing Test in these various competitions, I wouldn't describe them as mainstream AI researchers."

Named for the famed computer scientist Alan Turing, who proposed it in a 1950 paper, the Turing Test tasks a human evaluator with determining whether they are conversing with a human or a machine. If the machine can pass for human, then it's passed the test.

Last summer, Eugene Goostman, a computer program with the persona of a teenage Ukrainian boy, passed a Turing Test competition organized by the University of Reading, according to a press release.

The program fooled at least a third of the 30 judges into thinking it was human. Kevin Warwick, an AI researcher and one of the event's organizers, declared that this was the first time a computer program had truly passed the test, in a "milestone that will go down in history as one of the most exciting."

But soon after the announcement, critics started testing Eugene out for themselves.

The chatbot is no longer available online, but most transcripts show how it relied on humor and misdirection to confuse the judges, and often gave repetitive, unintelligible responses.

In short, it was pretty lame.

In fact, the program's design as a 13-year-old boy with a bad grasp of English may have been why at least 10 of the judges were fooled.

According to The Guardian, Eugene's creator Vladimir Veselov said his age made for a perfect smokescreen for the program's failings, making it "perfectly reasonable that he doesn't know anything."

Many researchers, like Gary Marcus, a cognitive scientist at New York University, get frustrated when the press picks up on these kinds of stories. He told Science Magazine that such competitions test programs that are more akin to "parlor tricks" than to a "program [that] is genuinely intelligent."

Detractors like Marcus and Russell argue that the Turing Test measures just one aspect of intelligence. A single test for conversation neglects the vast number of tasks AI researchers have been working to improve separately, including vision, common-sense reasoning, and even physical manipulation and locomotion, according to Science.

Russell, who is also co-author of the standard textbook "Artificial Intelligence: A Modern Approach," said the Turing Test wasn't even supposed to be taken literally — it's a thought experiment used to show how the intelligence of AI should rely more on behavior than on whether it is self-aware.

Benedict Cumberbatch as Alan Turing in the biopic "The Imitation Game."

"It wasn't designed as the goal of AI, it wasn't designed to create a research agenda to work towards," he said. "It was designed as a thought experiment to explain to people who were very skeptical at the time that the possibility of intelligent machines did not depend on achieving consciousness, that you could have a machine that would behave intelligently ... because it was behaving indistinguishably from a human being."

Russell isn't alone in his opinion.

Marvin Minsky, one of the founding fathers of AI science, condemned one competition called the Loebner Prize as a farce, according to Salon. Minsky called it "obnoxious and stupid" and offered his own money to anyone who could convince Hugh Loebner, the competition's namesake who put up his own money for the prize, to cancel it altogether.

When asked what researchers are actually working on, Russell mentioned improving AI's "reasoning, learning, decision making" capabilities.

Luckily, NYU researcher Marcus is designing a series of tests that focus on just those things, according to Science. One proposed test would require a machine to understand "grammatically ambiguous sentences" that most humans would understand.

For example, with the sentence "the trophy would not fit in the brown suitcase because it was too big," most people would understand that the trophy was too big, not the suitcase. Such understanding is often difficult to program, according to Science.
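Sentences like this are known in AI research as Winograd schemas: pairs of sentences where swapping a single word flips which noun the pronoun refers to, so a program can't get both right by surface tricks alone. A minimal sketch of how such a pair works (the data structure and the naive baseline below are illustrative assumptions, not Marcus's actual test suite):

```python
# A sketch of one Winograd-style schema pair. Swapping the "special
# word" ("big" vs. "small") flips the correct referent of "it".
SCHEMA = {
    "template": "The trophy would not fit in the brown suitcase "
                "because it was too {word}.",
    "candidates": ("trophy", "suitcase"),
    # What a human would say "it" refers to, per special word:
    "answers": {"big": "trophy", "small": "suitcase"},
}

def correct_referent(word: str) -> str:
    """Return the noun a human reader resolves 'it' to."""
    return SCHEMA["answers"][word]

def naive_guess(word: str) -> str:
    """A surface-level heuristic: always pick the noun nearest to the
    pronoun. It gives the same answer for both variants, which is
    exactly the failure such tests are designed to expose."""
    return "suitcase"  # nearest preceding noun to "it" in both variants

for word in ("big", "small"):
    sentence = SCHEMA["template"].format(word=word)
    print(sentence, "->", correct_referent(word),
          "| naive guess:", naive_guess(word))
```

The point of the pair structure is that no word-proximity or word-frequency shortcut answers both variants correctly; resolving "it" requires knowing that big things don't fit inside small things.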

Marcus hopes that the new competitions would "motivate researchers to develop machines with a deeper understanding of the world."