Futurists and others often paint artificial intelligence as something to fear, but what about the everyday, humdrum actions a robot may have to carry out, such as knowing that you can put food on a table but you can't eat the table?

Turns out, AI is not yet sophisticated enough to grasp some common-sense knowledge about how words, especially those for physical objects, interact with one another, a group of scientists says.

"When machine-learning researchers turn robots or artificially intelligent agents loose in unstructured environments, they try all kinds of crazy stuff," study co-author Ben Murdoch, an undergraduate student of computer science at Brigham Young University in Utah, said in a statement. "The common-sense understanding of what you can do with objects is utterly missing, and we end up with robots who will spend thousands of hours trying to eat the table." [5 Intriguing Uses for Artificial Intelligence (That Aren't Killer Robots)]

To help AI learn what actions are appropriate for an object, a team of computer scientists led by doctoral candidate Nancy Fulda of Brigham Young University read their artificial-intelligence systems the ultimate bedtime story: They downloaded the entirety of Wikipedia as it existed about 16 months ago and had their AI read it word for word.

Fulda and her team used a simple neural network — a type of AI that processes information in a way loosely similar to how interconnected neurons in the brain do — to scan Wikipedia. The neural network kept track of each word, along with the four words preceding and following it. With that information, the AI could learn to predict which words might surround a given target word and compare those predictions with what was actually there.

"So you say to the AI, 'You have one task: Given the word in the middle, predict all the words around it,'" Fulda said. The researchers repeated this process for every word in the English language. With that information, the AI can put together a base of common-sense knowledge that includes what sorts of verbs might go with a given noun and vice versa.

The ultimate test? Having the AI play an old-school text-based adventure game like those popular in the 1980s, in which the player navigates a fantasy scenario using simple typed commands, a format that predates widespread graphics displays in gaming.

"What will usually happen is the AI matches nouns and verbs to try and win, but it will try all sorts of things like 'Bulldoze Santa Claus,'" Fulda told Live Science. "But when you use our algorithm, it tries common-sense things. It may not be the correct answer, but it makes sense."

For example, when faced with a locked house in a forest, the trained AI will try commands like "knock door," which is a typical response, but will also say things like "irrigate forest" and "burn house." While those don't make sense within the scope of the game, they demonstrate an understanding of the things that one can do with a forest or house.
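One plausible way to get this behavior from word embeddings is to score verb–noun pairs by how close their vectors lie in the learned space, for example with cosine similarity. The sketch below uses made-up toy vectors purely for illustration; real vectors would come from training on the Wikipedia corpus, and this is an assumed scoring scheme, not the team's published method:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Toy 3-dimensional "embeddings" (fabricated for illustration only).
vec = {
    "burn":  [0.9, 0.1, 0.0],
    "house": [0.8, 0.3, 0.1],
    "eat":   [0.0, 0.9, 0.2],
}

# An agent could rank candidate actions for "house" by similarity:
burn_score = cosine(vec["burn"], vec["house"])
eat_score = cosine(vec["eat"], vec["house"])
```

With these toy values, "burn house" scores higher than "eat house", so the agent would favor the action that makes physical sense, even if it isn't the winning move in the game.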

Wikipedia is notoriously fluid, as anyone can edit a page, but Fulda isn't concerned that an internet troll might mess with her artificial-intelligence agent. That's because she used a snapshot of Wikipedia, not a live feed. "Most common-sense knowledge doesn't change that fast," she said.

The real concern, she said, is that all of society's biases and prejudices are embedded in the information found on Wikipedia; therefore, artificial-intelligence agents also learn those biases. The biases probably won't affect her AI as it learns to interact with the physical world, she said, but they could cause problems in projects with broader scopes.

In that sense, Fulda explained, "common sense does not mean common knowledge that is true, but supposed knowledge that is common."

Originally published on Live Science.