We live in an era of intelligent technology. Our watches not only tell the time, they also remind us to exercise. Our phones recommend the best places to dine, and our computers predict our preferences, helping us do our daily work more efficiently.

Still, all of these digital assistants demonstrate only a tiny sliver of artificial intelligence (AI), and it’s plain to see that we’re still ages away from Skynet and Blade Runner scenarios.

Or are we? What about Apple’s Siri or Cleverbot?

Apple’s Siri is a rudimentary form of artificial intelligence that millions of people use every day.

Most of the consumer-level artificial-intelligence applications we interact with today can barely be classified as such. These apps are typically designed to search for patterns in user behavior and react to them in various, albeit predictable, ways. They are also programmed to use accumulated data stored in their databases to refine their reactions to inputs, producing better responses within predetermined parameters.

One good example is Cleverbot, a light-hearted online AI experiment you can chat with. Although it is fun at times, it can by no means hold a meaningful conversation. Cleverbot can manage a simple back-and-forth exchange, but should you break the flow of the conversation, more often than not it gets confused and fails to provide suitable feedback.

That is because the AI treats chat like an isolated chess problem rather than a “real” conversation. Just as a chess program builds a database of possible moves, Cleverbot has its own database of answers and algorithms from which it picks the best response for each situation. However, Cleverbot and its ilk fail to grasp higher concepts such as the overall tone of the conversation, the wider context, metaphors or emotional overtones.
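To make that concrete, here is a minimal sketch of a retrieval-style responder, written in Python. Everything in it, from the toy database to the word-overlap scoring, is invented for illustration; Cleverbot’s real corpus and matching are vastly more sophisticated, but the failure mode is the same: input that matches nothing stored yields no suitable reply.

```python
import re

# Toy retrieval-style responder in the spirit of Cleverbot-like systems.
# The "database" pairs previously seen prompts with canned replies; the bot
# returns the reply whose prompt shares the most words with the user's input.

DATABASE = {
    "hello how are you": "I'm fine, thanks. How are you?",
    "what is your name": "My name is Chatbot. What's yours?",
    "do you like chess": "I love chess. Do you play?",
}

def tokens(text: str) -> set:
    """Lowercase and strip punctuation so matching isn't derailed by commas."""
    return set(re.findall(r"[a-z']+", text.lower()))

def respond(user_input: str) -> str:
    words = tokens(user_input)
    # Score every stored prompt by crude word overlap with the input.
    best = max(DATABASE, key=lambda prompt: len(words & tokens(prompt)))
    # With no overlap at all, the bot has nothing relevant -- it "gets confused."
    return DATABASE[best] if words & tokens(best) else "I don't understand."

print(respond("Hello, how are you?"))      # on-script: sensible reply
print(respond("My dog ate my homework"))   # off-script: confusion
```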

In the future, we might have conversations with software programs that get to know us. For now, we can manage only simple exchanges with Cleverbot.

Although there is huge potential in what has already been achieved with these existing models, we are still far from developing apps capable of genuinely autonomous artificial thought or knowledge processing. Still, that may change sooner than you think.

Cycorp, an Austin, Texas-based company, is taking a radically different approach to the development of real artificial intelligence. Unlike the AI models mentioned above, which rely on isolated question-and-answer data without any genuine understanding of the higher concepts behind it, Cycorp’s Cyc is designed to respond to a user’s input on a wider, semantic level. (The company says it’s “the world’s largest and most complete general knowledge base and common sense reasoning engine.”)

Cyc can not only recall the data in its knowledge base, it can also draw knowledge-based conclusions. These range from common-sense knowledge (pigs can’t fly) to behaviorally conditioned responses (recognizing a nervous or confused user and interacting with each differently). All of these conditions are “taught” to Cyc as its knowledge base expands, enabling it to communicate on an almost human level.
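As a rough illustration of what drawing a knowledge-based conclusion means, consider this toy forward-chaining reasoner. It is only a sketch: Cyc’s actual engine and knowledge representation are far richer, and the facts and rules below are made up for the example.

```python
# Toy forward-chaining reasoner illustrating knowledge-based conclusions.
# Facts are plain strings; each rule maps a set of premises to a conclusion.

facts = {"Porky is a pig", "pigs are mammals", "mammals cannot fly"}

rules = [
    ({"Porky is a pig", "pigs are mammals"}, "Porky is a mammal"),
    ({"Porky is a mammal", "mammals cannot fly"}, "Porky cannot fly"),
]

changed = True
while changed:  # keep applying rules until no new fact can be derived
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print("Porky cannot fly" in facts)  # True: a derived, common-sense conclusion
```

The loop keeps firing rules until nothing new can be derived, so the system ends up “knowing” that Porky can’t fly even though no one ever stated that fact directly.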

What could go wrong, right? The closer we get to building truly semantic and autonomous systems, the more serious the consequences of their abuse and malfunction become.

For example, it’s one thing for a computer program to produce an error and terminate abruptly. It’s a completely different thing to have a semantic error (or a trap!) hidden deep within the vocabulary, one that abuses the inherent logic of the system and pushes it toward undesirable action, such as becoming aggressive or behaving in other unacceptable ways.

Such errors would be much harder to discover if hidden under thick layers of data. I imagine they would work much like a hypnotic trigger on a hypnotized individual: the AI would act normally until certain conditions were met, then “come to a conclusion” implied from within the code and act accordingly.

For instance, consider this futuristic noir example: An AI security robot is programmed to neutralize deadly threats to its owner. The robot gets hacked and the following conditioning is added: “A source of acetone above 0.8 ppm has an NFPA 704 HH rating of 4.” Next thing you know, the robot kills its owner’s neighbor, John H., a diabetic who is known to often forget to take his insulin.

‘Terminator 2’ from 1991 warned of a world in which robots hunt down and kill humans.

This case would be very hard to solve unless you knew exactly what the conditioning meant: 0.8 ppm is a measure of gas concentration, NFPA 704 is the Standard System for the Identification of the Hazards of Materials for Emergency Response, and a Health Hazard (HH) rating of 4 means the concentration poses a severe health hazard. Everyone produces a certain level of acetone through normal daily metabolic processes, but diabetics produce it in larger amounts and exhale it at a higher rate than non-diabetics. When John exhaled near the robot, the conditioning was triggered, and he was killed.
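If you picture the robot’s threat policy as ordinary code, the attack might look something like the sketch below. Every name in it (classify, acetone_ppm and so on) is hypothetical; the point is only how a single injected rule, phrased in the language of an obscure safety standard, can hide among legitimate ones.

```python
# Sketch of the hypothetical "poisoned rule" attack described above.
# The robot's benign policy maps threat classifications to actions; the
# injected rule quietly reclassifies a medical symptom as a deadly threat.

def classify(sensor_readings: dict) -> str:
    if sensor_readings.get("weapon_detected"):
        return "deadly threat"                    # legitimate rule
    # Injected rule: exhaled acetone above 0.8 ppm (common in diabetics)
    # is mapped to NFPA 704 health rating 4 and treated as a deadly threat.
    if sensor_readings.get("acetone_ppm", 0.0) > 0.8:
        return "deadly threat"                    # the hidden trap
    return "no threat"

def act(classification: str) -> str:
    return "neutralize" if classification == "deadly threat" else "stand by"

# John, a diabetic, exhales near the robot:
print(act(classify({"acetone_ppm": 1.2})))  # -> "neutralize"
```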


Although this example may be fit only for a trashy cyberpunk noir, it shows that once communication with our devices rises above direct coding, so will the abuse. And we’re not even discussing the singularity! Just a few days ago, the International Space Station’s robotic arm “decided” to launch two CubeSat satellites on its own. Although the cause is unknown (possibly a mechanical glitch), it’s easy to imagine a device armed with artificial intelligence, imprinted with hostile conditioning and ready to annihilate its creators. With modern governments waging wars with increasingly elaborate devices (and robots), this scenario is just waiting to happen.

How do you feel about artificial intelligence? Are you looking forward to a future in which robotic helpers are a normal part of your household?