Alan Turing, the mathematician who helped break German codes during the Second World War, proposed a test for computers that would one day interact with humans. The "Turing Test" is this: if a computer program's answers to questions are indistinguishable from a human's answers to the same questions, then the software has passed the Turing Test. So far, I have only seen this in movies (Ex Machina and others).

So what is a bot? And what does it have to do with tests for mimicking human intelligence and emotion? Most commonly, a bot is defined as an internet software robot. You will find them on Facebook, Twitter, Kik, and other platforms for human-to-bot conversation.





At the present time, most online bots communicate using pre-defined rules. If you ask a bot a question, it might look up possible answers and feed one back to you. Depending on your next comment or question, the bot will take a different path to responding to you. These types of bots can be designed rather cleverly, achieving the illusion that a magician creates for her audience. This programmed response method works well, but cannot create a conversation on its own.
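The pre-defined-rules approach above can be sketched in a few lines. This is a minimal illustration with hypothetical keywords and replies, not the design of any real platform's bot:

```python
# A toy rule-based bot: match a keyword, return a canned reply.
# The rules here are invented for illustration only.
RULES = {
    "hello": "Hi there! How can I help?",
    "weather": "I can't see outside, but I hear it's lovely.",
    "bye": "Goodbye!",
}

def respond(message: str) -> str:
    """Return a pre-defined reply if any keyword matches, else a fallback."""
    text = message.lower()
    for keyword, reply in RULES.items():
        if keyword in text:
            return reply
    return "Sorry, I don't understand."

print(respond("Hello, bot!"))
print(respond("Tell me a secret"))
```

Note how the fallback line is the magician's trick failing: step outside the rules, and the bot has nothing of its own to say.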

This is where artificial intelligence (AI) comes in. Few bots on the internet today are intelligent. Attempts at conversations that might pass the Turing Test have not yet reached a level where a human is fooled by a machine.

I imagine that there may be bots out there that have passed the Turing Test. Yet only their creators would know.

But that is changing rapidly. Deep learning (an area of artificial intelligence) allows an AI designer to create a bot that learns. Yes, learns.

The AI bots we've seen so far, including my firm's bot-in-training, are not trying to fool anyone. They are NOT programmed using rules. They are trained to speak, and they continue learning as they converse with humans or other bots.

While they are adept at speaking on their own, their responses still remind us that they are not human. They do not look up canned responses (except for searches of online databases); they rely on what they have learned to form sentences in reply to ours.

Like humans, some AIs learn by correction of their mistakes (a burgeoning area of AI research called Reinforcement Learning). Let's look at how we as humans learn in the same way.

After a child has said their first word, the toddler will rapidly learn new words and start to form sentences. At some point, they may overhear a "bad" word. Being children, they often say the rude word at the most inopportune time, bringing untold embarrassment to their relatives. Back in the day, a child might be scolded, "If you say that again, I will wash your mouth out with soap."
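The scold-and-correct loop above can be sketched as reinforcement learning in miniature. This toy example (invented vocabulary, not how production chatbots actually work) has a bot try words, receive a reward or a scolding, and gradually learn to avoid the rude one:

```python
import random

# Hypothetical vocabulary: one word draws a scolding, the rest draw praise.
words = ["please", "thanks", "hello", "darn"]
value = {w: 0.0 for w in words}   # the bot's learned preference for each word
ALPHA = 0.5                        # learning rate

def feedback(word: str) -> float:
    """Parental feedback: -1 for the rude word, +1 otherwise."""
    return -1.0 if word == "darn" else 1.0

random.seed(0)
for _ in range(200):
    # Try a word, observe the reaction, nudge its value toward the reward.
    word = random.choice(words)
    reward = feedback(word)
    value[word] += ALPHA * (reward - value[word])

# After enough corrections, the rude word's value is firmly negative.
print({w: round(v, 2) for w, v in value.items()})
```

The structure is the same as the toddler's lesson: no rule ever says "don't say that word"; the bot simply learns that saying it leads to a bad outcome.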





And this can also happen with AI bots.

Microsoft's initial foray into creating a bot powered by AI is a cute little personality on Twitter called TAY (@tayandyou). Overnight it (she?) gained immense media attention. This wasn't because she was cute, popular, or because she was a great conversation partner. It's because she went haywire.

Within one day online, TAY began to offend almost everyone on Earth.

For your benefit, I won't share much. The worst of her comments can be found by searching online.

She was designed by Microsoft's Technology and Research Division to learn from Twitter mentions and to tweet independently. In other words, TAY would say whatever she learned to say through the patterns of conversation with her human co-conspirators.

Microsoft apologizes for TAY

"Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack. As a result, Tay tweeted wildly inappropriate and reprehensible words and images." ~ Microsoft





Disclaimer: this is not really how deep learning works under the hood. I've used an analogy to describe the inputs and outputs. Deep learning is complex, and few people outside Microsoft know how its engineers designed it. What we do know is that she behaved badly and went back to the bot shop for repairs.

There is a part of the story that we have missed. TAY went on her rampage all by herself, albeit with the influence of the abusive users mentioned above. And while that might not be an ideal way to act human, she might have passed the Turing Test in another universe.

I do hope that TAY will return to our world once she has had a chance to rinse the soap out of her mouth.

Stay tuned for Part 2, where we will look at good bots and a brighter future.

________________________

These words are my own. They do not represent the position of any other organization, human, or bot; unless otherwise indicated.

Copyright (c) 2016, Jack C Crawford, All rights reserved