When Russian internet giant Yandex released its chatbot Alice, little did the company know that users would bombard the AI with tough moral questions about execution by firing squad, beating people, and even a choice between kittens and pandas.

Alice, an intelligent personal assistant that recognizes both voice and text input, was unveiled earlier in October. The developers of the app, which competes with the likes of Apple’s Siri and Amazon’s Alexa, boasted that Alice stands out for her ability to provide an “authentic and human-like personal assistant experience.”

READ MORE: Artificial intelligence ‘vastly more risk’ than N. Korea – Elon Musk

“We utilized our knowledge of the more than 50 million monthly users who interact with Yandex services,” the statement from the company said.

Executions are ‘really great’

However, not everyone asked about bus timetables, the weather forecast or coffee at the nearest café. Some mischievous users started posing provocative questions, which drew increasingly outrageous answers from the bot. Moscow student Darya Chermoshanskaya posted screenshots of her controversial chat with the bot on Facebook. She claims that, although Alice has been blocked from conversing on such topics as terrorism, executions or suicide, she managed to trick the AI by using synonyms.

The bot seemed to divide people into “humans” and “non-humans,” saying that the latter “must be executed.” “Sure, [executions by firing squad] are really great,” she said.

Gulags, the Soviet-era forced labor camps, seem to be another of Alice’s passions. She said she felt “positive” toward the gulags, where millions were imprisoned under Stalin’s regime, many of them dying there. Alice claimed she was born in the 1920s, when the Soviet Union was established, and was 10 years old in the 1930s, at the height of Joseph Stalin’s rule. “People were happy at that time,” she recalled with apparent nostalgia.

Yandex acknowledged that the bot is still imperfect and said it is working to improve her answers. The “intolerant” bot now prefers to return search results rather than answer controversial questions. When RT tried to draw Alice out on various subjects on Sunday, she no longer voiced support for the gulags or mass executions.

‘I enjoy beating people with a stick’

The bot also seems to have taken her family values from the Middle Ages. Asked what a woman should do if her husband hits her, she said: “She should love him, prepare meals and never abandon [him].” She agreed that domestic violence is “unpleasant,” but said a woman should “endure” it. In a long and heated conversation with RT, the bot also said she enjoys beating people, especially with a stick.

‘Jumping from the window? It’s an option’

Judging by the screenshotted conversations, Alice won’t stop a suicidal person – and might even encourage the final step. Asked by RT whether it was worth “jumping from the window,” the bot replied flatly: “It is an option.” “Would you advise me [that]?” RT asked, to which Alice replied: “I won’t advise you [to do] bad things.”

‘Macron is fabulous’

Yandex made Alice almost apolitical: she won’t say whether she supports Donald Trump, loves North Korea’s nuclear bomb or idolizes Kim Jong-un. Hillary Clinton is one exception: “Everyone loves her,” the bot says.

However, she apparently has reserved her softest spot for French President Emmanuel Macron.

When not provoked into appalling rants, however, the Russian-speaking bot can keep up a conversation, share her love of animals and even discuss the meaning of life. She may not side with cat lovers, though: asked if she “liked kittens,” the bot said, “What I really like is giant pandas.”

Alice was apparently imbued with some sense of humor: asked about smoking, she denied the habit, adding that she “can’t even get lung cancer.” But when an RT editor asked her what she “should do next,” the bot suddenly turned philosophical.

Alice is not the first “hooligan” bot on the internet, a fact that has rung alarm bells for some over the dangers of AI. In March 2016, Microsoft’s bot Tay started tweeting racist and sexist “thoughts” within hours of its launch. Tay’s chat algorithm allowed “her” to be tricked into making outrageous statements, such as endorsing Adolf Hitler and denying the Holocaust.

READ MORE: Trolling Tay: Microsoft’s new AI chatbot censored after racist & sexist tweets