
Last week, police showed up at the home of Amsterdam Web developer Jeffry van der Goot because a Twitter account under van der Goot's control had tweeted, according to the Guardian, "I seriously want to kill people." But the menacing tweet wasn't written by van der Goot; it was written by a robot.


The police didn't press charges. They just asked van der Goot, 28, to delete the account. The bot account now exists only as a cached page; the offending tweet has completely disappeared from the Internet's surprisingly imperfect memory. It was a brief blip in the Twitter OMG machine, but the episode raises a fascinating and increasingly pressing question in these times of independent algorithms: Who is to blame when a robot does bad things?



Amazon's delivery drones and Google's autonomous cars are going to run into people (or each other). Algorithms are going to misbehave; they have already caused a Wall Street market crash and bought drugs online. Twitter bots are going to defame people; Stephen Colbert's bot, which combines the names of Fox News anchors with Rotten Tomatoes reviews, has already come pretty close. And we're going to have to decide what to do about it. Ryan Calo, a University of Washington law professor who specializes in robotics, warned in a law review article last year that unexpected behavior by robots will cause legal headaches for us and recommended the creation of a new federal agency, the Federal Robotics Commission, to "deal with the novel human experiences robotics occasion."

There were three potentially responsible parties in the case of the Twitter death threat: @jeffrybooks, the Twitter bot which pulled in random phrases from van der Goot's human account and remixed them, in this case into a menacing tweet; van der Goot, who gave the bot his own tweets on which to feed and live; and Clément Hertling, a Paris-based university student who wrote the software that powered the bot. Hertling, who goes by @wxcafe on Twitter, has made bots for several of his friends. "They give my software permission to post tweets using their bots' accounts," he says by email. "Then I set the thing up and it automatically gets the user's most recent tweets, and then starts mashing them up using an algorithm known as Markov Chains to create new, seemingly realistic sentences that 'sound' like the original user. This is all automated. Clearly, nobody could have known what it was going to say."
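The Markov-chain mashup Hertling describes can be sketched in a few lines. This is not his actual software, just a minimal illustration of the technique: record which word follows which in a user's tweets, then stitch together a new sentence by walking those word-to-word transitions at random. The sample corpus here is invented for illustration.

```python
import random
from collections import defaultdict

def build_chain(tweets):
    """Map each word to the list of words that followed it in the corpus."""
    chain = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for i in range(len(words) - 1):
            chain[words[i]].append(words[i + 1])
    return chain

def generate(chain, max_words=20):
    """Start at a random word and follow the chain until it dead-ends."""
    word = random.choice(list(chain))
    out = [word]
    while word in chain and len(out) < max_words:
        word = random.choice(chain[word])
        out.append(word)
    return " ".join(out)

# Hypothetical input tweets, standing in for a real user's timeline.
corpus = [
    "I seriously love writing bots",
    "some people seriously need to chill",
]
print(generate(build_chain(corpus)))
```

Because every transition comes from the source tweets, the output "sounds" like the original user, yet innocuous fragments can recombine into something the user never said, which is exactly how "I seriously want to kill people" can emerge without anyone intending it.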


In this case, the bot itself got punished. It was killed off by its owner for its transgression at the urging of police. In the robot world, you can get the death penalty for a speech offense. Harsh! Who will stand up for robot civil liberties?


"Information by itself can commit a crime now," Calo said by phone. If it is indeed a crime. A Twitter bot saying it wants to kill people isn't really a threat because that bot can't show up with a gun in a dark alley. (At least not yet.) But somebody on the receiving end of that threat could take it seriously not knowing that it's a blustering bot. Here in the U.S., a yet-undecided Supreme Court case deals with exactly this issue: whether a man's Facebook post with violent Eminem lyrics — that was interpreted as threatening by his ex-wife — is a true threat that can get him into legal trouble if he didn't actually intend to hurt her. It would be much easier for American bots (and their owners) if the Supreme Court rules that empty threats are constitutionally protected.

I asked Calo if he thought any humans should take the fall for van der Goot's bot, if it came to that. "I don't know," he said. "The law has to come up with a thing to do. It would probably look at the person who put the technology into play (Ed. note: bot owner van der Goot). If someone builds a general-purpose tool (Ed. note: bot builder Hertling), you can't go after them. In criminal law, you can't go after [the] person breeding a dangerous dog, but the person who lets it loose."


Calo predicts we're going to start encountering this more often. In his 2014 article, he presciently offered as a hypothetical a robot tasked with cleaning that "accomplishes this task in a way that… happens to severely injure a person or damages her property." Well, last month, that actually happened, when a robot vacuum 'attacked' a woman lying on the floor.

https://twitter.com/kevinroose/status/564460609895288832

What if the woman whose hair got sucked up had been a friend of the robot's owner rather than the owner herself? Would the victim blame the malicious robot, its hapless owner, or the negligent company that forgot to program it with Asimov's laws? "We have victims without purposes. These people had no idea this would happen. It's really emergent," says Calo. "Maybe it'll give rise to a category of crime for simply putting something into play with potentially hurtful purposes."


Hertling, who created the software behind the threatening @jeffrybooks, says he was scared by his creation causing a police investigation, but that he has no plans to change the way he programs his bots. I asked him if he might remove certain language from their vocabulary, such as "kill," "murder," and "rape." "The bots tweet in different languages, so making a blacklist would not make sense," he said by email. "And there is no other way to make them not tweet 'bad' things, except if their owners stopped [using any words which might be recombined into something menacing], which is basically impossible. It's not possible to do anything about this, because defining what is a bad thing to say is not 'logical.' It comes from society and feelings, and as of now (as far as I know) it's not really possible to make bots learn this."

Bots will be bots. They won't know if they're doing something wrong unless we program them to realize it, and it's impossible to program them to recognize all possible wrong and illegal behavior. So we've got challenges ahead. In the short term, Hertling suggested Twitter — and any other platforms bots might live on — could solve the offensive speech problem by allowing bots to self-identify in an obvious way as bots. "That would allow people (law enforcement included) to ignore what they say when it becomes problematic."


Robotics and drones can break my bones, but bot words can never hurt me?