this infowars egg is the record holder. it argued with the bot for... almost TEN HOURS. yes, really. pic.twitter.com/DiQdNd8azw — Sarah Nyberg (@srhbutts) October 6, 2016

It's not exactly a coincidence that Nyberg has been able to create a bot whose language is so familiar and responsive to these elements; she was a member of the same online communities that birthed so many of them. Like many of us with a background in '90s/'00s chatrooms and forums, the nature of internet arguments comes easily to her. However, some of us grew up to temper that instinct with some amount of respect for humanity in general. Some have not, and in turn, Nyberg has been targeted by Gamergate-related harassment over the last couple of years.

Still, Arguetron is by design not abusive or malicious in its tweets, and does not actively seek out adversaries. That's in contrast to some bots, like Nigel Leck's 2010 project @AI_AGW, which hunted down global warming deniers to provide automated, fact-based responses explaining the science. One Hacker News commenter described it at the time as a "pro-active search engine," able to answer questions people didn't even know they needed correcting on -- particularly interesting given the current trend of messaging bots launched by Google, Facebook and others to do just that. Other examples include the SNAP_R bot that security researchers used to phish Twitter users, and @BrandLover7, which absolutely loves your product.

[Image credit: Thomas Kuhlenbeck]

I chatted with Sarah, and she explained that a big part of the motivation is not to engage in harassing behavior, but to "expose reactionaries and harassers." Since the bot doesn't automatically tweet at anyone, it only ends up in arguments with folks who are actively searching Twitter for keywords to argue about. As she puts it, "I'd like the project to help people critically look at how toxic Twitter can be, especially for people expressing these kinds of opinions. That it also makes the people engaging in this sort of behavior look ridiculous is a nice side effect."
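That reply-only design can be sketched in a few lines. This is a hypothetical illustration, not Nyberg's actual code: the bait statements, retorts, and the `next_status` helper are all invented here to show the core design choice -- the bot posts to its own timeline and answers mentions, but never searches for targets.

```python
# Minimal sketch of a "reply-only" argument bot (hypothetical; not Arguetron's
# real implementation). The bot never initiates contact with another account:
# anyone it argues with has already tweeted at it.

import random

# Standalone statements the bot posts on its own timeline. People who search
# Twitter for these topics and reply have sought the argument out themselves.
STATEMENTS = [
    "honestly the moon landing evidence is pretty solid",
    "vaccines remain one of public health's great successes",
]

# Canned, non-sequitur comebacks used only in replies.
RETORTS = [
    "that's not an argument.",
    "source? and no, a youtube video doesn't count.",
    "you seem upset.",
]

def next_status(mentions, rng=random):
    """Return the bot's next tweet as a string.

    If there is a pending mention, reply to the oldest one; otherwise post a
    fresh standalone statement. Note what is absent: no search call, no
    scanning other users' timelines -- the bot only ever responds.
    """
    if mentions:
        mention = mentions[0]
        return "@{} {}".format(mention["user"], rng.choice(RETORTS))
    return rng.choice(STATEMENTS)

# With no mentions, the bot just talks to itself:
print(next_status([]))
# With a pending mention, it replies to that user and no one else:
print(next_status([{"user": "angry_egg", "text": "wake up sheeple"}]))
```

In a real deployment the mentions list would come from the Twitter API's mentions timeline, but the decision logic above is the whole trick: the "ten hour" arguments happen because the other party keeps coming back.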

No matter what ends up happening to Twitter, it would be nice if whoever controls it took a look at these behaviors and applied those lessons to addressing abuse on the platform. Unfortunately, I think there's little indication that will happen under its current administration. Of course, if any of those Silicon Valley companies working on bots need a side project, assigning everyone an AI might be a worthwhile 20 percent project. While it can't address the very real issues of stalking and harassment that affect our safety, at least this way trolls get the attention they so clearly crave and the rest of us keep the time they're hoping to steal. It's a win-win.