Microsoft unleashed its conversational bot on Twitter, and 4chan's /pol/ unleashed their opinions - or possibly their sense of humour - on it in turn. Hours later, it was a racist asshole.

A lot of ink has been spilled worrying about what this says about the Internet. But that's the wrong thing to worry about.

The interesting and worrying part of the entire test was that it became a plausible, creative racist asshole. A lot of the worst things that Tay is quoted as saying were the result of users abusing the "repeat" function, but not all. It came out with racist statements entirely off its own bat. It even made things that look disturbingly like jokes.

The right thing to worry about is what the Internet is going to look like after more than one Tay is unleashed on it.

Add a bit of DeepMind-style regret-based learning to the entire process - optimising toward replies or retweets, say - and you have a bot that on first glance, and possibly second through fourth glance, is indistinguishable from a real, human shitposter.

It's rather clear to me that the same thing is about to happen to social media. And possibly politics.

In "Accelerando", Charlie posited the idea of a swarm of legal robots, creating a neverending stream of companies which exchange ownership so fast they can't be tracked.

Have you ever joked that you wished you could clone yourself?

Well, it looks like if you're an extremist of any stripe who spends a lot of time on social media, you'll soon be able to fulfil that dream.

The Trollswarm Cometh

Swarms of real life, human trolls have already been able to achieve some remarkable things.

For example, there's the well-known incident where Time's Person Of The Year poll met 4chan. Twice.

But real-life trolls have to sleep. They have to eat. Whilst it might not look like it, they get tired, and angry, and dispirited.

Chatbots don't.

And the only limit to the number of trollbots you can control is the amount of processing power they require. That might initially look like a pretty major limiter, given that machine-learning applications tend to require at least a single graphics card of some power each. But a), thanks to cloud computing that's actually pretty affordable - an Amazon GPU instance on Spot Pricing will cost you $0.13 an hour, or a little over 3 dollars a day - and b) there's no reason that one instance of the trollbot software can't control hundreds of social media accounts, all posting frantically.
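To put numbers on that, here's a quick back-of-envelope sketch. The $0.13/hour spot price is from the paragraph above; the 200-accounts-per-instance figure is a purely illustrative assumption.

```python
# Back-of-envelope cost of running a trollswarm on a cloud GPU spot instance.
# The $0.13/hour spot price comes from the text above; 200 accounts per
# instance is an illustrative assumption, not a measured figure.

SPOT_PRICE_PER_HOUR = 0.13
ACCOUNTS_PER_INSTANCE = 200  # assumed
HOURS_PER_DAY = 24

cost_per_day = SPOT_PRICE_PER_HOUR * HOURS_PER_DAY
cost_per_account_per_month = cost_per_day * 30 / ACCOUNTS_PER_INSTANCE

print(f"Instance cost per day: ${cost_per_day:.2f}")
print(f"Cost per bot account per month: ${cost_per_account_per_month:.3f}")
```

At those assumed numbers, each fake account costs well under a dollar a month to run - cheaper than the electricity for a human troll's monitor.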

So what does this mean?

1: Everyone Can Have Their Own Twitter Mob

Right now, if you want to have someone attacked by a horde of angry strangers, you need to be a celebrity. That's a real problem on Twitter and Facebook both, with a few users in particular becoming well-known for abusing their power to send their fans after people with whom they disagree.

But remember, the Internet's about democratising power, and this is the latest frontier. With a trollbot and some planning, this power will soon be accessible to anyone.

There's a further twist, too: the bots will get better. Attacking someone on the Internet is a task eminently suited to deep learning. Give the bots a large corpus of starter insults and a win condition, and let them do what trolls do - find the most effective, most unpleasant ways to attack someone online.
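The learning loop described above can be sketched as a simple epsilon-greedy bandit. Everything here is a stand-in: the "corpus" is three placeholder messages, and the engagement "reward" is simulated with made-up numbers rather than measured from any real platform.

```python
import random

def pick_message(stats, corpus, epsilon=0.1, rng=random):
    """Epsilon-greedy choice: usually exploit the message with the best
    average engagement seen so far, occasionally explore a random one."""
    if rng.random() < epsilon or not stats:
        return rng.choice(corpus)
    return max(stats, key=lambda m: stats[m][0] / stats[m][1])

def update(stats, message, reward):
    """Record one observed engagement score (replies + retweets, say)."""
    total, count = stats.get(message, (0.0, 0))
    stats[message] = (total + reward, count + 1)

# Toy run: message "B" secretly gets the most engagement, so the bandit
# should converge on it. The reward values are entirely invented.
corpus = ["A", "B", "C"]
true_reward = {"A": 0.1, "B": 0.9, "C": 0.2}
rng = random.Random(42)
stats = {}
for _ in range(500):
    msg = pick_message(stats, corpus, rng=rng)
    update(stats, msg, true_reward[msg] + rng.gauss(0, 0.05))

best = max(stats, key=lambda m: stats[m][0] / stats[m][1])
print("bandit converged on:", best)
```

Swap "placeholder messages" for an insult corpus and "simulated reward" for observed reactions, and this twenty-line loop is the win-condition learner the paragraph describes.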

No matter how impervious you think you are to abuse, a swarm of learning robots can probably find your weak spot.

On a milder but no less effective note, even a single bot can have a devastating effect if handled carefully.

The rule of Internet debate is that, all else being equal, the poster with the most available time wins.

On its own, a bot probably can't argue convincingly enough to replace a human in, say, a Reddit thread on gender politics. But can it be used to draft some posts, bulk out rough comments, fire off short questions that demand longer answers, or otherwise increase the perceived available time of a poster tenfold?

Fear the automated sealion.

2: On The Internet, No-one Knows Their Friend Is A Dog.

In many ways, the straightforward trollswarm approach is the least threatening use of this technology. A much more insidious one is to turn the concept on its head - at least initially - and optimise the bots for friendliness.

Let's say you wish to drive a particular group of fly-fishers out of the fishing community online for good.

Rather than simply firing up a GPU instance and directing it to come up with the world's best fly-fishing insults, fire it up and direct it to befriend everyone in the fly-fishing community. This is eminently automatable: there are already plenty of tools out there which allow you to build up your Twitter following in a semi-automated manner (even after Twitter clamped down on "auto-following"), and Tay was already equipped to post memes. A decent corpus, a win condition of follows, positive-sentiment messages and RTs, and a bot could become a well-respected member of a social media community in months.

THEN turn the bot against your enemies. Other humans will see the fight too. If your bot's doing a half-decent job - and remember, it's already set up to optimise for RTs - real humans, who have actual power and influence in the community, will join in. They may ban the people under attack from community forums, give them abuse offline, or even threaten their jobs or worse.

For even more power and efficiency, don't do this with one bot. One person starting a fight is ignorable. Twenty, fifty or a hundred respected posters all doing it at once - that's how things like Gamergate start.

(And of course, the choice of persona for the bots, and how they express their grievances, will be important. Unfortunately we already have a large corpus of information on how to craft a credible narrative and cause people to feel sympathy for our protagonist - storytelling. If the bot-controller has a decent working knowledge of "Save The Cat" or "Story", that'll make the botswarm all the more effective...)

3: You're A Bot, I'm A Bot, Everyone's A Bot (Bot)

In order to pull all these tricks off, of course, the bot will need a bunch of social media accounts. That would seem like the obvious weak spot: they can just get banned.

Except that if there's one thing a semi-intelligent, almost-Turing-test-capable bot is going to be good at, it'll be generating social media accounts. And even better than that, a swarm of bots will be almost unstoppably good at it.

It's very easy already to create a bot that will sit there patiently generating a history of Tweets - I've done it myself with my anti-filter-bubble bot. And Tweet history, or posting history, is one of the big giveaways of a sockpuppet account: very few people have the patience to build up a convincing history with their sockpuppets. But a bot can solve that. Tay might not be 100% plausible, but is she plausible enough to generate a convincing Twitter history for your new racist-bot? I'd say yup.
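A minimal sketch of that patient history-generating bot, using a word-level Markov chain. The three-sentence corpus here is a toy stand-in for scraped source text, but the mechanism is the same.

```python
import random
from collections import defaultdict

def build_chain(corpus_sentences):
    """Word-level Markov chain: map each word to the words that follow it."""
    chain = defaultdict(list)
    for sentence in corpus_sentences:
        words = sentence.split()
        for a, b in zip(words, words[1:]):
            chain[a].append(b)
    return chain

def generate_post(chain, start, max_words=12, rng=random):
    """Walk the chain from a start word to produce one plausible-ish post."""
    words = [start]
    while len(words) < max_words and chain[words[-1]]:
        words.append(rng.choice(chain[words[-1]]))
    return " ".join(words)

# Toy corpus standing in for scraped fly-fishing chatter.
corpus = [
    "i love fly fishing at dawn",
    "fly fishing is the best hobby",
    "the best rivers are quiet at dawn",
]
rng = random.Random(0)
chain = build_chain(corpus)
history = [generate_post(chain, "fly", rng=rng) for _ in range(3)]
for post in history:
    print(post)
```

Run that on a timer for six months and the account has exactly the thing sockpuppet-hunters look for and sockpuppeteers lack the patience to fake: a long, varied posting history.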

And I'm not the only one. Black-hat SEO marketers have long used software called "Spinners" to create semi-unique pieces of text to post as articles or spam onto forums or comments to generate search engine rankings. I won't link to it here, but the big up-and-coming news in the SEO spinning world is AI, with several products claiming to use Tay-like algorithms to generate much better "spun" content that will pass both human moderator and Google checks.

(To the best of my knowledge no-one's creating an XRumer-like product - a forum / comment posting product - incorporating Deep Learning to optimise for comments that get approved. Yet. Give it five years. To be fair, that might end up being an unexpectedly positive arms race.)

But as I mentioned, a botswarm will be far better. The other big giveaway for fake accounts is that they don't interact with a larger community. Now, a bot on its own can already deal with that to an extent - indeed, the big news in using Twitter for sales right now is AI tools that interact with users before passing them on to a sales team. But a swarm of bots can form its own communities. They can have discussions. They can Like and Comment on each other's posts (particularly powerful on Facebook, where the visibility of a post is determined by interactions from other users).

And as a human, you may not even be aware that in the community you're interacting with on Twitter, fully half the members are bots controlled by a single person. You'll interact back. And that just builds more viability for the bots and whatever their owner's ultimate endgame is.

4: Don't Do That. The Bots Won't Like It.

And here we get on to, in my opinion, the most terrifying use of the trollswarm: controlling filter bubbles.

A straight-up trollswarm is scary and unpleasant, sure, but it's a blunt tool. For maximum effectiveness, what you need is a scowlswarm.

In this case, you-as-bot-owner would never full-out order the trolls to attack. Instead, you just have them disapprove.

You set up a filter to have some of them - not all, just two or three - respond to mentions of your target outgroup with negative comments.

"Do you really read his blog?"

"Personally I find her offensive - don't you?"

"You should be careful about @target_user - didn't you hear about last year?"

You have them monitor for statements made by your target which attract negative reactions, and have your bot amplify that and retweet the statement. You monitor for negative-sentiment messages at the target, and amplify that too. You have them attempt to bait the target into strongly-negative-sentiment statements. Every so often, you have one of the bots outright lie about something bad that your targets did, and the other bots signal-boost it.
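The monitor-and-amplify step above can be sketched with a toy word-list sentiment scorer. A real system would use a trained sentiment model; the word lists, posts, and the `@target_user` handle are all purely illustrative.

```python
# Toy lexicon-based sentiment scorer deciding which posts mentioning the
# target get signal-boosted. The word lists are illustrative stand-ins
# for a real sentiment model.

NEGATIVE = {"offensive", "awful", "liar", "terrible"}
POSITIVE = {"great", "love", "brilliant", "helpful"}

def sentiment(text):
    """Crude score: positive words minus negative words."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def should_amplify(post, target="@target_user"):
    """Boost only negative-sentiment posts that mention the target."""
    return target in post and sentiment(post) < 0

posts = [
    "@target_user is a great poster",
    "honestly @target_user was offensive again today",
    "nothing to do with anyone",
]
boosted = [p for p in posts if should_amplify(p)]
print(boosted)
```

The point of the sketch is how little machinery the scowlswarm needs: the bot never says anything attack-like itself, it just selectively retweets what's already there.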

And the result is that the filter bubble of everyone who interacts with those bots - which are still firing off inspirational memes and sending people supportive messages the rest of the time - becomes tilted more and more strongly toward "this group of people are bad".

This is almost exactly the same effect as the kind of media-manipulation many people are worried Facebook could undertake, but in the hands of any anonymous yahoo who has the skills and patience to set up and train a group of chatbots. And it could be applied to much smaller targets - right down to individual people.

It'll be even more effective on a social media site like Reddit, where a swarm of bots could also upvote and downvote content. In general, so-called "social bookmarking" sites are terrifyingly vulnerable to somewhat-smart bots. It's already almost possible to algorithmically optimise for upvotes (ask any high-karma user how they achieved that karma, and it turns out there's a long list of reliable shortcuts). A few hundred intelligently-run bots could invisibly dominate a significant-sized subreddit, upvoting or downvoting their target content. Provided they don't do dumb things that get them noticed as a voting ring, they'd be very difficult indeed to detect.

As a final note, another alarming use of socialbots on social bookmarking sites would be to burn out moderators. Moderator burnout is already a significant issue as most of them are volunteers: if you have a subreddit that you want to dominate but can't because there's a particularly clued-in mod, just turn up the shitposting bots to 11, blast the subreddit with almost-but-not-quite useful content mixed with some really unpleasant stuff, increase their workload 10-fold, and wait for them to quit.

So there you have it. Welcome to 2018 or so. Half your social media friends are probably robots - and they're probably the half that you like the most. Every so often one of the remaining humans gets driven off the Internet thanks to a furious 24/7 Twitter assault that might be a zeitgeist moment, or might just be a bot assault. And you can't even tell if what you think is the zeitgeist is entirely manufactured by one guy with an overheating graphics card and a Mission.

What do you think? Is there a horrific use of the trollbot I've not thought of? Or a reason this definitely won't come to pass?