So there’s this guy. Let’s call him James. He considers himself “fairly ordinary” and is approaching 30. He seems articulate, intelligent, well socialised. He describes his politics as “reasonably left-leaning”. There’s a good chance you’re similar to him: every day he tends to his office job, works out, sees friends, comes home. Then, should the mood strike him – and it often does, compulsively – he may log on to one of his 11 accounts on Reddit. Each one is tailored to a specific persona or ideological bent: he could assume the character of a middle-aged mother and rabid anti-vaxxer, a nutty leftist ideologue, an arch-conservative. On occasion he adopts the moniker “Jewliar Gaglard”.

James has a singular talent for antagonising others. Over time it has become something of a hobby, this habit of carefully laying out the bait, waiting for a reaction, performing these acts of micro-violence – “trolling”, as such behaviour is known. And, like a true hobbyist, he is both obsessed and driven. The goal is simple: to bait other users with wilfully antisocial behaviour. If they retaliate, all the better. He says the hate he receives gives him a “weird delight”; his anti-vaxxer persona often receives death threats. He does, however, draw the line at commenting on the relationship subreddit, where people reveal vulnerable, intimate information about themselves, and is quick to delineate trolling from bullying.

His most popular post in the Australia subreddit was one where he satirically called for the assassination of Tony Abbott. “I like to see where people’s prejudices will lie, how far left I can push a discussion,” he says. He fretted briefly about surfacing on the Australian Federal Police’s radar, but then realised his post had become the most up-voted comment before it was deleted. “That was a weird moment,” he admits. “This is what people think is acceptable.”

There are probably many people who would view James’s behaviour as offensive or churlish. He can’t pinpoint when he became a troll, though he suspects the seeds of antagonism were sown in his teens as a player of games such as Counter-Strike. “I would always find the amount of fun I was having playing these games would be inversely proportional to the fun that everyone else seemed to be having,” he says. So he would “grief” his teammates – sabotage their game play, be a nuisance – and was kicked off many game servers. Things didn’t escalate until he was studying for his master’s degree, when, suffering from “excruciating writer’s block”, he was advised by his academic adviser to practise writing. He did so in the form of posting “inane and inarticulate” comments on Reddit. Anonymity is what gives life to trolling. “It’s really liberating, this way to like de-individuate yourself,” James says. He no longer comments on YouTube, as the service is linked to users’ Google accounts.

From the standpoint of the sites being trolled, combating such behaviour with comment moderation and the like is impossibly time-consuming. Perhaps inevitably, ways of automating resistance to antisocial online behaviour are being considered. An algorithm recently developed by Stanford University’s Justin Cheng, Cristian Danescu-Niculescu-Mizil and Jure Leskovec aims to identify and weed out this kind of antisocial behaviour in online communities. Because the definition of trolling has morphed over time – once referring to antagonistic posts designed to provoke, but more often now used to encompass behaviour that could also be categorised as cyberbullying – Cheng says he and his team of researchers “rely on moderators of a community to tell us what kind of users are undesirable”. He says that “by studying users who are subsequently banned from a community, we can implicitly study antisocial behaviour without needing to create an all-encompassing or overly specific definition”.

The algorithm was tested on three US websites – conservative opinion site Breitbart.com, news site cnn.com and gaming site ign.com – and correctly identified instances of trolling on four out of five occasions after analysing between five and 10 posts. A classic telltale sign of a troll often lies in the poor quality of their writing – sloppy punctuation and overzealous capitalisation are dead giveaways. But this is by no means foolproof. For one thing, the algorithm is prone to false positives, since it effectively penalises poor grammar. It would also have a harder time discerning trolls who are more literate and self-aware. “One big takeaway is that the community plays a significant role in making trolls who they are,” says Cheng. While some do “enter” a community as fully formed trolls, the behaviour and writing of many others deteriorate over time. Which is probably why Reddit’s positive feedback loop lures trolls back to the lair like moths to a flame.
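The researchers’ actual model isn’t reproduced here, but the idea can be sketched crudely: score a user’s first handful of posts on writing-quality proxies such as capitalisation and punctuation, then flag the user if the score trips a threshold. Everything in the sketch below – the feature set, the thresholds, the function names – is an invented illustration, not the Stanford team’s method, which learned its signals from moderator ban records rather than hand-set rules.

```python
def post_features(posts):
    """Aggregate crude writing-quality signals over a user's posts.

    These three features (capitalisation ratio, exclamation-mark
    density, average post length) are illustrative assumptions only.
    """
    text = " ".join(posts)
    letters = [c for c in text if c.isalpha()]
    caps_ratio = sum(c.isupper() for c in letters) / max(len(letters), 1)
    exclaim_ratio = text.count("!") / max(len(text), 1)
    avg_words = sum(len(p.split()) for p in posts) / max(len(posts), 1)
    return [caps_ratio, exclaim_ratio, avg_words]


def is_likely_troll(posts, caps_threshold=0.3, exclaim_threshold=0.02):
    """Flag a user whose early posts trip the quality heuristics.

    The thresholds are made up for demonstration; a real system would
    learn weights from labelled ban data instead of hard-coding them.
    """
    caps_ratio, exclaim_ratio, _ = post_features(posts)
    return caps_ratio > caps_threshold or exclaim_ratio > exclaim_threshold


# A shouty account is flagged; a calm one passes.
print(is_likely_troll(["WAKE UP SHEEPLE!!!", "THIS IS ALL A LIE!!"]))
print(is_likely_troll(["I think the article raises a fair point.",
                       "Agreed, though the data is thin."]))
```

Note that the false-positive problem the article raises falls straight out of features like these: a sloppy but good-faith commenter trips the same thresholds as a troll, while a literate, self-aware troll sails through.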

Such validation is intoxicating, but being a voyeur is a more attractive proposition: you get the cheap thrills with reduced complicity. It is perhaps surprising how easily you can find yourself engaging with the more obnoxious, scandalous or seamy sides of the internet. I remember browsing threads on 4chan (particularly /b/, the “random” anonymous imageboard) for the first time about 10 years ago, and found myself alternately in awe of and repelled by humanity. It was possible to witness something genuinely creative alongside vitriolic and bullying screeds. It managed to be stupidly funny and shocking and unapologetically misogynistic, which is what you’d expect from a community predominantly comprising teenage boys. It was on 4chan that I first understood memes could be both generative and degenerate; it was one of the few places where the collective id always trumped individual ego.

I never posted on the boards but I soon found myself talking about memes in face-to-face conversations, bewildering others with LOLspeak, sending unsuspecting friends disguised links to shock sites such as Goatse, which featured a confronting close-up photograph of an old man’s horrendously dilated anus – like an extreme version of “rickrolling”, misleading people into clicking on a link to a video of Rick Astley’s ’80s hit “Never Gonna Give You Up”. My other modus operandi was to shock by way of community service: writing “Don’t Google such-and-such” is a great way to get people to do exactly that. Like James, if asked I would struggle to fully explain this behaviour to others. I was hardly a conscientious uni student and had a lot of idle time, but once I started to develop “ambitions”, and mentally calculated the opportunity cost of trawling through 4chan, I lost interest. Behaving regressively as an adult – whether through deliberate acts of antagonism or eating ice-cream for breakfast – is pleasurable precisely because it’s indulgent and shameful. And that’s the beauty of being online: you can violate “real-world” norms while existing within what Amazon CEO Jeff Bezos calls a “regret minimisation framework”.

Some trolls conceive of themselves as iconoclasts. They possess a kind of zeal born from a specific kind of entitlement: the absolute rights of the individual over all else. They often see conspiracy where others might see, say, a functioning state apparatus or a reasonable demand for decorum. Perhaps it’s no coincidence that libertarianism often feels like a kind of trolling. But London-based editor Simon Collinson, an ardent fan of the now-defunct “Fuck You and Die” (FYAD) subforum on the comedy site Something Awful, believes trolling can be a kind of “performative art form”. He describes its culture as “a kind of freewheeling meta-satire”, which has informed countless memes and the culture of “Weird Twitter”, an unofficial community of absurdist, verbally experimental tweeting. (Sample tweet from former FYAD member @dril: “my godfather died of urine poisoning while cleaning out a mcdonalds playplace tube and that’s why drama makes me upset.”)

In this light, trolling can be a creative enterprise. Sometimes I even welcome it, especially when you consider how corporatised the web has become, dominated by tech oligopolies that turn profits by mining and selling our data. In an attempt to explain the concept of corporate personhood, Mitt Romney once proclaimed that “corporations are people”, but increasingly the inverse is true. People are becoming corporations of their own as they “unlock their value” and work towards “maintaining their personal brand”. A few start-ups in the US even offer “human capital contracts” that permit you to sell stocks in yourself – an IPO for individuals.

Civility is the first casualty of anonymity. But authenticity, or at least the kind demanded of online identity, is its own form of tyranny. Identifying trolls by an algorithm might make moderation easier, but trolls have a way of returning: they adopt new names, schticks, IP addresses. We won’t see an end to trolling, in one form or another.

Just recently a group calling itself the Assange Shuffle Collective claimed responsibility for defacing a large electronic billboard in the Buckhead neighbourhood of Atlanta, Georgia, with, you guessed it, the picture from Goatse. “We didn’t realise that Buckhead was an incredibly affluent neighbourhood,” a representative wrote on Reddit, “which makes the whole thing terrifically good fun. Burn the rich.” Banning comments is one thing, but how do you universally moderate human instinct? Then came a warning: “Buckhead was far from the only place that had some gaping holes exposed!”