Antony Gerace

In March 2018, editors on the English Wikipedia noticed that someone had been making odd changes to the article about Maria Butina, a Russian woman suspected of being an agent of the Kremlin in the US. (She subsequently pleaded guilty to conspiracy against the US.) A user called Caroline456 had deleted incriminating information about Butina and Alexander Torshin, another Russian national accused of trying to funnel Russian money to the Trump campaign through the National Rifle Association.

What set the Wikipedian community’s alarm bells ringing was Caroline456 uploading pictures of Butina that did not appear anywhere else on the internet. Caroline456 had listed the photos’ copyright status as “own work”, and some users began to suspect that the editor was Butina herself. What started as an attempt to verify the copyright status of an image escalated into a full-blown investigation, with the photo file’s metadata lending credibility to that hypothesis.


When concerns about Caroline456’s identity were voiced, a group of anonymous editors came to the user’s defence; their IP addresses were soon traced to the library of the university at which Butina was studying. Everything seemed to point to a deliberate attempt to whitewash Butina’s Wikipedia article.

The story is the closest thing to a smoking gun regarding Russian state agents infiltrating the free online encyclopedia to spread propaganda. But it also revealed something else: Wikipedia’s resistance to disinformation.


Despite massive public pressure after the 2016 US election, most internet giants have failed in their attempts to deal with the “fake news” phenomenon using mainly technological means – with a sprinkling of human moderation on the side. In early 2018, YouTube’s trending videos tab was still promoting conspiracy theories, while Facebook admitted to dealing with divisive ads originating in Iran and Russia as late as the 2018 US midterms. Over the last few years, these companies have increasingly been turning to the nonprofit, volunteer community of Wikipedia to stem the spread of falsehoods on their platforms.

Facebook, for example, has begun presenting Wikipedia-sourced information about publishers on its pages: every story posted by Breitbart on US Facebook now comes with a description, lifted from Wikipedia, which describes it as an “unreliable source”. YouTube has also announced plans to show snippets from Wikipedia to add context to videos about global warming. Having failed to prevent disinformation from entering their platforms, these giants are trying to empower their users by exposing them to Wikipedia-sourced information.

Wikipedia has ended up cleaning up the mess made by wealthy tech corporations. Founded in 2001, the site was chastised in its early days for inaccuracy and vulnerability to informational sabotage. An encyclopedia based on mass participation, devoid of experts and accountability, seemed to represent everything wrong with the internet. Eventually, though, the website emerged as the unlikely champion in the battle against disinformation. How did that happen?

First, Wikipedia’s goal of “organising the sum of all human knowledge” gave its users a sense of social responsibility. The common objective of creating and maintaining an encyclopedia through consensus has succeeded in fostering a community dedicated to the integrity of the project.


Second, while other platforms are mired in debate over the borders between free speech, propaganda and trolling, Wikipedia has taken a different route from the outset: community-driven fact checking. One of the platform’s three core policies is “verifiability, not truth”, which requires that every claim on Wikipedia be attributed to a reliable source. Any question about the meaning of “truth” is deemed moot: either you have a source for your claims, or you don’t. (Wikipedia editors have even debated whether the claim that the sky is blue needs a citation.) The resulting debate is much less politicised than the one taking place on social media. Wikipedia’s community standards have created the conditions for a shared reality.


Finally, Wikipedia’s open-access format – which enables anyone, from academics to enthusiasts, conspiracy theorists and, possibly, Maria Butina, to edit any article – also has the useful side effect of creating radical transparency. Because every single edit, change and discussion happens in the open, Wikipedia editors can keep one another in check and weed out nefarious vandals. Facebook and its peers, on the other hand, remain black boxes, their inner workings and policies remarkably opaque.

So it should come as no surprise that Caroline456 was caught red-handed. Wikipedia’s tedious, self-enforced editorial process has proved a better tool for dealing with disinformation than the algorithms, moderators and fact-checkers that Silicon Valley giants rely upon. Given how things have worked out for them so far, Mark Zuckerberg, Jack Dorsey and the others may want to take a leaf or two out of Wikipedia’s 5,801,130-page book (and that's just the English-language section).
