You can see in the network map above a view of how the headline and Tweet spread: from Vox, to Max’s network, to Ezra Klein’s network and beyond. Note how effectively this moved across several connected but otherwise unrelated clusters of people. This image is the network fingerprint of an effective media hack. Concurrent with this amplification across networks, derivative articles start getting authored, and the part-human, part-algorithmic machine of click-rich re-syndication we have created starts up.

The following day the Washington Post ran an article saying “this makes no sense,” but it was too late. Once these shareable rich headlines move around the web, the truth rarely catches up. As Churchill said a while before the real-time web:

“a lie gets halfway around the world before the truth has a chance to get its pants on.”

The meme was set and already feeding the Google algorithm. Days later, months later, a simple web search offers a prompt that is in essence based on propaganda achieved through a deft media hack. Let me pass the keyboard over to Gilad to tell the story of an even more bizarre hack that took place on September 11th.

Columbian Chemicals Hack by Gilad Lotan

On September 11th, 2014, a fascinating media hack began surfacing online. The hack was so intricately designed that it is clear someone put a lot of effort into planning, seeding, and trying to spread the rumor, using multiple services, including Wikipedia, YouTube, Facebook, and Twitter. We do know that many of the profiles involved, especially on Twitter, are of Russian origin. We still have no idea who specifically was behind the attempt.

The hoax claimed that a chemical factory in Centerville, Louisiana had exploded and was leaking hazardous chemicals everywhere. This began spreading, initially through text message alerts received by citizens of a neighboring town, and then around the web. The first Google search result returned a fake Wikipedia page (now deleted) tied to this supposed explosion. The page linked to a 30-second YouTube video in which a camera was pointed at a TV screen showing a fuming building along with ISIS fighters reading a message. Additionally, a Facebook page of a fake media outlet named ‘Louisiana News’ published a statement claiming that ISIS takes responsibility for the explosion in Centerville. And on Twitter, a full-blown tweet storm emerged, reaching a peak velocity of one tweet per second and using a number of hashtags (#DeadHorse, #ColumbianChemicalsInNewOrleans, #ChemicalAccidentLouisiana, #LouisianaExplosion) that eventually converged into a single hashtag: #ColumbianChemicals. During the peak of the campaign, a photoshopped screenshot of the CNN homepage with an article titled “Plant Explosion in Centerville Caused Panic” surfaced in tweets.

For many reasons, which we’ll outline below, the rumor did not spread far. Even though it was carefully planned and seeded across different platforms, the content generated did not gain enough user trust, and hence no network effects were triggered. Especially in socially networked spaces, where authority and trust are so closely tied to the social graph, it has become increasingly difficult to manufacture a fake or spammy account without instantly raising red flags: why is no one else connected to this person? Why is the account so new?

Let’s take a look at the different places where this hoax was seeded.

Wikipedia

A Wikipedia user by the name of AmandaGray91 created the original entry, claiming the fake explosion was caused by a terrorist attack and linking to a YouTube video as well as Wikipedia’s list of disasters in the United States by death toll. Someone certainly put thought into this. Since Wikipedia saves its full history and all edits, we can see what else this user did on Wikipedia before the hoax: editing pages on Alexander Asov, an “author of books in Russian pseudo-history,” and on Aditya Birla Group, an Indian multinational conglomerate and owner of Columbian Chemicals Co.; and adding punctuation to the page on carbon black, which was manufactured in the Louisiana plant.

Wikipedia’s editors form a global community with very clear rules of conduct as well as an internal authority ranking. As a completely new Wikipedia editor, it is very difficult to simply add a page, especially one depicting an ISIS terror attack on US territory, and expect it to stick around for long. The page was taken down quite rapidly, as users who were led to it from tweets flagged it as potentially problematic.

Facebook

Someone clearly took the time and effort to build up a fake public Facebook page for a non-existent media outlet called ‘Louisiana News,’ seeding it with content. From August 22nd onwards, the page was frequently updated with posts highlighting news events across various topics such as politics, sports, and entertainment. The page gained some 6,000 likes and is still openly accessible. Its last entry reads: “#Breaking news, claiming that ISIS takes responsibility for the explosion in Centerville, LA”, and has hundreds of likes, most likely from automated accounts.

Facebook’s EdgeRank algorithm, which decides which pieces of content are displayed on user timelines, is known to take into account engagement (likes and comments) on posts. Hence it is clear why someone would try to manufacture likes for a specific post. That said, even with a higher EdgeRank score, the post will only be visible in the timelines of users who have already liked or followed the page. In order to get follows, one needs to establish trust: get real users to pay attention to your content and ask to receive more of it. That is not an easily gameable task, and it is effectively the reason this Facebook campaign did not actually spread.
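The exact EdgeRank formula has never been made public, but the dynamic described above can be sketched as a toy model. All the function names, weights, and thresholds below are illustrative assumptions, not Facebook’s actual values:

```python
# Toy sketch of engagement-weighted ranking in the spirit of EdgeRank.
# Every weight and threshold here is invented for illustration.

def toy_edge_score(likes, comments, age_hours, affinity=1.0):
    """Score grows with engagement and decays as the post ages."""
    engagement = likes + 2 * comments  # assumed: comments weigh more than likes
    return affinity * engagement / (1 + age_hours)

def visible_in_timeline(user_follows_page, score, threshold=5.0):
    """Even a high score only matters for users already following the page."""
    return user_follows_page and score > threshold

# Manufactured likes inflate the score...
boosted = toy_edge_score(likes=300, comments=0, age_hours=2)
# ...but a user who never followed the page still sees nothing.
print(visible_in_timeline(False, boosted))  # False
```

The second function is the crux: buying likes raises the score, but the score is only consulted for users already connected to the page, which is exactly why the campaign stalled.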

Twitter

Over the years, we’ve seen a lot of odd things unfold on Twitter, and we must say that this hoax was one of the weirdest we’d ever seen. The Twitter operation for this hoax began on September 10th, a day before the supposed explosion, when thousands of Russian Twitter handles participated in a Tweetstorm, a sudden spike in activity surrounding a certain topic or hashtag. In this case, the Russian Twitter handles all of a sudden posted tweets with the #DeadHorse hashtag, getting it to trend across a number of Russian cities, eventually even in Moscow and Saint Petersburg.

The next day, on September 11th, a completely different group of Twitter handles, predominantly posting in English, started using the #DeadHorse hashtag along with #ColumbianChemicals within the same tweets. For example, @JaneneAngle, a clearly automated account created on September 8th, seriously ramped up activity on September 11th, posting angry text about double standards and corrupt governments, and never posted anything afterwards. There were similar examples, such as @MiaBrowwnn, whose account was also created on September 8th, started posting to the same hashtags on September 11th, and never tweeted anything afterwards. Here’s an image that was at one point retweeted by hundreds of these automated accounts:

If we look at this Twitter handle, @RebekahBENNET, we see it continuing to post content well after the hoax, and then, on one day in October, all of a sudden using a new hashtag: #MaterialEvidence. This is common behavior for online bots, which may switch focus to promoting a new campaign.

Perhaps the most revealing clue that this activity was driven by automated accounts is the source field attributed to most of these tweets. Typically we see ‘Twitter for iOS’ for those using the iOS Twitter app, or ‘Twitter for Android,’ or ‘Tweetdeck.’ But in this case, many tweets come from sources labeled ‘masss post’ and ‘masss post2,’ demonstrating a high likelihood that some broadcast automation tool was used.
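A filter on the source field like the one described is only a few lines of code. The client whitelist and the sample tweets below are illustrative assumptions, not actual data from the hoax:

```python
# Hypothetical sketch: flag tweets whose "source" (client) field is not a
# recognizable consumer app -- e.g. the 'masss post' labels seen in this hoax.
KNOWN_CLIENTS = {
    "Twitter for iOS",
    "Twitter for Android",
    "Tweetdeck",
    "Twitter Web Client",
}

def flag_suspicious_sources(tweets):
    """Return the tweets posted from an unrecognized client label."""
    return [t for t in tweets if t["source"] not in KNOWN_CLIENTS]

# Illustrative sample; the source labels mirror the ones observed above.
sample = [
    {"user": "@JaneneAngle", "source": "masss post"},
    {"user": "@MiaBrowwnn", "source": "masss post2"},
    {"user": "@someone_real", "source": "Twitter for iOS"},
]
print([t["user"] for t in flag_suspicious_sources(sample)])
```

A real whitelist would need to cover the long tail of legitimate third-party clients, but an unregistered label like ‘masss post’ is a strong signal on its own.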

Even with all this activity, the #ColumbianChemicals hashtag was not catching on, and was certainly not trending anywhere. So the master puppeteer ramped up the volume, and all of a sudden, a large number of Russian handles started posting to the hashtag. It looked something like this:

Even with this heightened level of activity, the hashtag was still not trending anywhere.

Social Network Structure

When we look at the network structure (who follows whom) of all the Twitter handles posting to the hashtag, we see two distinct groups emerge:

1. Several groups of Russian bots, some of which are still active to this day. Many of these accounts had been active for a while, many months before the hoax, and at first glance certainly don’t seem automated: they all publish images, conversational text, and links every once in a while. What’s so fascinating is that on September 11th, they all posted one or two tweets in English to the #ColumbianChemicals hashtag and then went back to their typical activity, as if they had been seeded within the network and activated all of a sudden, only to fall back into hiding. Here are a few examples: @Galtaca, @Kiborian, @GelmutKol.

2. Accounts connected to The Times-Picayune and its Internet sister site NOLA.com, posting information such as this, debunking the hoax.

There’s a very important lesson here, crystallized by the network graph to the left. No matter how much volume a campaign generates, in tweets or in Facebook likes, if the messages aren’t embedded within existing networks of information flow, it will be very difficult for the information to actually propagate. In the case of this hoax on Twitter, the malicious accounts are situated within a completely separate network. So unless they attract follows from “real accounts,” they can scream as loudly as they’d like; still, no one will hear them. One way to bypass this is by getting your topic to trend on Twitter, which increases visibility significantly.

Socially networked spaces make it increasingly difficult for a bot or malicious account to look like a real person’s account. While a profile may look convincingly real, with a valid profile picture, human-readable text, and interesting shared content, it is hard to fake its location within the network; it is hard to get real users to follow it. We can clearly see this in the image above: the community of Russian bots is completely disconnected from every other user interacting with the hashtag.
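The structural point, that no propagation path exists from the bot cluster to real users, can be illustrated with a toy follow graph and a breadth-first reachability check. The account names below are made up, and the graph merely mimics the two-cluster shape visible in the image:

```python
from collections import deque

# Toy follow graph mirroring the pattern above: bots follow each other, real
# accounts follow each other, and no edge bridges the two clusters.
follows = {
    "bot_a": {"bot_b", "bot_c"},
    "bot_b": {"bot_a"},
    "bot_c": {"bot_a"},
    "nola_reporter": {"times_picayune"},
    "times_picayune": {"nola_reporter", "local_reader"},
    "local_reader": {"times_picayune"},
}

def reachable(graph, start):
    """All accounts reachable from `start` along follow edges (BFS)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

# No path exists from the bot cluster to the real-user cluster: however
# loudly the bots tweet, their messages cannot propagate across networks.
print("times_picayune" in reachable(follows, "bot_a"))  # False
```

This is the network fingerprint in miniature: the bots can generate any volume they like inside their own component, but without a bridging follow edge, nothing crosses over.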

The same principle holds for Wikipedia, which is even harder to game, as it is easy to identify accounts that are not really connected to the larger editing community. The more time you spend making relevant edits, and the more trusted your account becomes, the more authority you gain. One can’t simply appear, make minor edits on three pages, and then put up a page detailing a terror attack without seeming suspicious.

As our information landscapes evolve over time, we’ll see more examples of ways in which people abuse and game these systems for the purpose of giving visibility and attention to their chosen topic. Yet as more of our information propagation mechanisms are embedded within networks, it will become harder for malicious and automated accounts to operate in disguise. Whoever ran this hoax was extremely thorough, yet still unable to hack the network and embed the hoax within a pre-existing community of real users.

No impact — other than a fascinating story!

So what?

Politicians, brands, and corporations have been hacking media, trying to manipulate and reframe narratives to their own ends, since the beginning of time. Go back to the 1920s and the work of Edward Bernays, the father of PR, who coined the term “public relations” because he thought “propaganda” was too closely associated with the German WWI war machine. Bernays would be fascinated by these hacks and by our media landscape today. But in a sense, manipulation of the message is something we are all familiar with today. Every message we put into the social internet has a grain of optimization in it. And one person’s optimization is another person’s propaganda. So why does this matter? Are these examples, fun as they are, just another set of data points in a long tail of corporate and political manipulation? Let’s pull on a few of these threads: it feels like something different is going on here.