Thanks to the internet, the marketplace of ideas is more open and democratized than ever before. Thanks to social platforms, it’s also been rigged.

A hashtag analysis by multimedia artist Erin Gallagher.

Brad Parscale, the former digital director of the Trump campaign, was recently asked about his retweet of @TEN_GOP, a Twitter account that appeared to belong to the Republican Party of Tennessee but turned out to be run by a notorious Russian troll factory. “Yes, I feel bad that it was a — it was not a Tennessee account — that I got fooled that it was a Tennessee GOP account,” Parscale said. Here’s the message he retweeted: “Thousands of deplorables chanting to the media: ‘Tell The Truth!’ RT if you are also done w/ biased Media!”

Forgive Parscale for not seeing the Russian hand behind the account. Many news outlets and others amplified that account’s messages, sometimes to support them, sometimes to debunk them, and sometimes to cite them as reflective of the views of some Americans. That last point is one of the most important to reckon with. The account, like the many others identified by congressional investigations on Twitter, Facebook, and Instagram, blended in with the flow of content from Trump supporters. These accounts didn’t set the agenda; they amplified and mimicked what was already being shared, stacking more wood atop an existing bonfire of partisanship and social division.

And it wasn’t just pro-Trump content. For example, can you tell which of these Bernie Sanders posts is from a Russian troll page, and which is American?


Or how about the similarity of these posts about Bill Clinton?

The Russian effort exploited one of the great promises of social platforms — a level playing field — to blend in with other content being pushed out during and after the election. Russian propaganda mixed with an avalanche of hyperpartisan political content, which itself inspired fabricated stories from fake news publishers, which were in turn copied and pushed out by hundreds of young Macedonian spammers. These messages, stories, and memes traveled in the very same containers and along the very same pathways as their legitimate counterparts, across platforms like Twitter and Facebook.

These platforms blur the lines between people, entities, and types of content. Accounts can be people, companies, or governments. Multiple Facebook pages or Twitter accounts can be run by the same people, but you’d never know it to look at them. A tweet or Facebook post can be turned into an ad, which can then accrue additional reach as people engage with it in a genuine way. Everyone is here, and anyone can be anything! Fittingly, it brings to mind the title of Peter Pomerantsev’s book: “Nothing Is True and Everything Is Possible: The Surreal Heart of the New Russia.”

It's not just about real vs not real. It's about a "flat space" where art/people/cities/businesses all same account… https://t.co/hqF1Ic1aKm

If you can’t tell whether a Facebook or Twitter account is run by an American, a Macedonian spammer, or a Russian troll, then that’s great news for the Macedonians and Russians, or others seeking to push false information on social media. The fact that so much attention could be harvested on social media by fostering division, confusion, and conflict speaks volumes about American politics and society — but also about these massive platforms that cloak themselves in the values and talk of liberal democracies.

One of the unintended consequences of the so-called “flattening” effect of platforms is that, by ostensibly putting everyone on the same level, they empower those who become experts at gaming the system. By democratizing media on platforms that reward pure attention capture, they enable manipulation on a profound scale. Thanks to the internet, the marketplace of ideas is more open and more democratized than ever before. Yet thanks to social platforms, it’s also been rigged to reward those who can manipulate human emotion and cognition to trigger the almighty algorithms that pick winners and losers. “Whatever piece of content, however brilliant or vile, that received an escalating chain reaction of user engagement would receive instantaneous, worldwide distribution,” wrote former Facebook manager Antonio García Martínez.

Of course, the previous system wasn’t perfect, either. Before the internet, and especially before social platforms, the media was dominated by large entities that operated the massive production and distribution systems required to gain reach: satellites, transmitters, printing presses, and so on. A relatively small number of people and companies dictated the news and information available.

These entities still have advantages on social platforms: They can more easily attract followers thanks to established brands, and they’re often given a leg up in the form of verified accounts and partnerships with the platforms themselves. But in the end, they still have to go toe-to-toe with Macedonian publishers who don’t care whether a story is true, only that it performs well on Facebook. They have to compete with Russian information operations that have budgets to spend, dedicated trolls working around-the-clock shifts (just like in a newsroom), and social divisions to mine.
Layer in algorithmic filtering that promotes the content that generates the most engagement and you have what New York Times media writer John Herrman refers to as “The Big Huge Black Box Attention Market.”


A shift in the market

But now, as a result of the effectiveness of these very same trolls and spammers, we are seeing the early stages of a major shift in the attention market away from the level-playing-field concept the platforms have always espoused. Instead of leaving it to opaque algorithms to determine what gets more attention, Facebook, Google, and Twitter are now more publicly putting their thumbs on the scale.

Facebook, Twitter, and Google are highlighting “trust symbols” in news articles to signal to readers which outlets may be more worth your attention. This follows other attempts to show more contextual information for links, and to flag and algorithmically downgrade content deemed false by fact-checkers. Twitter is rebooting its account verification program, in the process taking the check mark away from white nationalists, while warning that more action is to come. (Which naturally means more questions about its actions. For example, Twitter recently removed the check mark from an NBA player’s account.) Google says it will be “carefully curating” the results that show up in its “Top Stories” section after misinformation repeatedly made its way into those coveted spots during recent breaking news events. Google’s YouTube is also cleaning up its search and recommendation results for news events after similar failures.

An example of the contextual information about a news story that is now being shown by Facebook.