In my exploration of “fake news”, I’ve found some troubling things. And it’s not just the rightwing news network that’s worrying. I’ve recently gone back and taken a preliminary look at the leftwing media ecosystem, trying to map the hyperlinks between these sites. I’m not trying to establish causation or assign blame for the kinds of content these sites circulate – there are plenty of other people willing to do that. What I’m really looking for is a way forward.

I’m primarily interested in the larger network that has enabled fake news to become such a salient topic. What I’ve found most troubling about fake news so far isn’t the factual errors, the misinformation, or the propaganda involved. It’s not the politics either. And no, it’s not Trump.

What’s scary about fake news is how it is becoming a catch-all phrase for anything people happen to disagree with. In this regard, fake news is sort of the stepbrother of “post-fact” and “post-truth” – though not directly related, they’re all part of the same dysfunctional family.

Platforms such as Facebook and Twitter have been accused of being responsible for the result of the US election, the Brexit referendum outcome or events such as Pizzagate – which led Hillary Clinton this week to describe fake news as “a danger that must be addressed”. The worst part of this debate has been obscured by politics-as-usual, techno-dystopian Fahrenheit 451 tropes – and to some degree, more misinformation.

Reality bytes

Did the sequence of events leading up to the 4 December #Pizzagate incident in Washington, DC mark the point when fake news became real? I think not. Fake news has been real since we’ve had the capability to communicate language and tell stories. It’s an unfortunate reality that news reporting is often at odds with the interest trifecta of politics, profits, and public opinion.

What’s changed is the internet, which has altered the scale of the fake news problem, taking it to another level. While fake news might have been less visible in the past, it has always been with us. Where we might find Twitter bots today, we’ll find AI-powered virtual assistants and ubiquitous natural language interfaces (eg, Alexa, Siri, and Google Home) tomorrow.

Fake news will be our virtual friend

In some ways, we’ve already arrived. Is it fake news when Google Maps fails to provide us with the fastest route to a destination? Do we cry “fake news” when a deceptive review on Amazon influences our decision to buy an inferior product? What about when we go back after a negative experience and discover biased reviews on Yelp?

Fake news is more about what we can confirm as real than what we can identify as fake. News is the fabric that weaves together our realities, and Google, Facebook, and Twitter – through always-on phone screens, activity trackers, and 24/7 GPS and indoor Bluetooth trails – represent our interface with this brave new world.

As global technology companies move forward with solutions to protect us – and their advertising revenue – from the scourge that is fake news, they must ensure that the smaller, less visible, alternative news outlets are not caught in their operational cleansing.

Independent media that seek to distribute their own news content are already challenged by premium content delivery systems such as Facebook’s Instant Articles and 360 Video, and Google’s AMP. The industry’s filtering response to fake news could signal the end of legitimate news outlets that make an effort to draw attention to issues they feel are underrepresented or intentionally suppressed by the mainstream media.

The new(s) pornographers

Fake news is a lot like pornography – especially in terms of how gatekeepers classify certain content (and known sources of content) they deem unsuitable for their audiences. Take, for instance, the Pulitzer prizewinning Vietnam war photo removed from Facebook. If a combination of human and machine detection has difficulty differentiating between child pornography and Vietnam war images, wait until we start pre-filtering (ie, preferentially censoring) news based on issue-based framing and community self-reporting.

Fake news has certainly been attracting attention, including that of national policymakers. Marsha Blackburn, an American congresswoman, has gone so far as to imply that internet service providers should be held responsible for taking down fake news, saying: “If anyone is putting fake news out there, the ISPs have the obligation to in some way get that off the web.”

“In some way” is the key phrase here, but to be fair, Blackburn also suggested that it’s time for platforms such as Facebook to look into having human editors – and we know how that’s been going recently.

Yet hiring an editorial team to moderate content is in direct opposition to the hands-off algorithmic meta-business models of most online companies. Why? Because they primarily sell people’s attention. Facebook has emphasised that it is not – and never plans to become – a media company.

Is there a practical solution to fake news? I can’t say. But I can see where we might be headed: the suppression of alternative voices and the censorship of content that addresses certain issues.

In the 2016 infowars, if we aren’t vigilant, the result of fake news is likely to be yet another layer of filtering. And this time around, the filters won’t be to segment audiences for advertising purposes or to target voting electorates; it won’t be to display the news articles, “likes” and intra-thread @replies that algorithms think we want to see first.

The filters in the future won’t be programmed to ban pornographic content, or prevent user harassment and abuse. The next era of the infowars is likely to result in the most pervasive filter yet: it’s likely to normalise the weeding out of viewpoints that are in conflict with established interests.

This isn’t just a problem limited to the centre, the left, or the right. Rather, this is a new reality. So, as everyone barricades themselves further into algorithmic information silos, encrypted messaging services, and invite-only social network sites, it’s at least worth a thought. In the coming decade, AI-powered smart filters developed by technology companies will weigh the legitimacy of information before audiences ever get a chance to determine it for themselves.