It has been little over a year since Donald Trump stunned the world by becoming US president. His election marked a severe upset to conventional wisdom, with his startling use of social media drawing particular attention.

A new nadir came last week, when Trump shared videos from the far-right group Britain First via Twitter. These were also shared by conservative commentator Ann Coulter, one of only 45 people the president follows on Twitter. When asked by the BBC’s Nick Robinson to explain why the president might have retweeted videos from a far-right group, Coulter responded that Trump could not be expected to check the biography of people he retweeted and that “the video is the video, it’s not a faked video”.

This ugly incident perfectly illustrates a deeper problem: the alarming ease with which social media, and the internet as a whole, can be abused and used to prop up dubious narratives.

This abuse is a ubiquitous problem, but perhaps one that might have surprised the pioneers of the web. The early days of the internet promised a mind-expanding utopia, where we could freely exchange new ideas and contemplate other points of view. Even in those days of heady optimism, there were already a few academics who worried that this vision pivoted on too high-minded a picture of human nature. In 2017, after a year of revelations involving cyberbullying, troll factories, campaigns of misinformation and more, we should urgently be questioning our use of online space. And to counter these threats we need to examine the greatest one: our own cosy online bubble.

In 1996, MIT researchers Marshall Van Alstyne and Erik Brynjolfsson warned of a potential dark side to our newly interconnected world:

“Individuals empowered to screen out material that does not conform to their existing preferences may form virtual cliques, insulate themselves from opposing points of view, and reinforce their biases. Internet users can seek out interactions with like-minded individuals who have similar values, and thus become less likely to trust important decisions to people whose values differ from their own.”

Van Alstyne and Brynjolfsson dubbed this fracturing of the online community Cyberbalkanization. Ominously, they warned that “the loss of shared experiences and values may be harmful to the structure of democratic societies as well as decentralized organizations.”

Their foresight appears to have been uncomfortably close to the mark. An analysis of the 2016 US presidential election by the Columbia Journalism Review noted that “… a right-wing media network anchored around Breitbart developed as a distinct and insulated media system, using social media as a backbone to transmit a hyper-partisan perspective to the world.” The consequence? “This pro-Trump media sphere appears to have not only successfully set the agenda for the conservative media sphere, but also strongly influenced the broader media agenda, in particular coverage of Hillary Clinton.”

Of course, some degree of ideological bias is inescapable, and can hardly be blamed solely on the internet. Newspapers, for example, have always catered to their audience. Nowhere is this clearer than in the UK, which has arguably the most partisan press in the world. But, whatever editorial leanings publications may have, a robust legal – and in some cases regulatory – framework compels media outlets to at least report facts when it comes to news. Whatever the faults of the mainstream media, they do not have carte blanche to concoct fictions, libel or slander.

But what the internet has done is facilitate the emergence of alternative news sites. And here, factual accuracy can no longer be taken for granted. Untethered from journalistic ethics, some outlets thrive by telling their audience precisely what they want to hear. And social media allows the rapid growth and spread of everything from the ludicrous Pizzagate conspiracy theory to rampant climate-change denial – a phenomenon that exists across the political spectrum.

This proliferation of urban myths and conspiracies would perhaps be laughable if it weren’t so uniquely dangerous. An estimated 61% of millennials get their news primarily through social media. But in the process, we trigger algorithms that curate our feeds. These cherry-pick things with which we are likely to agree and jettison information that does not appear to fit our preferences – often at the cost of accuracy and balance. As the Knight Center observed in 2016, “… through social media, professional and other qualified news is mixed with un-checked information and opinions. Rumours and gossip get in the flow.” It also noted that this tended to increase political polarisation, and warned: “people may be losing the skills to differentiate information from opinion.”

So why does this happen? Part of the problem is our reliance on internet giants – and their vested interest in rewarding us with what we like to see. Everything from our Google searches to our Facebook news feeds are tailored to keep us engaged and generate profit. But while there is limited evidence that filter bubbles might reduce diversity, the data suggests that we play the lead role in driving our own polarisation. We are much more homogeneous than we think, and tend to interact more with people who echo our beliefs. A recent study in Science found that we tend to engage most with information that flatters our ideological preconceptions, and that this accounted for much more selection bias than algorithmic filtering.

Such findings probably won’t be overly surprising to psychologists, who have long been aware of the human tendency towards confirmation bias. But such polarisation has consequences far beyond politics – it has alarming implications for science, and our collective wellbeing. For example, climate-change denial is strongly linked to political belief. Yet despite the overwhelming evidence of anthropogenic climate change, the proliferation of outlets publishing claims that attempt to counter the scientific consensus means those unwilling to face reality have no shortage of media sources to bolster their view – to our collective detriment.

These divisions run deep, creating walled communities that reinforce their own beliefs in a feedback loop. A 2015 study in PNAS found that misinformation flourished online because users “… aggregate in communities of interest, which causes reinforcement and fosters confirmation bias, segregation, and polarisation”. These online echo chambers cement dubious notions, giving them an air of legitimacy and fuelling an increasing separation from reality.

Targeted individuals (TIs), for example, believe their every action is being shadowed by some sinister collective, and convene online to discuss it. Many claim to hear sinister voices in their heads, suggesting delusional disorders might be at play – a view supported by research to date. Yet on forums for those affected, the strongly enforced message is that those suggesting a psychological cause are agents of deception. As psychologist Dr Lorraine Sheridan laments, “there are no counter sites that try and convince targeted individuals that they are delusional. They end up in a closed-ideology echo chamber.” With victims discouraged from seeking help, tragic consequences can ensue. In 2014, Myron May uploaded a video to YouTube outlining his agony as a TI, hours before opening fire at Florida State University and dying in a shoot-out with police.

Echo chambers abound for many other conditions which are not medically recognised, from chronic Lyme disease to electromagnetic hypersensitivity. But perhaps most worrisome is the advance of anti-vaccine narratives across the web. The explosion of dubious sources has allowed them to propagate wildly, undeterred by debunking in the popular press. Consider the current drastic fall in HPV vaccine uptake in Ireland, driven by anti-vaccine groups like REGRET despite the vaccine’s life-saving efficacy. While organisations including the Health Service Executive have valiantly tried to counter these myths, such claims are perpetuated across social media with little to stop them.

It doesn’t have to be this way. The echo chamber may be comforting, but ultimately it locks us into perpetual tribalism, and does tangible damage to our understanding. To counteract this, we need to become more discerning in analysing our sources – something we are currently poor at doing. More difficult, perhaps, is that we must learn not to cling to something solely because it chimes with our beliefs, and be willing to jettison any notion when it is contradicted by evidence – no matter how comforting the disproven idea may be. As the great physicist Richard Feynman once observed, we ourselves are “the easiest person to fool”. This adage should never be far from our minds in our interconnected world. From the dying embers of 2017, we must resolve to make 2018 the year of questioning not only our opponents’ sources, but our own.