In an alternative telling, data was still harvested from Facebook and used to microtarget American voters (arguably major problems in and of themselves), but the effectiveness of this strategy is called into question. This more skeptical perspective has been articulated by quantitative social scientists such as Dean Eckles and Brendan Nyhan, and by political communications scholars such as Daniel Kreiss, who consider this kind of psychometric targeting for political purposes to be more “snake oil” than science.

These two competing accounts of Cambridge Analytica’s role in the 2016 Brexit referendum and US presidential election underscore the need for a more nuanced conversation about the true impact of microtargeting on political behaviour, and about what we (governments, political parties, platform companies, citizens) should be doing in response.

This debate was re-ignited last week with the publication of a study in the Proceedings of the National Academy of Sciences (PNAS), which analyzed the Russian Internet Research Agency’s troll activity on Twitter and found no evidence that it “significantly influenced ideology, opinions about social policy issues, attitudes of partisans toward each other, or patterns of political following on Twitter.” What’s more, the article’s authors argue that one reason for this was that the troll accounts were predominantly interacting with people who were already highly polarized. Their conclusion is therefore counterintuitive: online echo chambers may have served as a containment mechanism for trolling content and not actually changed anyone’s views at all.

This study piqued my interest because over the course of the recent Canadian federal election, I directed a large-scale online media monitoring and survey project studying the spread of disinformation and its impact on voter behaviour. Our team published seven reports during the election campaign, and our final analysis will be published in January. One of the phenomena we observed aligns with the findings of the new PNAS study. In short, the presence of clearly defined echo chambers in the Canadian online conversation may have inoculated wider communities against the spread of disinformation and false content, which may have mostly been seen by those already predisposed to its message. As a result, we think this content likely didn’t change the voting behaviour of citizens.

But here is where the debate gets more complicated. Taken as the results of isolated surveys and studies of discrete social media exposure, these findings may be sound. But they don’t paint the full picture. In response to the PNAS study, a number of scholars and researchers, such as Siva Vaidhyanathan, Johan Farkas and Renee DiResta, and journalists, such as Caroline Orr, have pointed out that researchers can access only a very limited slice of the media a person is exposed to, over a specific period of time, and can capture only a narrow range of behavioural shifts. Such studies also rely on variations of what communications scholars call the “hypodermic needle” theory of media influence: the belief that media messages can be ‘injected’ into the minds of passive audiences. Perhaps more importantly, these studies might miss the actual intent of disinformation campaigns: to divide, inflame, engage, and entrench pre-existing biases and polarizing beliefs.