As I have a particular interest in microtargeting in politics, I’ve been throwing myself into the Cambridge Analytica news cycle for the last few weeks.

It seems like some sort of breaking point, both for the cycle of peculiar elections in Western democracies and the rise and rise of tech giants. It’s been hard to disengage. But then I read a piece about the fear of mass media manipulation and how it accompanies each step forward in communication technology. And that made me look for some perspective on this scandal.

First of all, let’s address the issue of effectiveness.

How influential were Cambridge Analytica?

Leading political communication scholar Eitan Hersh dismissed the overblown claims of microtargeting in his book “Hacking the Electorate”, which I referenced last week. In it, he looked at the big-data-enabled efforts of US politicians in the 2012 election. He concluded that campaigns were generally bad at targeting the right people with the right content, and that there was little to no evidence that social-media-focused microtargeting efforts had any measurable effect on voters’ opinions.

When the Cambridge Analytica revelations came out, he declared that there was no need to update those findings — the company’s claims of efficacy were simply sales techniques backed by bad science. And it seems like just about every social scientist out there agrees with him.

The fundamental reason social scientists are so skeptical is that the basis of the Cambridge Analytica method is psychological profiling based on personality types. Multiple studies have shown that personality is not a powerful predictor of political opinions. More than that, actually trying to influence people based on personality hasn’t been shown to work before, even for everyday products. Political choices are much more complicated because they are related to people’s identities, families and culture.

There is some evidence that personal information can be used to serve you more engaging online adverts. But there is no evidence that those adverts can change voting behaviour. Meta-analysis shows that people are slightly persuadable when it comes to ballot measures and primary campaigns. But when it comes to a general election, voting intention can’t really be changed. And that’s why election prediction models are largely based on underlying factors — how the economy is doing or what the job market looks like — rather than on the activities of political campaigns.

One of the reasons people are so skeptical of Cambridge Analytica is that they didn’t just work on Trump’s campaign. They supported both Ted Cruz and Ben Carson in the primaries before they were involved with Trump, whose non-microtargeted campaign trounced all comers in the Republican primaries. As William Davies wrote in the London Review of Books, “if Clinton had won, [Cambridge Analytica] wouldn’t be a story.”

One thing that isn’t clear, because there is no evidence either way, is whether microtargeting increases participation among those who are already right-wing and thereby affects the overall result. Trump’s victory was notable for a rallying of working-class support (a group that is traditionally less likely to vote) and a drop in turnout among BAME voters, who traditionally support Democrats. Microtargeting may have contributed to those shifts, but we have absolutely no proof that it did.

The wider research on the world of fake news and widespread online misinformation is thin. There is, as yet, no consensus even on the basic terminology. Ask ten researchers what an ‘echo chamber’ is and you’ll get twenty different answers. The same goes for ‘fake news’. Is it state sanctioned propaganda or inaccurate stories created by overworked journalists for the 24/7 news cycle or citizen published alternative news and conspiracy theories? In common usage, all of those definitions are forced under one term.

Even the definition of ‘online political conversation’ is disputed. What isn’t political, in the end, when so much of our identities is now regarded as political?

Researchers can’t define the phenomena they’re studying, let alone agree on the causal impact of any of it. Cambridge Analytica might have been influential, but nobody has figured out a way to measure that influence. Any claims about it are, at best, speculative.

The good news? We’ve been through this before

The second dose of perspective comes from our guest this week, Heidi Tworek, Assistant Professor at the University of British Columbia. (She’s the one who wrote that article on the history of mass media manipulation).

People have indulged in panics about communications technology for centuries. It happened with television. It happened with radio. It happened with the telegraph and newspapers and the printing press. The idea that a small cabal of savvy manipulators can use new, barely understood technology to brainwash the general population is one of those patterns that crops up over and over.

Fears of mass manipulation by new media are as old as mass media themselves. Almost every expansion of media or new media technology provoked paranoia about the contagious emotions of “the masses.” The history of these recurrent claims should make us more skeptical of apocalyptic visions of how psychometric targeting will mislead the crowd.

One genuinely new aspect of this panic is that social media lets the wider population quote and share voices that may not be human at all. When mainstream media tries to include the voice of the people, it now does so through tweets, as Tworek wrote in the Columbia Journalism Review:

But using ordinary voices from Twitter can easily backfire. A recent study by two researchers at the University of Wisconsin-Madison found that 32 out of 33 major American news organizations had embedded tweets created by the Internet Research Agency, an organization located in St. Petersburg and backed by Russians linked to Vladimir Putin. This included outlets ranging from NPR and The Washington Post to digital natives like BuzzFeed and Salon.

Individual tweets and entire identities can be faked with unprecedented ease. Targeted disinformation can be authored by individuals looking to prank the wider public (remember LulzSec?) or by state actors looking to undermine democratic stability in rival countries. As the EU Commission recommended the other week, the short answer is that society and the media need to update norms to fit the new reality.

In that, nothing is new. The same adjustment period has been necessary for every leap forwards, and some voices have always reacted by jumping to paranoid visions of mass manipulation. It’s worth remembering what we gain with new media technologies: more plurality of voices, the diffusion of media ownership, and the lowering of costs for both production and consumption of information.

Microtargeting in politics is the price to pay for all that and, if you ask most social scientists, the actual costs are much lower than people fear. If we ever find that it works, really works, then we can regulate against it. We can adjust. We’ve done it before. Over and over and over again.

You know the drill:

Follow us on FB – www.facebook.com/connectedanddisaffected/

Follow us on Twitter – twitter.com/CandDPodcast

Subscribe and leave us a review on iTunes – itunes.apple.com/us/podcast/connected-disaffected/