Republican Party strategists were reportedly anything but impressed when CA came to them at the beginning of the presidential campaign. Mike Murphy, head of the influential Republican PAC “Right to Rise”, for instance, complained in an article for the LA Times that “they were just throwing jargon around,” while Luke Thompson, vice president for politics and advocacy at the Republican analytics firm Applecart, is quoted in the same article saying that CA’s claim to have reinvented political persuasion was based on “ludicrous assumptions and leaps of faith.”

According to Murphy and other experts, CA had very limited knowledge of the fundamentals of American election campaigns and had nothing to show that would have been truly revolutionary. Many of those who have worked with CA on the conservative side in the past are apparently astounded by its claims. The LA Times further reports that a conservative organisation had even issued a warning against CA (then still known as SCL Group). The contents? In a nutshell: “Attention, snake oil salesmen!”

© Channel4/Screenshot

Even the juicy details from Channel4’s undercover investigation raise doubts about the success of the allegedly superior data analysis and targeting. In the secretly recorded meetings, the company’s top brass presented CA above all as a company that investigates opponents and tries to defame them with dubious material — dirty propaganda work, rather than data magic. CEO Alexander Nix and managing director of CA Political Global, Mark Turnbull, boasted of injecting propaganda “into the bloodstream of the internet” and described how their services could include entrapping politicians by offering them bribes while filming the exchange, or by sending “very beautiful” Ukrainian “girls” to their homes. One should assume that a company which truly believes in its ability to persuade voters with “Big Data Analytics” would not need to resort to such dubious tricks.

What The Evidence Says

Statements by political operatives are one thing, but they are far from the only ones doubting the supposed effectiveness of CA. Researchers are sceptical too. One of them is Daniel Kreiss, a professor at the University of North Carolina at Chapel Hill and considered one of the leading experts on data-driven campaigning. “There are many reasons to be skeptical,” Kreiss says when I approach him on Twitter about Cambridge Analytica. “There is little research evidence that psychometric targeting is effective in politics and lots of theoretical expectations that it would not be.”

To understand this statement, it is worth taking a quick look at the exact method Cambridge Analytica claims to use. According to Cambridge Analytica, they were able to build psychographic profiles for millions of Americans: profiles that combine demographic factors such as age, gender and location with character traits such as openness, conscientiousness, extraversion, agreeableness and neuroticism — the traits of the classic “Big Five” model. Political inclinations were then assigned to different combinations of these traits.

© Wikimedia

The character profiles of the allegedly 320 000 (the numbers vary between 200 000, 270 000 and 320 000) original users of the personality test app developed by Cambridge scientist Alexander Kogan — through which Cambridge Analytica had gotten the Facebook data — may be reasonably accurate. This, however, can no longer be said for the remaining 50 million profiles (Kogan himself speaks of 30 million) which CA amassed by hoovering up data from the friends of app users.

For them, character traits were merely modelled, reverse-engineered so to speak, using Facebook likes, true to the motto: “Johnny Walker, a personality test app user, has profile A and likes X and Y. Accordingly, his Facebook friend Mary Mueller, who also likes X and Y, probably also has profile A.”
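The logic of that inference step can be sketched in a few lines of code. This is a toy illustration only — the names, scores, and overlap weighting below are hypothetical, since CA’s actual model is not public:

```python
# Toy sketch of like-based trait inference (all details are assumptions):
# a friend's trait score is estimated from app users who share page likes.

app_users = {
    # name: (set of liked pages, measured "openness" score on a 0-1 scale)
    "johnny": ({"X", "Y", "Z"}, 0.8),
    "alice":  ({"X", "W"},      0.3),
}

def infer_openness(friend_likes):
    """Weight each app user's measured score by like overlap (Jaccard)."""
    weighted, total = 0.0, 0.0
    for likes, score in app_users.values():
        overlap = len(friend_likes & likes) / len(friend_likes | likes)
        weighted += overlap * score
        total += overlap
    return weighted / total if total else None  # None: no overlap, no guess

# Mary never took the personality test, but she likes X and Y, so her
# profile is extrapolated from Johnny (strong overlap) and Alice (weak).
print(infer_openness({"X", "Y"}))
```

The obvious weakness is visible even in the toy: the estimate is only as good as the overlap, and every friend is reduced to whichever app users happen to like similar pages.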

As studies show, even this first step is based on shaky assumptions and is of limited value over time as preferences change. The subsequent linking of these profiles with political tendencies is, however, even less precise — from what we know the “Big Five” can only predict about 5 percent of the variation in individuals’ political orientations. Or as Antonio García Martínez writes in WIRED: “It’s making two predictive leaps to arrive at a voter target: guessing about individual political inclinations based on rather metaphysical properties like ‘conscientiousness’; and producing what sort of Facebook user behaviours are also common among people with that same psychological quality. It’s two noisy predictors chained together.”
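The “two noisy predictors chained together” point can be made concrete with a small simulation. The correlation values below are illustrative assumptions, chosen so that the trait explains roughly the 5 percent of political variance mentioned above:

```python
import random

# A latent trait T drives both the observable likes-signal L and the
# political leaning P. L only predicts P through T, so the correlations
# multiply and the usable signal shrinks at every link in the chain.

random.seed(0)
n = 200_000
r_likes_trait = 0.5   # likes -> personality trait (an optimistic guess)
r_trait_vote  = 0.22  # trait -> political leaning (~5% of variance)

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

T = [random.gauss(0, 1) for _ in range(n)]
L = [r_likes_trait * t + (1 - r_likes_trait**2) ** 0.5 * random.gauss(0, 1)
     for t in T]
P = [r_trait_vote * t + (1 - r_trait_vote**2) ** 0.5 * random.gauss(0, 1)
     for t in T]

# Theory: corr(L, P) ≈ 0.5 * 0.22 = 0.11, i.e. likes explain ~1% of
# the variance in political leaning under these (generous) assumptions.
print(round(corr(L, P), 3))
```

Even granting each individual link a generous accuracy, the chained prediction ends up explaining only around one percent of the variance in the outcome it is supposed to target.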

“There is little research evidence that psychometric targeting is effective in politics and lots of theoretical expectations that it would not be.”

For Jessica Baldwin-Philippi, a professor at Fordham University, who has studied data-driven campaigning in recent years, it’s a reason to remain sceptical. “No one really knows how effective the vast majority of campaign tactics are, and these specific tactics seem significantly untested.” According to Baldwin-Philippi, one could model a ton of ‘personality traits,’ and “maybe they are generally correct,” but it was unclear how, for instance, Facebook “Likes” are reliably associated with character traits, and how these variables, in turn, can be associated with certain messaging strategies and political attitudes. “All of those steps multiply the difficulty to know. It’s broadly been the case that the most productive things to target on are publicly available prior data — voting data and census.” Psychographics? Nothing more than a marketing term.

And Kreiss and Baldwin-Philippi are by no means the only ones who are not convinced by CA’s claims. Rasmus Kleis Nielsen, Professor of Political Communication and Director of Research at the Reuters Institute for the Study of Journalism at Oxford University, is even more blunt: “Cambridge Analytica is a private, for-profit company selling consultancy services, and it is absurd to accept their self-interested claims as evidence of their efficiency,” he explains by email. According to Kleis Nielsen, various forms of microtargeting are useful for campaigns, but the effect should not be overestimated — a finding supported by independent research. “It is not a silver bullet and not a decisive factor in electoral outcomes.”

Results by county of the United States presidential election, 2016 — © Wikimedia

The scientific doubts about psychographic targeting aside, it is generally extremely difficult to influence people and change their political opinions, especially with microtargeting — a fact that researchers have stressed repeatedly in recent years. “Political persuasion is really difficult because so much of the [US] electorate is already sorted into partisan camps,” explains Kreiss. That divide runs so deep that voters’ opinions are very hard to shift, no matter how much effort goes into it. And how people cast their votes also depends, for example, on how they see the current economic situation, on their educational background and on much more. Political advertising has comparatively little influence in this context, as various studies have shown (example one and example two).

“Cambridge Analytica is a private, for-profit company selling consultancy services, and it is absurd to accept their self-interested claims as evidence of their efficiency.”

Finally, in what should put the last nail into Cambridge Analytica’s coffin, we should remind ourselves of how difficult it is to convince people of anything, in particular with advertising. The old hypodermic needle model — a model of communication suggesting that an intended message is directly received and wholly accepted by the receiver — has long been debunked. Instead, as communication researcher W. Russell Neuman argues in his seminal book “The Digital Difference”, communication is fundamentally polysemic: We do not necessarily understand messages, adverts or stories — even if they are tailored to our worldview or personalities — in the way the sender wants us to understand them.

On the contrary, any “media message, intended to be persuasive or otherwise, is not likely to stimulate a singular response, but rather a distribution of responses across a population of those who have encountered the message.” Or to put it more clearly: Just because a micro-targeted message wants voter Molly Average to stay away from the ballot box does not mean that Molly Average will read it that way — or register the message at all; as Neuman finds, we are surprisingly good at ignoring advertising and propaganda. Even if Cambridge Analytica were as effective as it claimed at targeting individuals with tailor-made content, that does not mean the content had any significant effect.

Ultimately, Neuman’s arguments also hint at a more fundamental problem in the entire debate around CA’s methods: We are all too quick to portray humans as “sheeple” who are easily misled, especially when they have made decisions that markedly differ from our own or when we did not like the outcome of an election (the famous third-person effect). But most audience members are far from being the gullible and isolated pawns we claim them to be. To claim the contrary would be to deny them any agency of their own.

Why We Like To Blame Technology

If there is so little evidence that Cambridge Analytica’s methods have had any actual effect, why then are we still having this discussion? As with everything else in this complex story, the answer boils down to a range of reasons.

First of all, we seem to have a tendency to panic over the effects of technology on us. As Heidi Tworek, a professor of history at the University of British Columbia has argued from a historical perspective, the fear of mass manipulation by new media and technologies is almost as old as humanity itself. Similarly, the fear of an “influence machine” that will corrupt us is hardly new. For some reason, panic about the ramifications of technology seems to be hard-wired into our systems.

TV was once seen as manipulating the masses — © Wikimedia

Then there is the economic side to the story. In a nutshell, it pays off to peddle the hype — or at least not to debunk it. Whitney Phillips, an assistant professor at Mercer University who specialises in web culture and the media, argued elsewhere that current “journalism privileges sensationalist framings as it is advertising-based, and so needs to predicate itself on outrage and emotional reactivity.” She stresses that sensationalist narratives feed particularly well into social media hyper-reactivity, which is then problematised further by algorithms floating the most reacted-to stories to the top of trending lists and people’s social media feeds.

Unquestionably, the media are not the only ones who profit. An armada of pundits and airport-book sellers are all too keen to promote the idea that somehow the end of democracy is upon us due to Cambridge Analytica and the like. What easier way to get invited to TV shows and to sell more books than to peddle the story of the demise of the world as we know it?