All hell has broken loose about the link between Cambridge Analytica, a British marketing firm that the Donald Trump campaign hired to drum up votes, and Facebook, which might have improperly supplied Cambridge Analytica with valuable information about 50 million American voters in 2014.

A whistleblower, Christopher Wylie, who left Cambridge Analytica that same year, is now spilling his guts about the role he played in the data transfer. As a result, Facebook’s stock has shed nearly $50 billion in value, Facebook investors are suing the company, and the Federal Trade Commission (FTC) has opened an investigation.

Should we care about any of this? Not really.

I conduct scientific research on online manipulation, and, at this point, I probably know more about it than anyone else in the world. Online manipulation of our thinking, opinions, purchases and votes is real, powerful and scary; the numbers are mindboggling, in fact. But Cambridge Analytica is not the problem, mainly because the methods it uses to manipulate are not very powerful. Compared with the real threats out there, they are, in fact, trivial.

Cambridge Analytica’s main method of influence was to send people targeted ads on Facebook, just as thousands of other companies do every day. A targeted ad is one designed to capture the attention and clicks of particular people. The more you know about your audience, the more successful you will be in designing an ad that will draw their clicks, and those clicks in turn will drive people to web pages that contain persuasive content.

Cambridge Analytica claims to have purchased 5,000 “data points” — that is, digital facts — about every voter in America before the 2016 election, as well as to have employed new psychometric techniques to figure out how best to influence those voters. Using Facebook’s Ads Manager — as thousands of companies do each day to match up their products with potential buyers — Cambridge Analytica’s data and methodology allowed the firm to display its ads to hand-picked Facebook users, and that, in turn, should have increased what marketers call the “clickthrough rate” (CTR) — the proportion of people who click on an ad.

Why is this no big deal? Because everybody does it — not just vendors trying to sell us their wares, but all of the political campaigns. Facebook even brags to its potential political advertisers about its power to “drive a successful campaign” and target “constituents and supporters.”

During the 2016 campaign, Hillary Clinton had far more powerful digital tools at her disposal than Trump did. To give you just one example, in 2015, Google czar Eric Schmidt — who, at one point, had offered to oversee Hillary’s digital campaign — set up a secretive tech company called The Groundwork, the sole purpose of which was to make Hillary president. It was staffed by many of the same people who served on Obama’s 2012 tech team, which was also supervised by Schmidt.

5,000 data points? P’lease. The Groundwork had access to Google’s entire database — hundreds of thousands of digital facts about each of us, including our search histories and emails — so much information, in fact, that Google could likely have computed the election results days before Election Day. (Although Google obviously screwed up on its Electoral College calculations, Clinton won the popular vote by nearly 3 million, almost certainly with Google’s help.)

Cambridge Analytica’s manipulations are trivial, in short, because they take place in a crowded, competitive marketplace of influence. This was explained last year in a persuasive essay by Frederike Kaltheuner of Privacy International. Kaltheuner asked: “Did Cambridge Analytica influence the Brexit vote and the US election?” Her answer: “This is very unlikely. It’s one thing to profile people, and another to say that because of that profiling you are able to effectively change behavior on a mass scale. Cambridge Analytica clearly does the former, but only claims (!) [sic] to succeed in the latter.”

Competition is one of three reasons why Cambridge Analytica’s methodology is weak. The second reason is visibility: While it’s true that people are unaware of how much data are being collected about them online, they can see those targeted ads, and people are so bombarded with ads that they generally ignore them. Overall, the CTR for online display ads is just 0.05 percent (5 clicks per 10,000 impressions), and even for highly targeted ads on Facebook, the CTR is generally well under 2 percent. That means more than 98 percent of users don’t click.
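The arithmetic behind those rates is simple. A minimal sketch, using the figures from the paragraph above plus an illustrative number (150 clicks) for the targeted case:

```python
def ctr(clicks: int, impressions: int) -> float:
    """Return the click-through rate as a percentage of impressions."""
    return 100.0 * clicks / impressions

# A typical online display ad: 5 clicks per 10,000 impressions.
display_ctr = ctr(5, 10_000)
print(display_ctr)  # 0.05 (percent)

# Even a highly targeted Facebook ad rarely clears 2 percent.
# Illustrative numbers: 150 clicks per 10,000 impressions.
targeted_ctr = ctr(150, 10_000)
print(targeted_ctr)  # 1.5 (percent)

# The flip side: the share of users who never click at all.
print(100.0 - targeted_ctr)  # 98.5 — "more than 98 percent don't click"
```

Even the best-case targeted figure leaves the overwhelming majority of viewers untouched, which is the point of the visibility argument.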

The third reason is especially important, and that’s confirmation bias. Because people can see the ads, they tend to click on ads with messages that match the beliefs they already have. That’s why new analyses of fake news stories suggest that they have little or no effect on elections. Fake news stories draw attention, but they don’t change minds, because only sympathetic readers view and believe them.

So who has the power to change minds and flip elections? The platforms: Google, Facebook and, to a much lesser extent, Twitter. This is because no matter what content people are generating — and when Cambridge Analytica is placing ads, it is a content provider — Google and Facebook have complete control over what people will see (“filtering”) and in what order information will be displayed (“ordering”).

Years of research I have been directing on the Search Engine Manipulation Effect (SEME) and other new means of online manipulation have shown the mind-blowing power of filtering and ordering to rapidly shift people’s opinions — by up to 80 percent in some demographic groups. These shifts occur without people’s awareness and without leaving a paper trail for authorities to track. Fake news stories and YouTube videos leave trails, but ephemeral stimuli like search results are generated on the fly and then disappear, leaving no trace. It’s those ephemeral stimuli — controlled by unregulated platform algorithms — that are the real threats to freedom and democracy.

Those lists we keep seeing — like Google’s “autocomplete” search suggestions or the news feeds on Facebook — determine what content we will see and what content we will never see — in other words, what content will be censored. That, as George Orwell warned and as Chinese officials know well, is how you control people: by limiting the information flow.

Cambridge Analytica doesn’t have the power to do that; only the big platforms can.

In 2016, the world was disrupted by hubris. Executives at Google and Facebook were sloppy and complacent — overconfident about the outcome of the election. Asleep at the wheel, especially during the final weeks of the campaign, they allowed conservative content to spread like wildfire on their platforms. But as historian Niall Ferguson has been reminding us in recent months, they will never let this happen again.

That is why we need to pay close attention to the new calls we’re hearing from thought leaders on both the left (e.g., George Soros) and the right (e.g., Tucker Carlson) for the strict regulation of the Big Tech platforms. The EU has already put a regulation plan on the table.

If you have the least doubt about where the real power lies, just glance at recent headlines. Facebook not only just banned Cambridge Analytica from its platform, it even banned the whistleblower!

Don’t be fooled. Content doesn’t matter anymore. Millions of individuals and organizations are generating content every day, but only two companies are deciding how that content will be filtered and ordered for billions of people worldwide.

Just two.

Robert Epstein (@DrREpstein) is senior research psychologist at the American Institute for Behavioral Research and Technology, the author of 15 books on AI and other topics, and the former editor-in-chief of Psychology Today. He is currently working on a book called Technoslavery: Invisible Manipulation in the Internet Age and Beyond.

The views and opinions expressed in this commentary are those of the author and do not reflect the official position of The Daily Caller.