Last week Facebook reported that a Russian troll farm had purchased $100,000 worth of ads on its social network, ads intended to send “divisive social and political messages” to millions of Americans. This revelation does not shock. We already know Russian actors attempted to interfere with the presidential election. We already know that fake news sites published hoaxes and spun facts to benefit Donald Trump’s campaign. And we already know that Facebook did nothing to stop it until a public outcry forced its hand.

“What did Russia get from Facebook for its $100,000? This, and worse, seen by 23 million to 70 million people.” https://t.co/KbAGiJG7SE pic.twitter.com/JV0792xwm5 — Kevin Poulsen (@kpoulsen), September 8, 2017

It’s unlikely that these ads tipped the election in Trump’s favor. But they are still instructive: Like all good propaganda, they tap into existing prejudices and polarizations, spreading fact-free messages that both scapegoat marginalized communities and amplify the fear factor. They also underscore how these messages are being delivered. In an era that has seen the emergence of comprehensive, right-wing propaganda campaigns, two tech companies—Facebook and Google—have started to monopolize how we consume information. And both entities have become useful megaphones for propagandists.

Far-right propaganda is now everywhere, whether it is Trump TV being broadcast on Facebook or fake news items that appear as advertising on Google. The problem is exacerbated by scale. “Google has an 88 percent market share in search advertising, Facebook (and its subsidiaries Instagram, WhatsApp and Messenger) owns 77 percent of mobile social traffic, and Amazon has a 74 percent share in the e-book market,” Jonathan Taplin recently noted in The New York Times. “In classic economic terms, all three are monopolies.”

News outlets, meanwhile, struggle to achieve the same reach that non-state actors have been able to purchase so easily. Facebook and Google didn’t make Trump president, and Mark Zuckerberg is not some unwitting Russian agent. But they have become the most powerful arbiters of what people read and watch online, and the consequences of that are evident: The far right thrives in part because its message saturates the internet.

Both companies are loath to claim any responsibility for this state of affairs, insisting that they are neutral vessels for the information their users and clients generate. Zuckerberg famously denied his site had a problem at all, saying that it was “crazy” to think that fake news influenced the election and that “voters make decisions based on their lived experience.” Later he backtracked. In February, he published a meandering 6,000-word Facebook note claiming the site would take steps to censor “sensationalist” headlines and content. Facebook also eventually blocked ads from known fake news sites and introduced a “flagging” system intended to label hoax posts.