Today Facebook is expected to tell a US Senate committee that posts created by Russia-linked accounts reached 126 million Americans during and after the presidential election. Twitter will have its own revelation: that more than 30,000 accounts linked to Russia generated 1.4 million tweets during the final stretch of the campaign.

The admission comes just a day after Robert Mueller's investigation into collusion between the Trump campaign and Russia revealed indictments against three campaign aides, and it all adds to the sense that something fishy has been going on in US democracy.

Meanwhile, back in the UK there has been a drip, drip of stories about the Brexit campaign's spending on digital ads during the EU referendum, led largely by reporting from the Observer. In particular, stories have focused on the Leave campaign’s relationship with a data and ad targeting company called Cambridge Analytica, which is ultimately owned by one of Trump's biggest backers, Robert Mercer.

An extra level of frisson has been added to the reports in recent days, with news that Cambridge Analytica approached Julian Assange – still hiding out in the Ecuadorian embassy in London – about the emails hacked from the Hillary Clinton campaign which are thought to have helped swing the result Trump's way.

The web of connections between Trump, Brexit and Russia, all strung across a social media landscape with Facebook at its core, suggests that yes, there are people who have colluded to use modern technology to nudge democracy in the direction they desire. Hopefully, continued picking at the threads by investigators, politicians and journalists will uncover exactly how far collusion, and conspiracy, go.

And yet the hunt for thrilling narratives risks obscuring a far bigger story about how our digital ecosystem is shaping democracy and society without any need for malign interference.

Much of the breathless reporting about the role of the web in electing Trump and triggering Brexit appears to stem from a lack of understanding about how digital advertising works. The draw of advertising online, and on Facebook in particular, is that you can show specific ads to specific groups of people, rather than buying up a billboard or newspaper page and praying the right people see it. If Russian fronts were buying online political ads in the run-up to a vote, then that may be in breach of law in both the US and UK. If the campaigns on both sides of the Atlantic weren't doing it, that would just be incompetence.

But focusing on individual examples of potential illegality misses the more fundamental point about almost all digital advertising – that it is deliberately, and explicitly, divisive. It works on dividing groups of people into those who will respond to different products, messages and ideologies. You can perfectly legitimately divide Facebook's audience into those who like cupcakes, and those who don't, or those who hate Jews and those who don't, then make sure you only show the ads designed for cupcake lovers or Jew haters to the people who will respond well to them.
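The mechanics of that segmentation are simple enough to sketch. The toy code below illustrates the general idea of interest-based targeting only; the interest labels, user data and function names are all invented for this example, and bear no relation to Facebook's actual advertising systems:

```python
# Toy sketch of interest-based ad targeting: each user carries a set of
# inferred interest labels, and an ad reaches only the users whose labels
# match its targeting criteria. All names and data here are hypothetical.

def matches(user_interests, targeted_interests):
    """An ad 'matches' a user if they share at least one targeted interest."""
    return bool(user_interests & targeted_interests)

def select_audience(users, targeted_interests):
    """Return only the users who will ever see this ad."""
    return [name for name, interests in users.items()
            if matches(interests, targeted_interests)]

users = {
    "alice": {"baking", "cupcakes"},
    "bob":   {"football"},
    "carol": {"cupcakes", "gardening"},
}

# An ad targeted at cupcake lovers never reaches bob at all: he has no
# idea the message even exists, which is precisely the point.
print(select_audience(users, {"cupcakes"}))  # ['alice', 'carol']
```

The salient property is the last line: the people outside the targeted segment don't see a message they might disagree with – they see nothing at all.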

We shouldn't just be asking who's actively manipulating or abusing this system, but what that system itself means for the future of political digital campaigning.

Yet even a greater understanding of digital advertising only gets at part of the bigger picture, in which social media, and Facebook in particular, with its two billion-plus users, is changing the way we interact with each other.

By algorithmically prioritising posts that it thinks will encourage us to spend more time with Facebook, the company is super-charging human behaviours. Facebook tweaks its algorithm based on signals – likes, clicks, comments and much, much more – that it believes indicate what we want, and what will keep us on Facebook. That means that along with our natural love of cute animals, Facebook plays to less innocent traits such as our predisposition towards views we agree with, our discomfort at being challenged, the ease with which we become outraged and, ironically, the seductive appeal of conspiracy theories.
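As an illustration only – the signals and weights below are invented, and Facebook's real ranking model is proprietary and vastly more complex – engagement-driven prioritisation can be sketched as a weighted scoring function over predicted interactions:

```python
# Toy sketch of engagement-based feed ranking: each post carries predicted
# engagement signals, and the feed is sorted by a weighted score. The
# signal names and weights are invented for illustration.

WEIGHTS = {"likes": 1.0, "comments": 4.0, "shares": 8.0}

def score(post):
    """Higher predicted engagement means a higher position in the feed."""
    return sum(WEIGHTS[s] * post["signals"].get(s, 0) for s in WEIGHTS)

def rank_feed(posts):
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": "cute-animal",  "signals": {"likes": 120, "comments": 5,  "shares": 2}},
    {"id": "outrage-bait", "signals": {"likes": 40,  "comments": 60, "shares": 30}},
]

# The outrage-bait post wins despite fewer likes, because comments and
# shares are weighted heavily: 40 + 240 + 240 = 520 vs 120 + 20 + 16 = 156.
print([p["id"] for p in rank_feed(posts)])  # ['outrage-bait', 'cute-animal']
```

Under any weighting of this shape, whatever provokes the most reaction rises to the top – regardless of whether the reaction is delight or fury.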

Like our taste for high-carb foods, those traits exist for an evolutionary purpose. But just as an unlimited supply of biscuits often leads to obesity, feeding us with more and more of the digital content we are hard-wired to respond to can leave us ill-informed and insular. And, because Facebook is all about personalisation, each experience of this world is unique, tailored and moulded to fit our own special set of prejudices. What one person sees is not the same for any other, and so our shared experience and terms of reference are diminished.

So when we wonder why the UK can be so bitterly split on Brexit, or why a hardcore of Trump backers will dismiss the clearest evidence of his character flaws, we need to look at a context that includes not just economics, or demographics, but also the digital structures that dictate our online lives.

This is not to say we shouldn't be concerned that Russian troll farms are using the web's anonymity to influence conversations, or that billionaires are buying targeted ads to sway the electorate. We may even one day be able to quantify the impact that deliberate, malicious online activities have had on the vote for Trump and the vote to leave the EU. They may even turn out to have been decisive.

But uncovering how our online world has been manipulated by bad people is a lot less important than working out how it is affecting us without any malign intervention at all.