Philip N. Howard is a professor of Internet studies at the Oxford Internet Institute and Balliol College at the University of Oxford. Bence Kollanyi is a researcher at the Oxford Internet Institute.

Facebook and Twitter have taken the important step of handing over to Congress thousands of ads that were bought and circulated by Russian strategists to influence our elections. These examples show just how expertly the Russian propaganda machine can craft messages that stoke fear, hatred and panic among American voters.

But sharing examples is only the first small step in what should be a systematic analysis of foreign political influence on American voters through online networks. Facebook and Twitter are unique as media companies because they provide us with platforms for communicating with our networks of family and friends. The next step in understanding Russian interference involves sharing network data — not just ad examples.

At the Oxford Internet Institute, we have been studying how governments use social media to manipulate public opinion — in their own country and in others. Information provided by Facebook and Twitter has already allowed us to fill in some of the details.

We know, for example, that Russian strategists bought 3,000 Facebook ads with a budget of $100,000 and that RT, the news outlet funded by the Russian government and previously known as Russia Today, spent $274,100 on 1,823 promoted tweets. We know Russians set up and managed fake accounts of users who pretended to be voters. We have found them managing networks of highly automated accounts. We know they use such accounts to direct political conversation among their own citizens (some 45 percent of Russian Twitter is managed by bots). We know they understand how to use Twitter's and Facebook's algorithms to push propaganda at voters in democracies. And we now know what some of the ads look like.

Yet to really understand the influence of ads — as well as the impact of surreptitious content that isn't as obvious as a purchased ad — we need to understand the networks, not the examples. Twitter recently offered a helpful blog post on its reaction to the ad examples Facebook shared with lawmakers. Twitter used Facebook's list of known fake American voters originating in Russia to identify a set of suspicious users on its own platform. Twitter then closed a handful of accounts and provided the Senate Intelligence Committee with examples of the ads that the main Russian news agency, RT, had pushed across the platform.

Because Twitter now suggests that content from Russian news agencies may be a proxy for the larger campaign of dark ads and bot networks, we can further visualize how the Russian content targeted voters around the United States. For example, my team and I recently found that junk news — propaganda and ideologically extreme information — was concentrated in swing states. Russian content also appears to have been steered into states where the Trump campaign was doing well.

But working without the cooperation of Twitter and Facebook means that our models aren't perfect. No doubt, Twitter and Facebook have higher-quality data on all this. They certainly employ some of the best network analysts and data scientists in the world. Yet it has taken an FBI inquiry, congressional investigations, nearly a year of bad press and pressure from outside researchers such as us to dislodge some examples of Russian interference.

Evidence of campaign coordination and election interference will be in the network data, not just the ad buys. The next step should be open collaborations that explain network effects and help restore public trust in social media.