We now know that a Russian organization spent two years trying to influence American voters using Facebook. In a blog post published on the evening of Sept. 6, Alex Stamos, Facebook’s chief security officer, wrote that the company has discovered about 3,000 political ads that ran between June 2015 and May 2017, paid for through fake accounts that “likely operated out of Russia.”

The 3,000 ads cost about $100,000 over the two years, according to Stamos’s post. They were “connected to about 470 inauthentic accounts and Pages” that violated Facebook’s policies. They focused largely on “divisive social and political” subjects like gun rights, immigration, and LGBT issues, and less on particular presidential candidates or the election itself. And Facebook suspects, though it can’t confirm, that some of them were connected to a Russian troll farm in St. Petersburg called the “Internet Research Agency,” as one company official told the Washington Post (paywall). Facebook has given the information to congressional investigators looking into Russia’s attempts to influence the 2016 US election and possible collusion by the Donald Trump campaign.

However, what’s just as striking is what Facebook has not disclosed about the ad campaigns—including the nature of the ads themselves, how many users saw them, who those users were, and how they may have been targeted. Simply put, Facebook isn’t divulging (publicly, at least) any measure of how much influence the ads had on American voters.

Here are the questions Facebook has yet to answer, and why they matter. We’ve sent this list of questions to Facebook; the company declined to comment, citing the ongoing government investigation.

What were the demographics of the users who saw the ads, and how were they targeted?

Although $100,000 spread over 3,000 ads is a relatively minuscule campaign, those ads could be highly effective if they were tailored and shown to just the right people.

Vast amounts of data are available on American citizens, from public voting records, to information collected online by market research firms, to what people themselves share publicly on social media. Organizations can collect that information, synthesize it, and apply behavioral modeling to create personality profiles. In principle these could be used to identify undecided voters in swing districts, figure out which issues mattered to each one (job creation for some, abortion for others, for example), and show them ads designed to influence their vote a certain way. That could theoretically make a relatively small ad campaign, triggered at the right time, decisive in an election.
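The targeting logic described above can be sketched in a few lines of code. This is a hypothetical illustration only; the voter records, issue names, and scoring are all invented for the example, not drawn from any real campaign.

```python
# Hypothetical micro-targeting sketch: all records and scores are invented.
# A real profile would be synthesized from voting records, market-research
# data, and social-media activity, as described in the text.
voters = [
    {"id": 1, "district": "swing", "undecided": True,
     "issue_scores": {"jobs": 0.9, "abortion": 0.2}},
    {"id": 2, "district": "safe", "undecided": True,
     "issue_scores": {"jobs": 0.1, "abortion": 0.8}},
    {"id": 3, "district": "swing", "undecided": False,
     "issue_scores": {"jobs": 0.4, "abortion": 0.6}},
]

# Keep only undecided voters in swing districts, and pair each one
# with the issue their profile suggests matters most to them.
targets = [
    (v["id"], max(v["issue_scores"], key=v["issue_scores"].get))
    for v in voters
    if v["district"] == "swing" and v["undecided"]
]
# targets -> [(1, "jobs")]
```

Each entry in `targets` would then determine which ad a given voter sees, which is what makes a small, well-timed campaign potentially outsized in effect.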

Knowing whom the campaign targeted would give us insight into what was being tested and what the people behind it were after. Did they target swing counties and swing states in the run-up to the election? Did they focus on groups of certain races, ages, or genders?

Facebook says only about one quarter of the ads in the Russian campaign were geo-targeted, i.e. aimed at people in a particular area. The implication is that the rest were randomly shown to people across the US. But geo-targeting isn’t the only way to target ads.

David Carroll, a professor at the Parsons School of Design in New York, said in a phone interview on Sept. 7 that he believes the campaign could have used another Facebook ad feature called Custom Audiences. It allows advertisers to compile their own list of people, upload it to the platform, and match the people on the list to Facebook users by name, email address, and phone number. If the Russians had uploaded such a list, Facebook would have no way of knowing how the people on it were originally selected, whether by demographics, location, or anything else.

However, it certainly would be able to see where the ads ended up. Facebook is not giving out those details, and Custom Audiences aren’t mentioned in Stamos’s blog post or in a big report (pdf) about information warfare that Facebook published in April.
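To see why Facebook can match a Custom Audiences list but can’t see how it was compiled, here is a minimal sketch of the matching flow. Facebook’s documented process has advertisers normalize and SHA-256-hash identifiers before upload; Facebook then intersects those hashes with hashes of its own user data. The names and emails below are invented for illustration.

```python
import hashlib

def normalize_and_hash(identifier: str) -> str:
    # Custom Audiences matching works on normalized (trimmed, lowercased),
    # SHA-256-hashed identifiers such as emails and phone numbers.
    return hashlib.sha256(identifier.strip().lower().encode("utf-8")).hexdigest()

# Advertiser's side: a pre-selected list. How it was compiled (demographics,
# location, something else) is invisible to Facebook -- only hashes arrive.
advertiser_list = ["Alice@example.com ", "bob@example.com"]
uploaded_hashes = {normalize_and_hash(e) for e in advertiser_list}

# Facebook's side: hash its own user identifiers and intersect.
facebook_users = {"alice@example.com": "user_1", "carol@example.com": "user_3"}
matched = [uid for email, uid in facebook_users.items()
           if normalize_and_hash(email) in uploaded_hashes]
# matched -> ["user_1"]
```

The point of the sketch: Facebook learns which of its users matched (and thus where the ads ended up), but the selection criteria behind the list never leave the advertiser.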

In addition to demographics, there is the question of reach. Facebook offers an array of ad services, and $100,000 among 3,000 ads could buy audiences of many different sizes, depending on how they were targeted. Facebook is not giving those details either.
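A back-of-envelope calculation shows how wide the range of possible reach is. The $100,000 figure comes from Stamos’s post; the CPM (cost per 1,000 impressions) values below are assumptions for illustration, not numbers Facebook has disclosed for this campaign.

```python
# Rough reach estimate. Only total_spend is from Facebook's disclosure;
# the CPM values are assumed for illustration.
total_spend = 100_000  # dollars, per Stamos's post

for cpm in (2.0, 5.0, 10.0):  # assumed cost per 1,000 impressions
    impressions = total_spend / cpm * 1000
    print(f"At ${cpm:.2f} CPM: {impressions:,.0f} impressions")
# At $2.00 CPM the budget buys 50,000,000 impressions;
# at $10.00 CPM, 10,000,000.
```

Even under conservative assumptions, the same budget could plausibly have reached anywhere from millions to tens of millions of impressions, which is why the targeting details matter so much.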

What were the 470 accounts connected to the ad campaign?

Non-Americans who spend money to influence an election in the US are breaking US laws and regulations. However, an anonymous Facebook official told the Washington Post that the company will not be releasing the names of the accounts and Facebook pages connected to the Russian campaign, citing its data policy and federal laws.

Keeping that information secret means that any Facebook user who saw posts from these accounts will never know that they were authored by a Russian propaganda operation. Further, it means we don’t know what sorts of materials, other than ads, those accounts published or how big their audiences were.

What was in the ads, and what types of ads were they?

We know the ads focused on divisive topics and, to a lesser extent, the election and candidates themselves. But we don’t know what forms they took. Were they images or videos or links to articles outside of Facebook? Did the accounts create content themselves, or reuse content created by others? If they linked to other websites, then which ones?

There are two main types of ads on Facebook: those that look like typical Facebook posts in your timeline, distinguished by the word “Sponsored,” and those that appear in the sidebar to the right of the timeline, like conventional ads. Both types can be used to spread fake news and divisive messages, but sponsored posts are harder to distinguish from regular posts by your friends, so they can look more legitimate.

In Stamos’s blog post, he didn’t say which kinds of ads the Russian campaign paid for, but wrote, “The behavior displayed by these accounts to amplify divisive messages was consistent with the techniques mentioned in the white paper we released in April about information operations.”

That white paper described multiple fake news campaigns conducted during the 2016 presidential election, but did not mention advertisements, sponsored posts, or any other kind of paid Facebook content. It focused on misinformation that users and groups posted normally and spread organically, without paying for it, and therefore without targeting it to specific users. It briefly mentioned fake news operations that targeted markets and regions, but did not elaborate.

Was there any overlap between the content used by the Russian campaign and other known campaigns?

Did the Russian operation push its own, unique message, or did it use the same links and content that other organizations—particularly American ones—were pushing at the same time? In particular, was there any overlap with content pushed by presidential campaigns or their associated groups and fundraising committees?

Researchers have found many cases in which bots and fake accounts tied to Russia have used social media to push messages consistent with the American far-right. A recently launched project, Hamilton 68, continuously tracks Twitter accounts it says are tied to Russian influence operations. These accounts regularly post links to right-wing American news organizations like Breitbart and Fox News, and frequently retweet Donald Trump.

If the Russian ad campaign on Facebook shared the same content as American groups or campaigns during the US election, that wouldn’t in itself prove any kind of collusion between them. But it would certainly shed light on the Russians’ motives.

A related question is whether the Russian campaign used not only similar content, but also similar targeting tactics, to other campaigns. In 2016 both Ted Cruz and Donald Trump hired a data firm, Cambridge Analytica, which says it can use micro-targeting and behavioral modeling to identify and persuade voters online. The company is being probed by congressional investigators examining potential ties between it “and right-wing web personalities based in Eastern Europe who the US believes are Russian fronts,” Time reported in May. These are the same investigators to which Facebook provided information about the Russian ad campaign this week.

Political campaigns being advised by Cambridge Analytica would likely use Facebook’s ad features to target certain demographics or upload names of pre-targeted users to Facebook’s Custom Audiences. In theory, Facebook could examine the tactics used and the data uploaded in the Russian ad campaign, and look for similarities with American campaigns. Presumably, Facebook has given congressional investigators all of the relevant data that would allow for such a comparison.