
We, the undersigned, met in December at Betaworks Studios to discuss immediate steps the major social media companies can take to help safeguard our democratic process and mitigate the weaponization of their platforms in the run-up to the 2020 U.S. elections. Collectively, we represent a Facebook co-founder; former Facebook, Google and Twitter employees; early Facebook and Twitter investors; academics; non-profit leaders; and national security and public policy professionals. While many of us are working on longer-term ways to create a healthier, safer internet, we are proposing more immediate steps that could be implemented before the 2020 election for Facebook and other social media platforms to consider.

Technology and social media have, over the last decade, opened opportunities and given voice to so many, most importantly people from underrepresented communities. Never before have so many people had access to communicate and find information. However, the scale at which a small number of social media platforms now operate is also unprecedented in human history. Never before have we had a handful of companies mediate how two billion people communicate with each other, how we find information, and how we consume media. Likewise, platforms are being compromised at a scale that few could have imagined, as seen, for example, in the Russian interference in the 2016 U.S. elections. There is ample evidence that 2020 will only accelerate that trend.

Addressing the systemic issues and business decisions that have allowed social media platforms to play such an outsized role in our democratic process will take a larger, whole-of-society approach. A number of us do not think platforms will self-regulate, and we are not optimistic that the U.S. government will legislate or regulate in time to protect our democratic process; therefore, we have 10 ideas that the platforms could enact now. As we are fewer than 11 months from the next U.S. presidential election, there are intermediate steps — some quite simple — technology platforms could take to mitigate damage and the flow of disinformation — without censoring organic content or reversing their current policy of not fact-checking political ads. In our collective experience, these recommendations are doable, noncontroversial, and could have meaningful impact.

This is a living document. Between now and November 2020, we will continue to update this list of ideas as we seek input from the broader community — and, we hope, remove items as the platforms enact them.

What can be done … now

1. Remove and archive fraudulent and automated accounts

Social media platforms are designed around connecting real people. But the reality is that fraudulent and fake accounts are employed by a range of bad actors to seed and spread disinformation. A fake account is one that impersonates a person or entity that doesn’t exist. For instance, the Financial Times reports there are nearly 400 million duplicate, misclassified or false accounts on Facebook — a number that is growing relative to the overall user base. Likewise on YouTube, false accounts game the platform’s algorithms to push inflammatory content. It is clear that all of the platforms must urgently work to identify and remove fraudulent accounts and report on their removal.

2. Clearly identify paid political posts — even when they’re shared

All paid posts, especially those on election-year issues or on behalf of candidates, should be more prominently labeled than they are today. Further, when a paid post is shared by a user today, the labeling disappears (see this article for details). We believe labeling should persist when people share and organically re-share content, so that its provenance as a paid post remains clear.

3. Use consistent definitions of an ad or paid post

Google, Twitter and Facebook do not share common language or definitions for political ads — the primary social media companies should agree on a common, broad set of definitions for political ads and adopt them across platforms. For instance, Google and Twitter use narrower definitions than Facebook, focusing on express advocacy or electioneering communications. Analysis of the IRA ads on Facebook and Instagram shows that about 8% of the ads mentioned candidates or parties, while 30% of the ads included words related to elections and voting.

4. Verify and accurately disclose advertising entities in political ads

Sources who pay for political posts should be fully verified and accurately labeled. As with ads running on television, the “paid for by” disclaimer should match the advertiser or group that is actually running the campaign. Currently, anyone who pays to amplify a post can label it with the name of their choosing, allowing any made-up group to appear to be a legitimate PAC. A study found that more than half of the sponsors of political ads on a social media platform were unidentifiable, untrackable groups that left no public footprint. A cross-check between PAC registrations or Federal Election Commission numbers and advertisers, for example, should be considered.

5. Require certification for political ads to receive organic reach

While we’d prefer that all ads be fact-checked, platforms should consider creating a two-tiered system in which only ads by candidates or PACs that meet a secondary certification bar would receive additional organic reach. The certification criteria should be publicly transparent, whether they are determined internally or by an independent third party.

6. Remove pricing incentives for presidential candidates that reward virality

The current pricing structure on Facebook creates a financial incentive for political ads to be salacious or outrageous to ensure maximum virality, and to target the most partisan, narrow constituencies. A pricing structure that rewards breadth of engagement rather than depth would be more appropriate for political advertising by general-election candidates. Additionally, a limit on targeting to no fewer than 50,000 people — the level above which TV and radio ads trigger election rules — would forestall many of the issues we are already seeing with hyper-targeting. These changes would dampen perverse incentives while providing open access to all.

7. Provide detailed resources with accurate voting information at top of feeds

To stem confusion about voting, the tech platforms could offer people the most basic and important information: when, where, how and who can actually vote. This is fundamental to democracy and non-partisan. In some cases, platforms already provide users with basic polling-location information. They should extend the period this information is available, amplify its prominence, and supplement it with information on upcoming election dates, how to vote early or absentee, and any other resources provided by independent or governmental sources to encourage informed voting. Platforms should use their incredible power and reach to encourage voting, including through push notifications, interstitial notices in feeds, and inbox messages, to ensure maximum awareness of when and how to vote.

8. Provide a more transparent and consistent set of data in political ad archives

In response to the issues of the 2016 elections, major tech platforms including Google, Facebook, and Twitter launched political ad libraries as part of their self-regulatory measures. Although a laudable first step, the current ad libraries fall short of providing full transparency around the sophisticated targeting tools advertisers use, which differentiate social media from traditional media advertising. We recommend increasing transparency by adding more targeting data to the ad libraries. Currently, the platforms’ ad libraries provide ad impressions (i.e., the ultimate outcome of audience targeting, user engagement, and other algorithmic decisions), which is an important and useful measure of ad exposure. However, they do not offer discrete information about ad targets and targeting methods (e.g., custom lists, look-alike audience generation, interest-based options); as a result, the libraries still raise normative concerns about ethical campaign practices, data use, and voter privacy. This data should also be available in real time and accessible via API.

9. Clarify where they draw the line on “lying”

Facebook’s current policy is to allow politicians to include lies in political ads. Facebook must be clearer about where it draws the line. For instance, Facebook has already said it will not allow false information about voting, or disinformation about when or how the census occurs. Will it allow politicians to use altered video or audio of other candidates in ads? What about fabricated video — so-called deep fakes or shallow fakes?

We believe Facebook needs to be explicit about whether there are any limits or boundaries now, rather than wait to respond to a potential ad that uses clear disinformation tactics.

10. Be transparent about the resources they are putting into safety and security

Facebook is one of the few platforms that has disclosed the resources it spends on safety and security. We propose further transparency on exactly what the money is spent on, and how much goes to election integrity and security, to enable public scrutiny and potential government review. Other technology platforms should likewise disclose what they are spending and inform the public about their commitment to safety.

Conclusion

The meeting we had in December grew out of a discussion Chris Hughes and I had about what technology companies could do to safeguard the election. We recognize that the platforms continue to update and evolve their policies in real time, and that others, including Facebook employees, have offered suggestions that we support. We are committed to continuing this conversation at Betaworks Studios over the year. Please join us for relevant events — and please comment below — we welcome input from others who are working in this space.

Thank you to everyone who participated and in particular to Yael Eisenstat for help writing this.

John Borthwick and Chris Hughes