Networked technology has the power to expand our democracy and give more people a say than ever before. The Internet has given an urgently needed megaphone to young people organizing to stop climate change and movements working to hold powerful people and institutions accountable. But over the last decade we’ve witnessed how this same power can be used to silence voices, manipulate opinion, and trample our basic rights at a mass scale.

As the US approaches an election that will shape the future of our nation, there’s an increasing focus on the role that companies like Facebook and Google play in our information ecosystem. The Internet is becoming more and more centralized. Silicon Valley giants have amassed unprecedented power to influence public opinion and control the free flow of ideas. And their monopolistic data harvesting business models make it possible to weaponize that power in ways we are only beginning to understand.

What we should have learned from Cambridge Analytica, but didn’t

While concerns about big platforms’ privacy and content moderation practices have been growing for years, they reached a boiling point with the Cambridge Analytica scandal. But did we learn anything?

People were rightfully outraged by Cambridge Analytica’s scheme to use misappropriated personal data to micro-target and manipulate individual voters. But the real scandal is that aside from misusing data, Cambridge Analytica was essentially using Facebook as it is designed to be used.

Facebook has built the greatest platform in the history of the world for influencing human behavior on a massive scale. Simply put, Facebook ads are too effective. Most of the time the stakes are low: people might be persuaded to buy something they don’t need or to use an inferior laundry detergent. But when it comes to political advertising, the very foundations of democracy and a free society are at risk. That is why we need to insist on more significant, systemic changes to how these ads work.

Cambridge Analytica showed us how the enormous amount of data that social media companies vacuum up can be used not just to invade our privacy, but to manipulate how we think. But since then, public discussion has focused mostly on speech itself, rather than on the data harvesting and algorithmic amplification that turns speech into a weapon.

Instead, we should focus on the underlying, systemic problems. Platforms can and should be held accountable for business practices that are fundamentally incompatible with democracy and human rights. But the details matter. Platforms need to listen to the experiences of marginalized people when crafting their moderation and ads policies.

From the Stop Online Piracy Act (SOPA) to SESTA / FOSTA we’ve seen too many times how misguided tech policy decisions can backfire, and harm the very people that lawmakers and policy wonks say they’re trying to protect. But it’s also not enough to say "hands off the Internet," at a time when it’s clear that Silicon Valley giants are exploiting that mentality to profit off of abusive practices. So what should we be calling for?

Some ideas for how to un-break the Internet

There is no single silver-bullet policy that will fix everything that’s wrong with the Internet or prevent the spread of hate speech and misinformation. Many proposals we’ve seen, from both Democrats and Republicans, are deeply misguided and would lead to the widespread censorship of marginalized voices, or other adverse effects.

Here are some ideas that we think might work, in the interest of expanding the conversation about content moderation. We propose that instead of playing "whack-a-mole," calling out platforms over specific accounts or posts they allow, we should take a more systemic approach to platform accountability, and call for:

100% transparency on advertising policies and spending.

Facebook has the ad library, but it’s barely functional. And sponsored content posts like the ones Mike Bloomberg is buying up en masse weren’t even included until recently. We need every Big Tech company to offer total transparency into the ads that run on their platforms, who paid for them, the reasons they were approved or disapproved, and who they were targeted at. Ad transparency libraries should be easily searchable and provide researchers and journalists with the tools and data they need to provide meaningful oversight.

More transparency on content moderation decisions.

Paid advertising is a small fraction of all online speech. To protect people’s rights, we need full visibility into how content moderation decisions are made. Platforms should provide journalists and researchers access to a complete archive of every post that has been removed for violating speech policies and why it was removed. There should be exceptions to protect privacy, such as posts depicting self-harm or child exploitation; for those, platforms should instead provide data on the number of posts they remove. They should also provide transparency into posts that were amplified or de-prioritized, and why. Transparency into how platforms police speech is the only way to ensure that, for example, policies intended to combat hate groups don’t lead to the automated removal of videos by anti-racist groups like the Southern Poverty Law Center, or that trans women responding to transphobic attacks aren’t systematically suspended from Twitter.

A moratorium on algorithmic amplification for social media posts.

It’s one thing for platforms to allow people to post what they want. It’s another thing for those platforms to put their thumb on the scale, artificially amplifying content that they think will generate engagement and showing it to the people they think will be most susceptible to it. (They also artificially suppress content that they think people don’t want to see, or to coax people and organizations into paying for ads.) From Facebook’s News Feed to YouTube recommendations, we’ve seen that the platforms are simply not able to make these decisions responsibly. We agree with the platforms that they should not be the arbiters of what is true or false, but then they should also not be the arbiters of what goes viral and what doesn’t. We’re calling for an industry-wide moratorium on this practice, with exceptions for user-controlled sorting, like blocking certain types of posts. More transparency will help determine whether there are ways to do this ethically.

A moratorium on micro-targeted political advertising.

The ability to micro-target ads based on a complete psychological profile of a person, as Cambridge Analytica claimed to do, presents unique problems that our democracy is not currently equipped to address. It takes the truth-distorting spin that has been common on outlets like CNN and Fox News for years and turns up the volume. You can make much more outrageous claims if the only people who will see them are those already primed to believe you. There are ways to make money from advertising without resorting to this practice. We think companies like Facebook should immediately cease micro-targeted political advertising until there are proper policies in place to prevent abuse and discrimination. Google currently only allows targeting of election ads based on age, gender, and geography — that’s better than Facebook’s policy, but it should be expanded to cover all political ads, not just ones run by candidates.

Equalizing cost-per-click for political campaigns.

The way Facebook’s ad policies work right now, politicians who are willing to spread lies have an advantage. Incendiary ads are more likely to generate clicks, which makes them cheaper to run. Social media companies could level the playing field by charging the same fixed, transparent cost-per-click for all political candidate campaigns. While we’re uncomfortable with the idea of Big Tech companies determining the veracity of political speech, we also don’t think politicians should get a discount for being willing to lie or exaggerate.

Strong Federal data privacy legislation.

So many issues that are seen as problems with speech are actually problems with data, which allows that speech to be weaponized. We need Congress to stop dragging its feet and pass meaningful Federal data privacy laws that dramatically reduce the amount of information companies can collect on us in the first place, and prevent them from selling that data to advertisers in ways that can be used to abuse our rights.

Decentralized alternatives to Big Tech.

Silicon Valley’s centralized, surveillance capitalist business model is not the only way the Internet can be organized. Decentralized alternatives like open source blockchain projects and crypto networks could provide solutions to many of the problems we’re seeing on centralized platforms. We should encourage lawmakers to be thoughtful when approaching these technologies, and ensure that policies aimed at preventing scams and money laundering don’t inadvertently undermine Internet freedom and privacy. And we should push for policies like requiring adversarial interoperability, to foster competition and undermine the monopoly status of companies like Facebook and Google.

We don’t have all the answers. But we hope these ideas will help the civil society community brainstorm. For more of our thoughts on this, check out this interview our deputy director Evan Greer did with the folks over at EFF.