Fake news captures attention and is corrosive. Like many similar social problems online, it is a symptom of surveillance capitalism. Surveillance capitalism explains the economic incentives that drive media production and distribution on internet platforms like Facebook. The business model used by internet platforms relies on collecting data and using that data to create profiles of users to predict their interests and behavior.

This allows Facebook to serve tailored advertisements to users. It’s this matching of advertisements to people that makes Facebook incredibly valuable to advertisers — an advertisement that more closely matches someone’s interest is more likely to end in a sale — and companies are willing to pay top dollar for this.

The economic incentives that push Facebook to collect as much user data as possible also explain why we should not rely on Facebook to stem the tide of fake news. Facebook does not have a financial stake in dispassionately disseminating true or unbiased information. Instead, it has a financial incentive to increase traffic on the platform (more eyeballs to view ads) and increase user participation through cheap, data-generating behavior, such as superficial “Likes” and shares that Facebook can then analyze to better model and predict user behavior.

Shoshana Zuboff writes that “demanding privacy from surveillance capitalists [like Facebook] ... is like asking Henry Ford to make each Model T by hand.” Similarly, to the extent that fake news enables continued surveillance and tracking by Facebook, we should not expect the company to be interested in genuine solutions that might threaten its business model.

Fake news may be only a symptom of a deeper set of political economy issues, but studying the phenomenon usefully highlights two distinct types of more general social problems that plague networked media and require different interventions.

The first type of fake news — hoaxes — relies on the rapid-click business model that is sometimes associated with clickbait. Creators of hoaxes don’t really care about the content or substance of the fake message; they’re not trying to change anyone’s beliefs or affect their behavior beyond manipulating them into clicking.

Facebook and other proprietary platforms enable this business model because it coincides with their incentives as surveillance capitalists.

A potential solution to this species of fake news is to create new platforms — like federated social networks — that do not rely on advertising revenue and, by extension, the economic incentives that force Facebook and other proprietary social networks to optimize for clicks and ignore user privacy to more effectively serve ads.

Federated social networks — like Diaspora — don’t nudge users to overshare information or structure their sites to encourage clicks and other superficial engagements that can be analyzed. Their business model doesn’t require it.

So far, federated social networks have only gained fringe acceptance. There are many possible reasons for this, but network effects and the high costs of switching create a significant barrier to overcome. Simply put, even if these alternatives offered significant advantages over Facebook, it is difficult to motivate people to leave the platform that they’ve grown accustomed to and where they’ve already built an extensive social network.

Even if federated social networks were to gain widespread adoption, they may not be well suited to act as media distributors. Federated social networks offer some improvements for news distribution because they do not rely on the economic incentives that drive Facebook to select stories — like hoaxes — that generate lots of clicks but pollute the news ecosystem.

Yet simply removing the economic incentives that drive media distribution on social networks may not be a full solution, because it doesn’t target the underlying motivations that drive a second species of fake news: propaganda.

The creation and distribution of propaganda isn’t motivated by making money through superficial engagements like clicks. Instead, the goal is to affect beliefs, preferences and attitudes by cultivating false or intentionally misleading narratives. Facebook and other platforms enable these actors because they deny that they are media companies and refuse to develop or exercise editorial expertise.

In some sense, Facebook has backed itself into a corner with the platform objectivity narrative and a refusal to admit that it’s a media company. Situations emerge where Facebook needs to act like a media company and make determinations about newsworthiness or the credibility of certain sources and articles. When Facebook starts making these decisions, a large subset of its user base falls back on Facebook’s own cultivated narrative that it’s just a platform and should not be making these decisions.

In short, people don’t really trust Facebook to be making these decisions. Nor should they.

We need a way forward that addresses both the perverse economic incentives and expertise issues that justifiably undermine the public’s faith in networked news distribution. What would it take to create a new, trusted social networking platform that combats hoaxes and propaganda while serving the public interest more generally?

The BBC provides some clues. The key ingredients for a trusted media platform are an institutional structure that supports independence and a firm commitment to cultivating and exercising editorial expertise. The BBC has these features and, as a result, it is widely judged to be a decent model of a trusted, competent public media platform. Of course, the BBC is not perfect; it does not (yet) manage a social network; and it is not the only viable model.

NPR and an array of other publicly minded organizations could focus on developing a trusted social media platform, but as we highlight below, the BBC has already cultivated the technological and social capital required for a trusted social networking platform and could serve as a possible model for what could be built.

Trust

The BBC is deeply trusted by the public. That trust is strong enough that the BBC brand alone could motivate people to switch over from Facebook. Of course, a BBC social network would not have to operate as a substitute, such that people would have to choose one or the other. Many U.K. citizens would presumably choose both. The two social networks might complement each other.

The public trusts the BBC to cover stories impartially, and also trusts that the BBC covers a wide variety of topics. This trust translates nicely to the role that the BBC would play as the operator of a social media platform. The BBC could curate news sections that continued its missions of covering a diverse set of issues while leveraging its impartiality.

The BBC’s reach and trust would allow its social media platform to help establish a baseline set of facts that make debate across ideological lines possible and push back against the development of filter bubbles and echo chambers.

Expertise

The BBC has media expertise, which it can draw from to make sound editorial judgments and create content specific to the platform — making it better-situated than both federated social networks and Facebook to create and deliver news content.

Media expertise is crucial for disseminating news in a networked environment. Truly peer-to-peer platforms, like federated social networks, may be effective for interpersonal communication, but this model does not account for the expertise required for mass-media distribution. Effective mass-media distribution requires nuanced judgments about newsworthiness as well as identifying and critiquing propaganda narratives.

Unlike Facebook, the BBC already has a seasoned staff of media experts who could — and might be willing to — focus their efforts on the broad array of judgments and decisions that attend disseminating news through a social media platform. In practice, this expertise most likely would be leveraged as an input for the BBC’s own algorithms. For example, one can imagine some randomly selected fraction of news-related content on the platform being evaluated by a BBC editor and rated for quality, with those ratings then incorporated into the machine-learning system.
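To make the idea concrete, here is a minimal sketch of how such an editor-in-the-loop pipeline could work: a random fraction of posts is sampled for editorial review, and the resulting quality ratings become labeled training data for a ranking model. Every function name, field and rating scale here is hypothetical — the BBC operates no such system, and this is only an illustration of the mechanism described above.

```python
import random

# Hypothetical sketch of an editor-in-the-loop rating pipeline.
# All names and the 0-10 rating scale are illustrative assumptions.

def sample_for_review(posts, fraction=0.05, seed=42):
    """Randomly select a fraction of posts for editorial review."""
    rng = random.Random(seed)
    k = max(1, int(len(posts) * fraction))
    return rng.sample(posts, k)

def build_training_set(posts, editor_ratings):
    """Pair each reviewed post's text with its editor quality rating,
    producing labeled examples for a downstream quality model."""
    return [(p["text"], editor_ratings[p["id"]])
            for p in posts if p["id"] in editor_ratings]

# Toy data standing in for platform content.
posts = [{"id": i, "text": f"story {i}"} for i in range(100)]

reviewed = sample_for_review(posts)              # 5% of 100 posts
ratings = {p["id"]: 7 for p in reviewed}         # stand-in editor scores
training = build_training_set(posts, ratings)    # (text, rating) pairs
```

The random sampling matters: rating only flagged or popular content would bias the labels, whereas a uniform sample gives the model a representative picture of ordinary platform content.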

Independence

The BBC’s funding model shields it from coercive economic and political pressure.

The BBC is funded through a license model, which would insulate a BBC social media platform from having to respond to the incentives that attend online advertising. Without the market pressure to model and predict user behavior for more effective advertising, a BBC-based social media platform could respect privacy rights of users — as its funding model does not depend on user data as fuel for its advertising profit engine. Thus, the BBC could plausibly claim, “We will not surveil or profile you or in any way seek to sell you or anyone else anything about you. You are our client, and you can trust us.”

This independence frees up a BBC platform to select for news stories that do more than entertain and confirm the biases of its users to increase engagement and monitoring. The BBC could tailor its algorithms to promote stories that optimize for other values besides entertainment. There could still be space for news stories that entertain, but the motivations underlying news story selections could also include commitments to other core public values like diversity of information, an informed public and nonfragmented space for public debate.

The license model also insulates the BBC from unwarranted government interference. Because license fees are paid directly by the public and not funded through taxation, the BBC is not necessarily responsive to government demands about how to report events or what events require coverage.

Conclusion

The public sphere is fragmented, with partisan (or, at worst, intentionally deceptive) political outlets supplying a substantial proportion of news content. Facebook — thus far — has not embraced a traditional editorial role to critique false and misleading narratives.

Worse still, Facebook’s current distribution methods either fail to address this problem or exacerbate it by serving users false content that confirms their suspicions.

Some commentators have recognized the need for noncommercial platform alternatives, while others have recognized the importance of funding public media content.

A BBC platform combines these ideas to create a more robust solution. A trusted social network platform strips away the perverse economic incentives of surveillance capitalism while providing much-needed editorial expertise for news creation and distribution on its platform.

Importantly, this doesn’t need to be done by the BBC. Other organizations that are financially independent and have — or are at least willing to cultivate — editorial expertise are in a good spot to develop a trusted social networking platform.

We need our platforms to put people and democratic society ahead of cheap profits. Creating and developing a trusted social network platform does just that.

Brett Frischmann is a professor at Cardozo Law School at Yeshiva University and the Microsoft Visiting Professor of Information and Technology Policy at Princeton University’s Center for Information and Technology Policy. This fall, he will join Villanova University as the Charles Widger Endowed University Professor in Law, Business and Economics. He is an affiliated scholar of the Center for Internet and Society at Stanford Law School and a trustee for the Nexa Center for Internet & Society at Politecnico di Torino, Italy. Reach him @BrettFrischmann.

Mark Verstraete is a privacy and free expression postdoctoral research fellow at the University of Arizona James E. Rogers College of Law and a graduate of Harvard Law School. Reach him @markverstraete.
