Facebook has produced a report summarizing feedback it’s taken in on its idea of establishing a content oversight board to help arbitrate on moderation decisions.

Aka the ‘supreme court of Facebook’ concept first discussed by founder Mark Zuckerberg last year, when he told Vox:

[O]ver the long term, what I’d really like to get to is an independent appeal. So maybe folks at Facebook make the first decision based on the community standards that are outlined, and then people can get a second opinion. You can imagine some sort of structure, almost like a Supreme Court, that is made up of independent folks who don’t work for Facebook, who ultimately make the final judgment call on what should be acceptable speech in a community that reflects the social norms and values of people all around the world.

Facebook has since suggested the oversight board will be up and running later this year. And has just wheeled out its global head of policy and spin for a European PR push to convince regional governments to give it room for self-regulation 2.0, rather than slapping it with broadcast-style regulations.

The latest report, which follows a draft charter unveiled in January, rounds up input fed to Facebook via six “in-depth” workshops and 22 roundtables convened by Facebook and held in locations of its choosing around the world.

In all, Facebook says the events were attended by 650+ people from 88 different countries — though it further qualifies that by saying it had “personal discussions” with more than 250 people and received more than 1,200 public consultation submissions.

“In each of these engagements, the questions outlined in the draft charter led to thoughtful discussions with global perspectives, pushing us to consider multiple angles for how this board could function and be designed,” Facebook writes.

It goes without saying that this input represents a minuscule fraction of the actual ‘population’ of Facebook’s eponymous platform, which now exceeds 2.2BN accounts (an unknown portion of which will be fake/duplicates), while its operations stretch to more than double the number of markets represented by individuals at the events.

The feedback exercise — as indeed the concept of the board itself — is inevitably an exercise in opinion abstraction. Which gives Facebook leeway to shape the output as it prefers. (And, indeed, the full report notes that “some found this public consultation ‘not nearly iterative enough, nor transparent enough, to provide any legitimacy’ to the process of creating the Board”.)

In a blog post providing its spin on the “global feedback and input”, Facebook culls three “general themes” it claims emerged from the various discussions and submissions — namely that:

People want a board that exercises independent judgment — not judgment influenced by Facebook management, governments or third parties, writing: “The board will need a strong foundation for its decision-making, a set of higher-order principles — informed by free expression and international human rights law — that it can refer to when prioritizing values like safety and voice, privacy and equality”. Though the full report flags up the challenge of ensuring the sought-for independence, and it’s not clear Facebook will be able to create a structure that can stand apart from its own company or indeed other lobbyists

How the board will select and hear cases, deliberate together, come to a decision and communicate its recommendations both to Facebook and the public are key considerations — though those vital details remain tbc. “In making its decisions, the board may need to consult experts with specific cultural knowledge, technical expertise and an understanding of content moderation,” Facebook suggests, implying the boundaries of the board are unlikely to be firmly fixed

People also want a board that’s “as diverse as the many people on Facebook and Instagram” — the problem being that’s clearly impossible, given the planet-spanning size of Facebook’s platforms. Another desire Facebook highlights is for the board to be able to encourage it to make “better, more transparent decisions”. The need for board decisions (and indeed the decisions Facebook takes when setting up the board) to be transparent emerges as a major theme in the report. In terms of the board’s make-up, Facebook says it should comprise experts with different backgrounds, different disciplines and different viewpoints — “who can all represent the interests of a global community”. Though there will clearly be differing views on how, or even whether, that’s possible to achieve; and therefore questions over how a 40-odd member body, which will likely rarely sit in plenary, can plausibly act as a prism for Facebook’s user-base

The report is worth reading in full to get a sense of the broad spectrum of governance questions and conundrums Facebook is here wading into.

If, as it very much looks, this is a Facebook-configured exercise in blame spreading for the problems its platform hosts, the surface area for disagreement and dispute will clearly be massive — and from the company’s point of view that already looks like a win, given how, since 2016, Facebook (and Zuckerberg) have been the conduit for so much public and political anger linked to the spreading and accelerating of harmful online content.

Differing opinions will also provide cover for Facebook to justify starting “narrow”. Which it has said it will do with the board, aiming to have something up and running by the end of this year. But that just means it’ll be managing expectations of how little actual oversight will flow right from the very start.

The report also shows that Facebook’s claimed ‘listening ear’ for a “global perspective” has some very hard limits.

So while those involved in the consultation are reported to have repeatedly suggested the oversight board should not just be limited to content judgement — but should also be able to make binding decisions related to things like Facebook’s newsfeed algorithm or wider use of AI by the company — Facebook works to shut those suggestions down, underscoring the scope of the oversight will be limited to content.

“The subtitle of the Draft Charter — “An Oversight Board for Content Decisions” — made clear that this body would focus specifically on content. In this regard, Facebook has been relatively clear about the Board’s scope and remit,” it writes. “However, throughout the consultation period, interlocutors often proposed that the Board hear a wide range of controversial and emerging issues: newsfeed ranking, data privacy, issues of local law, artificial intelligence, advertising policies, and so on.”

It goes on to admit that “the question persisted: should the Board be restricted to content decisions only, without much real influence over policy?” — before picking a selection of responses that appear intended to fuzz the issue, allowing it to position itself as seeking a reasoned middle ground.

“In the end, balance will be needed; Facebook will need to resolve tensions between minimalist and maximalist visions of the Board,” it concludes. “Above all, it will have to demonstrate that the Oversight Board — as an enterprise worth doing — adds value, is relevant, and represents a step forward from content governance as it stands today.”

Sample cases the report suggests the board could review — as suggested by participants in Facebook’s consultation — include:

A user shared a list of men working in academia, who were accused of engaging in inappropriate behavior and/or abuse, including unwanted sexual advances;

A Page that commonly uses memes and other forms of satire shared posts that used discriminatory remarks to describe a particular demographic group in India;

A candidate for office made strong, disparaging remarks to an unknown passerby regarding their gender identity and livestreamed the interaction. Other users reported this due to safety concerns for the latter person;

A government official suggested that a local minority group needed to be cautious, comparing that group’s behavior to that of other groups that have faced genocide

So, again, it’s easy to see the kinds of controversies and indeed criticisms that individuals sitting on Facebook’s board will be opening themselves up to — whichever way their decisions fall.

A content review board that will inevitably remain linked to (if not also reimbursed via) the company that establishes it, and will not be granted powers to set wider Facebook policy — but will instead be tasked with the impossible job of trying to please all of the Facebook users (and critics) all of the time — does certainly risk looking like Facebook’s stooge; a conduit for channeling dirty and political content problems that have the potential to go viral and threaten its continued ability to monetize the stuff that’s uploaded to its platforms.

Facebook’s preferred choice of phrase to describe its users — “global community” — is a tellingly flat one in this regard.

The company conspicuously avoids talk of communities, plural — instead the closest we get here is a claim that its selective consultation exercise is “ensuring a global perspective”, as if a singular essence can somehow be distilled from a non-representative sample of human opinion — when in fact the stuff that flows across its platforms is quite the opposite; multitudes of perspectives from individuals and communities whose shared use of Facebook does not an emergent ‘global community’ make.

This is why Facebook has struggled to impose a single set of ‘community standards’ across a platform that spans so many contexts; a one-size-fits all approach very clearly doesn’t fit.

Yet it’s not at all clear how Facebook creating yet another layer of content review changes anything much for that challenge — unless the oversight body is mostly intended to act as a human shield for the company itself, putting a firewall between it and certain highly controversial content; aka Facebook’s supreme court of taking the blame on its behalf.

Just one of the difficult content moderation issues embedded in the businesses of sociotechnical, planet-spanning social media platform giants like Facebook — hate speech — defies a top-down ‘global’ fix.

As Evelyn Douek wrote last year vis-à-vis hate speech on the Lawfare blog, after Zuckerberg had floated the idea of a governance structure for online speech: “Even if it were possible to draw clear jurisdictional lines and create robust rules for what constitutes hate speech in countries across the globe, this is only the beginning of the problem: within each jurisdiction, hate speech is deeply context-dependent… This context dependence presents a practically insuperable problem for a platform with over 2 billion users uploading vast amounts of material every second.”

A cynic would say Facebook knows it can’t fix planet-scale content moderation and still turn a profit. So it needs a way to distract attention and shift blame.

If it can get enough outsiders to buy into its oversight board — allowing it to pass off the oxymoron of “global governance”, via whatever self-styled structure it allows to emerge from these self-regulatory seeds — the company’s hope must be that the device also works as a bolster against political pressure.

Both over particular problem/controversial content, and also as a vehicle to shrink the space for governments to regulate Facebook.

In a video discussion also embedded in Facebook’s blog post — in which Zuckerberg couches the oversight board project as “a big experiment that we hope can pioneer a new model for the governance of speech on the Internet” — the Facebook founder also makes reference to calls he’s made for more regulation of the Internet. As he does so he immediately qualifies the statement by blending state regulation with industry self-regulation — saying the kind of regulation he’s asking for is “in some cases by democratic process, in other cases through independent industry process”.

So Zuckerberg is making a clear pitch to position Facebook as above the rule of nation state law — and setting up a “global governance” layer is the self-serving vehicle of choice for the company to try and overtake democracy.

Even if Facebook’s oversight board’s structure is so cunningly fashioned as to present to a rationally minded individual as, in some senses, ‘independent’ from Facebook, its entire being and function will remain dependent on Facebook’s continued existence.

Whereas if individual markets impose their own statutory regulations on Internet platforms, based on democratic and societal principles, Facebook will have no control over the rules they impose, direct or otherwise — with uncontrolled compliance costs falling on its business.

It’s easy to see which model sits most easily with Zuckerberg the businessman — a man who has also demonstrated he will not be held personally accountable for what happens on his platform.

Not when he’s asked by one (non-US) parliament, nor even by representatives from nine parliaments — all keen to discuss the societal fallouts of political disinformation and hate speech spread and accelerated on Facebook.

Turns out that’s not the kind of ‘global perspective’ Facebook wants to sell you.