Members of the Myanmar military have systematically used Facebook as a tool in the government’s campaign of ethnic cleansing against Myanmar’s Rohingya Muslim minority, according to a remarkable piece of reporting by the New York Times on Oct. 15. The Times writes that the military harnessed Facebook over a period of years to disseminate hate propaganda, false news and inflammatory posts. The story adds to what is known about the horrors of the ongoing violence in Myanmar, but it should also complicate the ongoing debate about Facebook’s role in, and responsibility for, spreading hate and exacerbating conflict in Myanmar and other developing countries.

Context: The Atrocities in Myanmar

The Times report comes in the context of growing calls for accountability for the campaign of violence inflicted on the Rohingya. On Sep. 12, the U.N.-commissioned independent Fact Finding Mission (FFM) released its final report, which called for members of the Myanmar military to be investigated and prosecuted for genocide, crimes against humanity and war crimes. The U.S. State Department also released a report documenting evidence that the military’s operations were “well-planned and coordinated.” As these reports show, the atrocities in Myanmar have become one of the world’s most pressing human rights crises. The FFM concludes that the exact number of casualties from the “widespread, systematic and brutal” killings may never be known, but is more than 10,000. The FFM’s report, which runs to more than 400 pages, contains devastating accounts of wide-ranging crimes against humanity, including torture, rape, persecution and enslavement. Hundreds of thousands of people remain displaced.

Alongside this developing body of evidence and consensus about the crimes that have been committed in Myanmar, debate has raged over Facebook’s role in these events. In recent months, Facebook has taken steps to accept its role and responsibility. In a surprising concession before the Senate intelligence committee in September 2018, Chief Operating Officer Sheryl Sandberg even accepted that Facebook may have a legal obligation to take down accounts that incentivize violence in countries like Myanmar. Sandberg called the situation “devastating” and acknowledged that the company needed to do more, but highlighted that Facebook had put increased resources behind being able to review content in Burmese. Shortly before the hearing, Facebook announced that it had taken the unusual step of removing a number of pages and accounts linked to the Myanmar military for “coordinated inauthentic behaviour” and in order to prevent them from “further inflam[ing] ethnic and religious tensions.”

The FFM’s report did acknowledge that Facebook’s responsiveness had “improved in recent months” but found that overall the company’s response had been “slow and ineffective.” It called for an independent examination of the extent to which Facebook posts and messages had increased discrimination and violence.

The New York Times report

Paul Mozur’s recent reporting in the Times recounts how as many as 700 people worked shifts in a secretive operation started by Myanmar’s military several years ago. The military personnel developed large followings for fake pages and accounts with no visible connection to the military, which they then flooded with hate propaganda and disinformation. The pages often aimed to stoke ethnic tensions and generate feelings of vulnerability, encouraging people to turn to the military for protection. Although this particular campaign is half a decade old, it continues a long practice of Myanmar’s military engaging in psychological warfare, employing techniques learned by officers sent to Russia for training.

Following the publication of the Times story, Facebook announced it was removing more “seemingly independent entertainment, beauty and informational Pages” that were being used to push military propaganda. Altogether, the pages had about 1.35 million followers.

Hate speech and social media in the context of mass atrocities

The extent to which hate speech and propaganda can be said to factually and legally cause mass atrocities is a complicated issue. Jonathan Leader Maynard and Susan Benesch have observed that it is “one of the most underdeveloped components of genocide and atrocity prevention, in both theory and practice”—and that’s before social media enters the picture. As Zeynep Tufekci tweeted years ago, Myanmar may well be the first social media-fueled ethnic cleansing. International law has not yet begun to grapple with how to account for the role of social media in unravelling and imposing responsibility for international crimes.

There is a long road ahead if international law is to do so now. Challenges include evidence-gathering when fact-finders are refused access by the government, issues concerning the International Criminal Court’s jurisdiction that may result in only partial accountability, and the inherent conceptual difficulty of line-drawing when finding the nexus between speech and violence. The case law on speech in the context of genocide has developed in a piecemeal fashion, resulting in inconsistencies and incoherence. The road to accountability in Myanmar may offer an opportunity to develop and clarify these rules, as well as to wrestle with how social media fits in. The FFM report provides a starting place, concluding that there is “no doubt that the prevalence of hate speech in Myanmar significantly contributed to increased tension and a climate in which individuals and groups may become more receptive to incitement and calls for violence” and “[t]he role of social media is significant.”

Complicating the narrative

The FFM has called for more work to be done to understand the effects of Facebook on the spread of violence in Myanmar. Reporting shows that Facebook was an “absentee landlord” in Myanmar. It ignored warnings about the abuse of its platform in the country for years, engaged poorly with civil society actors and generally—as the company itself has admitted—has been far too slow to act in the context of horrific crimes. For this reason, observers have likened Facebook to a match or tinder in the uniquely explosive environment of Myanmar.

But this analysis may need updating in light of the new evidence that the spread of anti-Rohingya misinformation across Facebook was not merely organic, but the result of systematic and covert exploitation by the military. As noted by Daphne Keller, the Director of Intermediary Liability at Stanford’s Center for Internet and Society and former Associate General Counsel at Google, responding to problems resulting from innate structural flaws of social media requires a “different analysis and response” than does a calculated exploit by bad actors.

In these early days of trying to untangle the role of Facebook in the horrors inflicted against the Rohingya minority, it’s worth carefully examining the issues raised by the FFM report and the NYT reporting.

These are enormously difficult issues, and the important effort of getting answers will require a large amount of work and cooperation by Facebook itself as well as outside researchers.

Even if Facebook cooperates, there is also the question of what liability the company might face for its role in enabling mass atrocities. As a private company, Facebook is not subject to international criminal liability. Yet calls have grown louder for the platform to face some sort of penalty. During Sandberg’s testimony before the Senate in September, Senator Mark Warner seized on her acknowledgment of legal obligation in Myanmar, stating that social media companies that had not acted responsibly should be subject to sanction. But it is unclear what this responsibility or sanction would look like within the United States—much less the world.