Photo: Francois Mori (AP)

In a blog post yesterday evening, social media giant Facebook announced several changes meant to mitigate the effects of misinformation and hate speech causing real-world violence, including the formation of a Strategic Response team meant to stop a repeat of what happened in Myanmar.

Myanmar, where less than 1 percent of the population had internet access as recently as 2014, became a Gods Must Be Crazy-like testbed for Facebook. For many in the region, Facebook is their one and only means of getting information online. Myanmar is also, in Facebook’s own words, “the only country in the world with a significant online presence that hasn’t standardized on Unicode,” meaning many of the artificial intelligence tools meant to weed out harmful content simply didn’t work there. Couple that with a lack of native speakers acting as moderators, weak civil liberties, and racial tensions, and it’s little wonder Facebook played a starring role in stirring up an ongoing genocide there.

Similar incidents have cropped up in Sri Lanka, India, and elsewhere. The company promised last November, as it always seems to be doing lately, to do better.

The first fruit of those labors is the Strategic Response team, a group of individuals who, according to reporting by NBC, include “former diplomats, human rights researchers, a former military intelligence officer and one person who advised oil giant BP on geopolitical risk.” Facebook did not share how many people actually make up the team, but a member of the Myanmar Tech Accountability Network who has worked with it claimed they number fewer than ten. Facebook first became aware of its complicity in Myanmar’s Rohingya genocide at least five years ago.

(Speaking to NBC, the head of the Strategic Response team, Rosa Birch, said, “there’s a lot of similarities there between government and military and Facebook,” an alarming statement at a time when Facebook is also working on minting its own money.)

Surely Facebook is doing more than putting together some perfunctory advisory committee a la its ridiculous PR plays at election security “war rooms,” right? Barely. The company’s top agenda item is that it will be “removing bad actors and bad content.” Some version of that phrase has appeared in dozens of press releases from Facebook—second only to maybe “continuing to improve”—and doesn’t inspire much confidence given the seriousness of—and I can’t stress this enough—literal genocide.

There’s some cause for optimism, in that Facebook’s (self-reported) stats on catching bad content globally went up by around 8 percent; Facebook is also in the process of staffing up on Burmese speakers and transitioning Myanmar to Unicode. Hand in glove with actually enforcing its own long-standing policies, the company claims it’s continuing to limit the distribution of content that approaches what’s off limits. The reason: internal research suggested that “borderline” content was being rewarded with increased engagement, incentivizing exactly the wrong sort of behavior.

The only discrete change Facebook deigned to tell users it’s willing to make, though, is “adding friction” to Facebook Messenger by limiting the number of times a message can be forwarded. Chain messages, including those containing images and memes meant to inspire violence, were at the heart of the issues in Myanmar, Sri Lanka, and, let’s be honest, probably America too. According to TechCrunch (Facebook didn’t even bother to quantify it in its own post), the limit is five people, the same limit it places on WhatsApp message forwarding in India.

Forgive me if none of these solutions seem up to the task of undoing the damage Facebook’s misinformation machine has already wrought abroad. As an unrelated reminder, here’s the link for deleting your Facebook account, if you still have one and feel so inclined.