On March 30, 2019, Facebook founder and CEO Mark Zuckerberg penned an op-ed for The Washington Post calling on tech giants, regulators, and governments around the world to protect users from “harmful content” and to better safeguard user data.

Zuckerberg writes…

“I believe we need a more active role for governments and regulators. By updating the rules for the Internet, we can preserve what’s best about it — the freedom for people to express themselves and for entrepreneurs to build new things — while also protecting society from broader harms.

“From what I’ve learned, I believe we need new regulation in four areas: harmful content, election integrity, privacy and data portability.”

This comes in the wake of the Christchurch, New Zealand shooting, in which the shooter livestreamed the mosque attack on Facebook, footage that was quickly disseminated across the internet. The response from governments and tech giants has largely been to censor parts of the internet in some regions and to penalize users for sharing the shooter’s manifesto and video, in some cases going as far as arresting and jailing people for doing so.

Zuckerberg goes on to say…

“First, harmful content. Facebook gives everyone a way to use their voice, and that creates real benefits — from sharing experiences to growing movements. As part of this, we have a responsibility to keep people safe on our services. That means deciding what counts as terrorist propaganda, hate speech and more. We continually review our policies with experts, but at our scale we’ll always make mistakes and decisions that people disagree with.

“Lawmakers often tell me we have too much power over speech, and frankly I agree. I’ve come to believe that we shouldn’t make so many important decisions about speech on our own. So we’re creating an independent body so people can appeal our decisions. We’re also working with governments, including French officials, on ensuring the effectiveness of content review systems.”

Much like Microsoft, which recently proposed creating a centralized regulatory body to filter content, Zuckerberg also suggests a centralized body to filter content, writing…

“Internet companies should be accountable for enforcing standards on harmful content. It’s impossible to remove all harmful content from the Internet, but when people use dozens of different sharing services — all with their own policies and processes — we need a more standardized approach.

“One idea is for third-party bodies to set standards governing the distribution of harmful content and to measure companies against those standards. Regulation could set baselines for what’s prohibited and require companies to build systems for keeping harmful content to a bare minimum.”

This comes just weeks after Facebook announced that it would be banning “white nationalism” and “white separatism” content on Facebook and Instagram. On the Facebook newsroom page, the company made the policy clear…

“Today we’re announcing a ban on praise, support and representation of white nationalism and white separatism on Facebook and Instagram, which we’ll start enforcing next week. It’s clear that these concepts are deeply linked to organized hate groups and have no place on our services.”

The thing is, these sorts of measures are exactly what the shooter was hoping for. In his manifesto he described wanting tech companies, media organizations, politicians, and lawmakers to take steps that would escalate tensions, sow division, and eventually create so much animosity and discord that a civil war breaks out.

These companies and media organizations have each carried out what the shooter wanted, in exactly the way he wanted it. The only thing that hasn’t happened yet is stricter gun legislation in the United States.

Nevertheless, we continue to see big tech companies pushing to control what sort of content you have access to and how you can access it. Right now they’re posturing through pontification, but it’s only a matter of time before that posturing hardens into standard policy or law.