We all know that social media is basically Internet Relay Chat (IRC) for the twenty-first century: instead of going to a website with updated content, the user watches new items scroll across the screen, keeping him constantly entertained. It is "push" content, where new material appears without user interaction, instead of "pull," where the user must select content or content areas to see what updates have been posted.

Social media sites face a difficult reality: users have different interests, and some users are abusive. Over the past decade, social media sites have attempted to equate the two, so that when values clash, the minority view -- whichever opinion is shared by the fewest people -- is removed. This has accelerated with recent rule changes on sites like Reddit and Twitter, which are trying to internally legislate away content that upsets people.

Wired tells us that Twitter has implemented new filtering rules designed to weed out controversial content:

Last week the company disabled features of actor Rose McGowan's account at a crucial moment amid the Harvey Weinstein sexual misconduct scandal. Groups of women boycotted the site for a day in protest. Twitter's typical response to complaints about hate and harassment is to affirm its commitment to transparency. The new plans stop short of sweeping measures, such as banning pornography or specific groups like Nazis. Rather, they offer expanded features like allowing observers of unwanted sexual advances—as well as victims—to report them, and expanded definitions, such as including "creep shots" and hidden camera content under the definition of "nonconsensual nudity." The company also plans to hide hate symbols behind a "sensitive image" warning, though it has not yet defined what qualifies as a hate symbol. Twitter also says it will take unspecified enforcement actions against "organizations that use/have historically used violence as a means to advance their cause."

The leaked email to Twitter's "Trust and Safety" council includes new rules against content that "glorifies" violence:

Tweets that glorify violence (new)* We already take enforcement action against direct violent threats (“I’m going to kill you”), vague violent threats (“Someone should kill you”) and wishes/hopes of serious physical harm, death, or disease (“I hope someone kills you”). Moving forward, we will also take action against content that glorifies (“Praise be to for shooting up. He’s a hero!”) and/or condones (“Murdering makes sense. That way they won’t be a drain on social services”).

And so we have entered a dangerous grey area where any opinion that can be read as approving of a past violent act will be censored, at least if it conflicts with the majority opinion on social media. Twitter makes it clear that this will be used specifically against certain groups associated with past violence, but again, this is a matter of interpretation, and it can and will be selectively enforced:

Violent groups (new)* We are still defining the exact scope of what will be covered by this policy. At a high level, we will take enforcement action against organizations that use/have historically used violence as a means to advance their cause. More details to come here as well (including insight into the factors we will consider to identify such groups).

In human history, every group has at some point used violence to advance or defend itself, mainly because violence is what resolves conflicts when reasoning them out does not work. Whom do we think Twitter will enforce against, the Communist Party or the Right wing? Since Twitter shows a generalized bias in favor of the Left, it is likely that the Left will see no such enforcement.

As if to prove this, Reddit recently launched its own version of the anti-glorification policy and promptly applied it unevenly in order to defend Leftists and Leftist groups from any recognition of their violence. Here is the new Reddit anti-glorification policy:

Do not post content that encourages, glorifies, incites, or calls for violence or physical harm against an individual or a group of people; likewise, do not post content that glorifies or encourages the abuse of animals. We understand there are sometimes reasons to post violent content (e.g., educational, newsworthy, artistic, satire, documentary, etc.) so if you’re going to post something violent in nature that does not violate these terms, ensure you provide context to the viewer so the reason for posting is clear.

As users, including some on the Left, immediately noticed, this policy is even more vague than the previous one, suggesting a pretext for selective enforcement.

Reddit then proved them correct by removing a post pointing out that Leftist content calling for violence is abundant on Reddit, and that the admins do nothing about it.

The point worth taking here is that these statements violated Reddit's old anti-violence rules but, presumably because they were made by Leftists, were not removed, while Rightist comments were.

We can expect nothing but the same in the future from Reddit and Twitter: one-sided enforcement to advance their own ideological agenda, or that of the people they employ, who presumably are not the winners of the dot-com bubble -- not the people getting paid millions to innovate, but those getting paid entry-level salaries to clean up the mess.

If social media had any foresight, it would get out of this game before it discredits itself as biased.