Twitter has been trying to make its platform a better place for users. But in doing so, it has built tools that adversely affect conservative users. This appears to be an unintended consequence of attempts to identify and punish bad behavior. But the rationale matters less than the outcome, which is viewpoint discrimination.

CEO Jack Dorsey has admitted they have a problem.

“We need to constantly show that we are not adding our own bias, which I fully admit is more left-leaning,” he added. “And I think it’s important to articulate our own bias and to share it with people so that people understand us. But we need to remove our bias from how we act and our policies and our enforcement.”

Twitter must fix this. The following fact sheet outlines the problem, the damage it is doing, the steps necessary to rectify it, and the potential consequences for Twitter if it does not.

Twitter & Viewpoint Discrimination

Fact Sheet

Social media has become one of the most popular ways for people to communicate and stay informed. Twitter calls itself “a global platform for public self-expression and conversation in real time. Twitter allows people to consume, create, distribute and discover content and has democratized content creation and distribution.”

Twitter Basics

Tweets

A tweet is a message of 280 characters or fewer that may contain pictures or video.

Timeline

A user’s timeline is a ticker-style collection of tweets from other people that is constantly updating.

Hashtags

#Hashtags allow users to collect and view all tweets that use the word or phrase in the hashtag.

Follow

Following other users puts their tweets into your timeline.

Mute and Block

Muting another user stops their tweets from showing in your timeline. Blocking another user means they can’t see your tweets or put any tweets in your timeline.

Trolls and Bots

Trolls are users who stir up controversy by mocking and attacking content and other users. Bots are accounts that automatically create tweets on certain topics.

Key Differentiator

Twitter allows direct access to prominent individuals and organizations in a very public way. Tweets that include a username like @realDonaldTrump are added to the timeline of that account and are visible to the people who follow it. This allows both praise and criticism to be delivered in a virtual public square. It has generated much debate, good and bad, and also the opportunity for harassment and abuse to occur.

Health of the Platform

The balance between free expression and direct moderation of commentary has been a difficult challenge for Twitter. The open and largely unregulated ability to speak to anyone on any topic, anonymously if desired, appeals greatly to many users. Others dislike the sometimes unruly and rude nature of some conversations.

Twitter has attempted to define rules for acceptable behavior and mechanisms to enforce them fairly, using a mix of algorithms and human oversight. The results have been mixed at best, and neither those who favor free speech nor those who favor more civility have been pleased. In addition, some of the methods used have created viewpoint discrimination, which is unacceptable.

Twitter has developed multiple tools designed to create a better user experience. While the goal may have been to block spam, bots, and abusive accounts, the result has been disproportionately felt by users with conservative political views. This has been called shadow banning, and Twitter claims it does not do this purposely. But the effect of limiting the reach of conservative accounts is real, even if unintentional.

Quality Filter Discrimination (QFD)

The main culprit is the Quality Filter, which uses a number of metrics to determine whether an account should be deemed low quality or a bad-faith actor. When this rating occurs, the visibility of tweets from these accounts is severely limited. Some of the factors Twitter uses make sense:

Specific account properties that indicate authenticity (e.g. whether you have a confirmed email address, how recently your account was created, whether you uploaded a profile image, etc.)

But some have had unintended consequences, including:

What actions you take on Twitter (e.g. who you follow, retweet, etc.)

How other accounts interact with you (e.g. who mutes, follows, retweets, blocks you, etc.)

The use of “who blocks you” as a metric has allowed liberal groups to institute a Heckler’s Veto by attacking large numbers of conservative accounts with reports of violations or abuse and, most damagingly, by the use of mass block lists.
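The mechanics of this can be illustrated with a small sketch. The metric names, weights, and threshold below are illustrative assumptions, not Twitter’s actual algorithm; the point is only that any score that subtracts for each block received can be driven below a “low quality” threshold by coordinated mass blocking, regardless of the account’s authenticity signals.

```python
# Hypothetical quality score - the weights and threshold are assumptions
# chosen for illustration, not Twitter's real formula.

def quality_score(confirmed_email, has_profile_image, blocks_received):
    score = 0.0
    if confirmed_email:
        score += 1.0              # authenticity signals raise the score
    if has_profile_image:
        score += 1.0
    score -= 0.01 * blocks_received  # every block received lowers it
    return score

LOW_QUALITY_THRESHOLD = 0.5

# An authentic account with an organic number of blocks stays visible...
organic = quality_score(True, True, 20)
# ...but the same account hit by a 10,000-name block list does not.
mass_blocked = quality_score(True, True, 10_000)

print(organic > LOW_QUALITY_THRESHOLD)       # True
print(mass_blocked > LOW_QUALITY_THRESHOLD)  # False
```

Under these assumed weights, no amount of authentic behavior by the account can outweigh a sufficiently large coordinated block campaign.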

Block Lists

In 2015, Twitter introduced the ability to mass-block large numbers of accounts using lists created by other users. Groups on the political left have since generated massive lists of conservative users. These lists have been used extensively to generate large numbers of bad-quality marks against conservative accounts, most of which the user applying the block list has never even heard of. This phenomenon has been known for years and written about by liberal researchers, including this paper from UC Berkeley.

Guilt by Association

Rating accounts based on their interests and connections creates viewpoint discrimination.

Using “who follows you,” “who you follow,” and “who you retweet” creates guilt by association. It takes a small number of accounts with a low quality rating and extends that shadow to others who interact with them.

This generates a dynamic that constantly increases the number of affected accounts based simply on an affinity for conservative ideas.
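The dynamic described above can be sketched as a simple fixed-point propagation over a follow graph. The graph, the flagging rule, and the starting flag below are illustrative assumptions, not Twitter’s actual system; they show only how a single low-quality rating can spread through association links.

```python
# Hypothetical sketch: guilt-by-association spreading through a follow
# graph. Graph and rule are assumptions for illustration only.

# Each account lists who it follows: B follows A, C follows B, D follows C.
follows = {
    "A": [],
    "B": ["A"],
    "C": ["B"],
    "D": ["C"],
}

flagged = {"A"}  # start with a single low-quality-rated account

# Assumed rule: following any flagged account gets you flagged on the
# next pass. Repeat until no new accounts are flagged.
changed = True
while changed:
    changed = False
    for user, followed in follows.items():
        if user not in flagged and any(f in flagged for f in followed):
            flagged.add(user)
            changed = True

print(sorted(flagged))  # ['A', 'B', 'C', 'D'] - one flag shadows the chain
```

Even in this four-account toy example, one flagged account ends up shadowing everyone connected to it, which is the amplifying dynamic the text describes.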

Again, while the intent may have been to improve user experience, some of these metrics had an easily predictable outcome of disadvantaging right-of-center users. There is a well-documented tendency of left-of-center users to use blocking and reporting tools considerably more than the right, up to three times as much. This has created a situation where Quality Filter Discrimination (QFD) is de facto viewpoint discrimination.

Damages

Twitter is not a paid service, but has become ubiquitous enough that a robust Twitter presence is a necessity for public organizations and individuals.

Accounts placed under a QFD ban have suffered a loss of visibility and have wasted time and resources applied to using Twitter as a means of promoting their ideas.

Republican politicians given QFD bans have been limited in their ability to reach their constituents, unfairly advantaging their Democrat opponents.

In a Frankenstein’s Monster effect, the block list tool Twitter built to satisfy some users’ desire to insulate themselves from content they dislike is now being used against Twitter itself. A user named Shannon Coulter (@shannoncoulter) has created a block list of all Fortune 500 corporations with a Twitter presence and is quite successfully calling for others to use it to pressure Twitter to ban Alex Jones.

Remedies

Instituting the following changes is a needed start to ending viewpoint discrimination:

Stop the use of blocks generated by block lists as a metric in the Quality Filter

Stop the use of abuse reports as a metric or account for the disparity in their use by partisans

Stop the use of guilt by association metrics in the Quality Filter

Turn off the QFD filter by default; allow users to manually activate customizable filters.

Add conservative voices to the Trust & Safety Council

Potential Consequences

Twitter is aware of QFD and can no longer honestly claim it runs a content-neutral platform. Statements by Twitter CEO Jack Dorsey saying there is no shadow ban may be called into question based on these facts.

House Majority Leader Kevin McCarthy has called for Dorsey to testify to Congress about this issue and while conversations have begun, there are major questions about Twitter’s commitment to fix this.

Twitter opens itself to a number of detrimental actions if it fails to enact remedies to these problems.