Social media giant Twitter has updated its “hateful conduct” rules to further prohibit tweets that target people on the basis of age or disability.

In a recent blog post, Twitter announced a number of changes to its “hateful conduct” rules covering tweets that include language that “dehumanizes on the basis of age, disability or disease.”

Twitter states in the blog post that tweets similar to the examples it shared will be removed when they’re reported.

The firm explained why it chose those specific groups to protect first, writing: “In 2018, we asked for feedback to ensure we considered a wide range of perspectives and to hear directly from the different communities and cultures who use Twitter around the globe. In two weeks, we received more than 8,000 responses from people located in more than 30 countries.”

Twitter stated that the most consistent feedback it received relating to hateful conduct included:

Clearer language — Across languages, people believed the proposed change could be improved by providing more details, examples of violations, and explanations for when and how context is considered. We incorporated this feedback when refining this rule, and also made sure that we provided additional detail and clarity across all our rules.

Narrow down what’s considered — Respondents said that “identifiable groups” was too broad, and they should be allowed to engage with political groups, hate groups, and other non-marginalized groups with this type of language. Many people wanted to “call out hate groups in any way, any time, without fear.” In other instances, people wanted to be able to refer to fans, friends and followers in endearing terms, such as “kittens” and “monsters.”

Consistent enforcement — Many people raised concerns about our ability to enforce our rules fairly and consistently, so we developed a longer, more in-depth training process with our teams to make sure they were better informed when reviewing reports. For this update it was especially important to spend time reviewing examples of what could potentially go against this rule, due to the shift we outlined earlier.

The platform stated that all of these changes build on its work with the Trust and Safety Council, which was designed to reduce “toxicity” on Twitter’s platform. One Twitter user questioned whether the phrase “Ok boomer,” a popular phrase aimed at members of the baby boomer generation, would be banned as a result of the changes:

I jumped the gun. “OK BOOMER” is safe. They should all just fck off back to Boomerbook! pic.twitter.com/Xt0S7MJsoU — Berlusconi Fever (@MechaSilvio) March 6, 2020