Call it the Alex Jones effect.

Twitter announced on Tuesday that it will update its rules to prohibit "dehumanizing language" on the platform. While the company is still finalizing the policy through user feedback and internal review, the initiative could add a much-needed layer of clarity to Twitter's sometimes opaque, narrowly applied rules on violations.

"We want to expand our hateful conduct policy to include content that dehumanizes others based on their membership in an identifiable group, even when the material does not include a direct target," Twitter's Vijaya Gadde and Del Harvey wrote in a blog post.

Our hateful conduct policy is expanding to address dehumanizing language and how it can lead to real-world harm. The Twitter Rules should be easier to understand so we’re trying something new and asking you to be part of the development process. Read more and submit feedback. — Twitter Safety (@TwitterSafety) September 25, 2018

Twitter defines dehumanizing language as speech that denies people their human qualities, such as through malicious comparisons to animals or objects. It cites research linking dehumanizing speech to real-world violence.

The policy has reportedly been in the works for the past three months, and it's part of the company's larger initiative to "improve conversational health" on the platform. Twitter is still actively working to define what "conversational health" actually means. But it has already taken several proactive steps that have made a marked difference in reducing trolling and bullying on the platform.

Encouragingly, Twitter is basing its dehumanizing language policy on academic research into the real-world effects that demeaning people through dehumanization can have. For example, speech that equates a group with an animal or object is a "hallmark of dangerous speech, because it can make violence seem acceptable."

Twitter users will have until Tuesday, October 9, at 6:00am PST to provide Twitter with feedback on the policy (you can do so here). Twitter then plans to update the rules "later this year."

We’re experimenting with a new way to write and roll-out policy and rules. Let us know what you think… https://t.co/2es2eMayGU — jack (@jack) September 25, 2018

It's difficult not to draw a connection between this new policy and the back-and-forth controversy surrounding Twitter's eventual banning of Alex Jones. At the time, many criticized the way Twitter was applying its hate speech policy to Jones; that is, it wasn't applying it at all. The new policy further defines the kinds of speech Twitter disallows, and broadens the scope of harmful speech beyond tweets that directly "@" an individual target. Harvey and Gadde indeed addressed this need for further elucidation.

"There are still Tweets many people consider to be abusive, even when they do not break our rules," the authors write. "Better addressing this gap is part of our work to serve a healthy public conversation."

Mashable has reached out to Twitter to ask whether this policy will be enforceable retroactively, or if it will only apply going forward. We happen to know a certain Twitter user who enjoys comparing women to dogs to whom this new policy might apply.