
YouTube has announced that it will no longer recommend videos that "come close to" violating its community guidelines, such as conspiracy theory videos or medically inaccurate content.

On Saturday, a former engineer for Google, YouTube's parent company, hailed the move as a "historic victory."

The original blog post from YouTube, published on Jan. 25, said that videos the site recommends, usually after a user has viewed one video, would no longer lead just to similar videos and instead would "pull in recommendations from a wider set of topics."

For example, a person who watches one video showing a recipe for snickerdoodles may be bombarded with suggestions for other cookie recipe videos. Until the change, the same dynamic applied to conspiracy videos.

YouTube said in the post that the action is meant to "reduce the spread of content that comes close to — but doesn’t quite cross the line of — violating" its community policies. The examples the company cited include "promoting a phony miracle cure for a serious illness, claiming the earth is flat, or making blatantly false claims about historic events like 9/11."

The change will not affect the videos' availability. And if users are subscribed to a channel that, for instance, produces conspiracy content, or if they search for it, they will still see related recommendations, the company wrote.


Guillaume Chaslot, a former Google engineer, said that he helped to build the artificial intelligence used to curate recommended videos. In a thread of tweets posted on Saturday, he praised the change.

"It's only the beginning of a more humane technology. Technology that empowers all of us, instead of deceiving the most vulnerable," Chaslot wrote.

Chaslot described how, before the change, a user who watched conspiracy theory videos was led down a rabbit hole of similar content, which he said was the intention of the AI he helped build.

According to Chaslot, the goal of YouTube's AI was to keep users on the site as long as possible in order to serve more advertisements. When users were enticed by multiple conspiracy videos, the AI not only became biased toward the content those hyper-engaged users were watching, but also kept track of what they were engaging with in an attempt to reproduce that pattern with other users, Chaslot explained.

He pointed to a different artificial intelligence that was also shaped by the bias of its users: Microsoft's chatbot "Tay."

Tay was a Twitter chatbot produced by Microsoft, which was meant to interact with users like a human and learn from others.

Within 24 hours of its release, Tay went from innocent chatbot to full-blown misogynist and racist, according to The Verge. The AI operating Tay learned from and became biased by the engagement it received from Twitter users who were spamming the bot with those ideologies, according to CNBC.

Chaslot said that fixing YouTube's recommendation AI will require steering users toward videos with truthful information and overhauling the system it currently uses to recommend videos.

"The AI change will have a huge impact because affected channels have billions of views, overwhelmingly coming from recommendations," Chaslot said, adding that the platform's decision to make this change will affect thousands of new users.

YouTube did not immediately respond to a request for comment on Chaslot's thread.