Facebook will begin removing from its platform false information that is intended to incite violence and other physical harm.

“Reducing the distribution of misinformation—rather than removing it outright—strikes the right balance between free expression and a safe and authentic community," a Facebook spokesperson said in a statement to CNBC. "There are certain forms of misinformation that have contributed to physical harm, and we are making a policy change which will enable us to take that type of content down. We will begin implementing the policy during the coming months.”

Under the new policy, flagged text and image posts that were created or shared with the intent of imminently "contributing to or exacerbating violence or physical harm" will be removed.

Facebook will work with as-yet-unnamed local and international organizations, as well as its own internal image-recognition technology, to help identify this type of offensive content. These parties will have to confirm that the information is false, and other groups may be asked to weigh in. Although the policy change has not yet taken effect, the company has already applied these principles to remove posts in Sri Lanka alleging that Muslims were poisoning food given or sold to Buddhists.

At the same time, Facebook CEO Mark Zuckerberg has said the company will not remove false items that it does not believe contribute to violence or physical harm. Instead, it will de-emphasize the prominence of these items in users' News Feeds.

In an interview with Recode, Zuckerberg reiterated that the company shouldn't be "in the business of having people at Facebook who are deciding what is true and what isn’t." But he clarified that in certain cases, where "divisive information" was maliciously spread, the company had a responsibility to step in.

"There are really two core principles at play here," he said. "There’s giving people a voice, so that people can express their opinions. Then, there’s keeping the community safe, which I think is really important. We’re not gonna let people plan violence or attack each other or do bad things. Within this, those principles have real trade-offs and real tug on each other. In this case, we feel like our responsibility is to prevent hoaxes from going viral and being widely distributed."

Zuckerberg also used an example of Holocaust deniers as the type of content that Facebook would not ban. Later, he clarified to Recode, "I personally find Holocaust denial deeply offensive, and I absolutely didn’t intend to defend the intent of people who deny that. Our goal with fake news is not to prevent anyone from saying something untrue — but to stop fake news and misinformation spreading across our services."

Zuckerberg and the company have previously said they want to stay agnostic on what items constitute news. However, Facebook's team has been accused of bias in the past, leading it to create teams to prevent personal opinions from swaying its artificial intelligence systems.

- Additional reporting by Julia Boorstin