The spread of a video across the internet that was apparently recorded by a shooter who killed 50 people at two mosques in Christchurch, New Zealand, has reignited a debate around how tech companies moderate their platforms — and whether they've done enough to crack down on the spread of white supremacists online.

Critics of the companies, led by U.K. politicians, say that Facebook and YouTube have not done enough to address white supremacist groups on their platforms, pointing to a successful effort to control Islamic extremist content on the sites as proof that the problem is well within the power of the companies.

Those calls have been countered by warnings from some in the tech industry who say that pushing tech companies to further regulate extremism will not fix the deeper problems of online radicalization.

Digital copies of the video, which was captured from a Facebook livestream, were repeatedly uploaded to Facebook, Twitter and YouTube in the hours after the attack, though most versions were removed by Friday afternoon in the U.S. The shooter's manifesto referred to a variety of internet-based influences, leaving behind a pattern familiar to researchers who study online radicalization.

New Zealand Prime Minister Jacinda Ardern said Tuesday that she would look at the role social media companies played in the attack.

“We cannot simply sit back and accept that these platforms just exist and what is said is not the responsibility of the place where they are published,” she said during a speech before the New Zealand Parliament. “They are the publisher, not just the postman. There cannot be a case of all profit, no responsibility.”

Later on Tuesday, U.S. House Homeland Security Committee Chairman Bennie Thompson, D-Miss., asked leaders of the major tech companies to participate in a briefing to discuss the spread of the video.

"Your companies must prioritize responding to these toxic and violent ideologies with resources and attention," Thompson wrote in a letter. "If you are unwilling to do so, Congress must consider policies to ensure that terrorist content is not distributed on your platforms—including by studying the examples being set by other countries."

Facebook announced Saturday that it had blocked 1.2 million uploads of the video and removed another 300,000 copies that made it onto the platform. The numbers highlight the scale of the challenge Facebook and other internet platforms face after spending years promoting themselves as destinations for a wide range of video while taking a light-touch approach to moderation.

In the first 24 hours we removed 1.5 million videos of the attack globally, of which over 1.2 million were blocked at upload... — Facebook Newsroom (@fbnewsroom) March 17, 2019

YouTube also worked to remove the many copies uploaded to its platform, taking a variety of steps to block the video. It automatically rejected any footage that included acts of violence from the shootings, the company said in an emailed statement.

"Since Friday’s horrific tragedy, we’ve removed tens of thousands of videos and terminated hundreds of accounts created to promote or glorify the shooter," the company said in the statement. "The volume of related videos uploaded to YouTube in the 24 hours after the attack was unprecedented both in scale and speed, at times as fast as a new upload every second."

YouTube also suspended some search functions, such as the ability to sort videos by most recent upload.

The debate on social media, which began over whether the platforms had taken adequate steps to stop the spread of the video, turned over the weekend to whether the companies were doing enough to address the rise of extremist content targeting Muslims, Jews and other religious and ethnic groups.

Much of the criticism has come from politicians in the U.K., who have taken the lead on calling for regulation of tech companies.

Tom Watson, a member of British Parliament and deputy leader of the Labour Party, wrote in a blog post that the shootings "crystallised the case for social media regulation."

"It is clear that social media platforms, by aiding and abetting the publicity of the crime, have become part of the terrorists' armoury," Watson wrote.

Sajid Javid, Britain's interior minister, said in a tweet that YouTube, Google, Facebook and Twitter need to "take some ownership" of the extremism promoted on their platforms.

"Enough is enough," Javid wrote.

Some pointed to the companies' successful efforts to stop the spread of content from Islamic extremist groups, including ISIS, which used the internet to spread its ideology and recruit followers.

"Google, Twitter and Facebook *already* have the capacity to stop the spread of extremist content online," Samuel Sinyangwe, co-founder of the police reform organization Campaign Zero, said in a tweet. "They’ve successfully removed ISIS content on their platforms for *years.* They just made a choice to not use every tool available to them to stop white supremacist terrorism."

AirAsia CEO Tony Fernandes announced that he was quitting Facebook, tweeting that the company "could have done more to stop some of this" and should "not just think of financials."

Shares of Facebook stock, which have shown a growing sensitivity to public controversies, declined 3.3 percent on Monday. The New Zealand Herald reported on Monday that some of the country's biggest brands were planning to pull their ads from Facebook and Google in an effort to pressure the companies to act.

Matt Rivitz, founder of Sleeping Giants, a social media campaign that pressures companies to distance themselves from hate speech, said that tech companies have an incentive to let extremist content remain on their platforms as long as it's profitable for them.

He noted that public outcry led to tech companies taking action against ISIS.

"I think the platforms have obviously done an adequate job dealing with foreign terrorists and ISIS, and I think they might have a blind spot for white supremacists," Rivitz said. "Ultimately they have to have an even enforcement of their rules across their platform and they're just not doing it."

Facebook has taken some small steps to limit the ability of white extremist groups to use its platform to reach new people. In 2017, the social network removed several ad categories that provided targeting around terms like "Jew hater."

Also in 2017, YouTube removed ads from extremist content that included white supremacist videos after major brands pulled their marketing from the platform. The company has also said it has invested in new technology and human moderators to flag and remove hate speech.

Those efforts have been overshadowed by growing concern that the algorithms that determine what people are likely to see have become tilted toward promoting extremist content.

"Hate speech and content that promotes violence have no place on YouTube," a YouTube spokesperson said in an emailed statement. "Over the last few years we have heavily invested in human review teams and smart technology that helps us quickly detect, review, and remove this type of content."

Facebook did not respond to a request for comment.

Major tech companies have become influential in determining online norms, but white supremacist content has also coalesced in lesser-known corners of the internet, including fringe message boards and services that have attracted extremists as hate speech has become less tolerated on mainstream platforms.

The tech companies have also been under pressure from conservatives over a growing perception on the right that tech platforms discriminate against them.

Those who run the Internet machines look at the humans and yell 'fix yourselves and your evil ways!' And the humans look at those who run the machines and yell back 'fix your machines and *their* evil ways!' https://t.co/6NBZ4c8QTg — Antonio García Martínez (@antoniogm) March 17, 2019

Those who disagree with the critics argue that online radicalization is a societal problem too large for the tech companies to fix on their own.

Alex Stamos, a former chief security officer at Facebook who is now an adjunct professor at Stanford University, said advanced artificial intelligence has room to improve but that the issue is bigger than the tech giants.

"This is an overall internet problem," he said.

Stamos noted that politicians who are calling for tech giants to take action have also criticized the companies for having too much control.

"I think there is also the issue of figuring out how we want to decide what information is completely banned from the responsible social networks," Stamos said. "Western politicians who, a week ago, were bemoaning the power of some of these companies are now unhappy that they don't have the same level of control as their Chinese competitors."

Carl Miller, research director of the Centre for the Analysis of Social Media at Demos, a think tank, tweeted that it is easy to criticize the companies but far harder to come up with effective legislation.

"Really easy: politicians to kick around the tech giants and say they need to do more," Miller wrote. "Far harder, far more important: reasonable law that balances the rights to freedom of speech with the harm it can cause, and levy achievable expectations on tech giant enforcement of it."