Facebook is still hosting video footage of the New Zealand mosque shootings six months after the tragedy despite the social network’s repeated pledges to clamp down on terrorist content.

NBC News has seen more than a dozen videos on Facebook that were taken from the livestream recorded by the accused gunman, Brenton Tarrant, 28, and broadcast on the social network as the attack was carried out at two mosques in Christchurch, New Zealand, on March 15. They show a first-person view of Tarrant using an assault weapon to mow down dozens of worshippers. The attacks left 51 people dead.

Many of the clips, which include edited sections and screen recordings of the original footage, have been on the platform since the week of the incident. Some of the videos have been automatically covered by Facebook with a warning saying they feature “violent or graphic content,” but they have not been deleted.

“It’s literally the same footage,” said Eric Feinberg, an internet security researcher and founder of Gipec, a cyberintelligence company that discovered the videos. “The guy walks in with the music, the gun, the angle of the gun, the same shots. What are they doing with their technology?”

Feinberg said his software identified between 300 and 400 versions of the video on Facebook and Instagram over the last six months.

“If with all of their great AI tools they can’t take down content like this that is consistent, what makes you think they can take down new violent content?” Feinberg said.

“When is Sheryl going to lean into this?” he said, referring to Sheryl Sandberg, Facebook’s chief operating officer.

Facebook announced in a blog post Tuesday that it would work with law enforcement to train its artificial intelligence systems to recognize videos of violent events as part of a wider crackdown on extremist content.

Facebook’s systems failed to detect the livestreamed video of the shootings. The company said that the incident “strongly influenced” its updates to its policies and enforcement.

“The attack demonstrated the misuse of technology to spread radical expressions of hate, and highlighted where we needed to improve detection and enforcement against violent extremist content,” Facebook said.

Platforms increasingly face a battle with bad actors organizing on forums such as 8chan to circumvent their detection systems. Viewers of the mosque shootings recorded, repackaged and reposted the original video in a range of formats, creating a gruesome game of whack-a-mole. In the first 24 hours after the attack, Facebook blocked or removed 1.5 million versions of the video from the platform.

In a statement to NBC News, a Facebook spokeswoman said that the company has a database of more than 900 “visually unique versions of this video” that it can automatically detect and remove.

“When we identify isolated instances of differently edited versions of the video, we take it down and add it to our database to prevent future uploads of the same version being shared,” she said.

After NBC News provided links to two of the new videos discovered on the platform, Facebook took them down and added them to its database to ensure that future instances couldn’t be uploaded.

In a separate announcement posted Tuesday, the company outlined how it would develop an independent oversight board of 40 members to oversee the platform’s major content decisions.

Facebook announced the changes a day before Senate lawmakers questioned the company, along with executives from Google and Twitter, about their efforts to curb the proliferation of extremism online.

“I would suggest even more needs to be done, and it needs to be better, and you have the resources and technological capability to do more and better,” Sen. Richard Blumenthal, D-Conn., said at the Commerce Committee hearing on Wednesday, according to The Washington Post.

The announcement precedes an event in New York on Monday, where tech leaders will meet with government representatives to discuss the “Christchurch Call,” a commitment by governments and tech companies to eliminate violent extremist content online. It was initiated by New Zealand Prime Minister Jacinda Ardern and French President Emmanuel Macron in the wake of the terror attack and has attracted the support of technology companies including Amazon, Facebook, Google, Twitter and Microsoft.

The meeting, which takes place while Ardern is in New York for the United Nations General Assembly, aims to build momentum behind the effort and outline specific areas of focus, including improving automated take-down processes and collaboration between the tech firms.