Facebook said on Wednesday night that its artificial intelligence systems failed to automatically detect the New Zealand mosque shooting video.

A senior executive at the social media giant responded in a blog post to criticism that the company didn’t act quickly enough to take down the gunman’s livestreamed video of his attack in Christchurch, which left 50 people dead and spread rapidly online.

Facebook’s vice president of integrity, Guy Rosen, said “this particular video did not trigger our automatic detection systems.”

"AI has made massive progress over the years and in many areas, which has enabled us to proactively detect the vast majority of the content we remove," Rosen said. "But it’s not perfect."

One reason is that artificial intelligence systems are trained on large volumes of similar content, but in this case there was not enough such material because attacks like this are rare.

Rosen said another challenge is getting artificial intelligence to tell the difference between this video and “visually similar, innocuous content,” such as live-streamed video games.