Facebook, Twitter, and other social media platforms have faced criticism in the days following the attack on two mosques in New Zealand for failing to stop the rapid spread of footage of the attack. The shooter live-streamed himself on Facebook killing worshippers during Friday prayers, and copies of the video continue to reappear online.

Sputnik has discussed whether social media platforms should play a bigger role in combating and detecting hate speech with Laura Bliss, an expert on social media law.

Sputnik: Why do you think Twitter failed to respond promptly to the spread of the extremist video? Is this a technical problem or a management oversight problem, in your opinion?

Laura Bliss: I think it's a bit of both, really. With how technology is changing now, we're all struggling to keep up, and I think even social media companies are struggling; they can't keep pace with the content that is online. One of the major problems is, obviously, technology. We've got AI technology, and it's good in some respects, but it's not very good when it comes to speech, and it can't necessarily pick up hateful or extremist content quickly.

So that's where we fall back on moderation, but social media companies have millions, sometimes billions of users, while their moderators number only in the thousands. Facebook, for example, has 2.27 billion monthly users, yet at the end of last year it anticipated having only 20,000 moderators. How can you keep up with extremist behaviour, or any type of behaviour online, if you just don't have the technology and the people to help you with it?

READ MORE: Christchurch Attacker May Have Had Support — New Zealand Police

Sputnik: The issue of Twitter ignoring the accounts of those who spread extremist or hate speech ideology has been around for a number of years. Why in your opinion has nothing effectively changed since the social network was first accused of negligence when it comes to these issues?

Laura Bliss: The main issue at the moment is the lack of legal involvement. We all have a right to free speech, and free speech is important to maintaining a democracy. I'm an advocate of free speech, but I think we forget that absolute free speech for some means limiting other people's free speech. What we're doing now seems, in my opinion, to tilt a little too far in the direction of free speech, which means that extremist behaviour online is allowed to persist.

That's partly because we're not able to reliably distinguish between extremist behaviour and political ideology. The way forward is essentially going to be some form of legal enforcement across the globe, forcing social media companies to act more appropriately in removing such content. We've seen with New Zealand the devastating effects that social media can have: videos of the attack being shared across platforms more widely than coverage of the event itself, which is devastating for everyone involved, for the families. And yet for social media companies there will be very few repercussions for what happened in recent days.

Sputnik: Do you think this is also an issue of poor law enforcement in the United States?

Laura Bliss: I think it's poor law enforcement across the globe. Obviously, these companies are all based in the United States, but I think this is now more of a global issue than a domestic one. We don't want to reduce free speech, but there needs to be some clarity about how we tackle what's going on on social media globally. It's a case of states sitting down together and saying: "Right, what do we need to do in order to contain what's being spread online?" Crime used to be very much a domestic issue; now it's a global one, because someone can sit in one country and submit content in another. It's about everyone working together to try and resolve the issues that are still ongoing.

Sputnik: To what extent is Twitter following the media hype when it comes to banning prominent right-wing figures, while allowing extremist ideology to be shared regardless of the violent content of videos and messages?

Laura Bliss: Yes, I think in some cases it's hard to distinguish between extremist content and political ideology, and I think that's where we're starting to fall short. It's about research, trying to establish different ways to overcome the issue we're having. I think there is some media hype in it, but if the public gets behind certain content, social media companies are more likely to remove it.

READ MORE: New Zealand Company Under Fire Over 'Symbolic' Link to Mosque Shooter — Report

We've seen that before with hate speech, for example: when it's highlighted in the media, a social media company is more likely to remove it. But one of the issues is always going to be language. Language changes over time, and it's hard to keep pace with which terms can be associated with different ideologies. Again, we've got AI technology, but it's been estimated that it will be another five to ten years before AI will be able to keep pace with the language used online.

Sputnik: To what extent should the social networks be held responsible for spreading extremist materials and how can this be stopped?

Laura Bliss: They should, without a doubt, start to be held responsible for the content that goes on their sites. At the moment they're treated simply as hosts rather than publishers, which gives them a degree of protection. But they need to be doing more. I think we've seen over recent years how much dominance these companies have within society.

The way forward would be legal enforcement. One approach I think would work well is a universal code of conduct for social media companies, agreed by everyone; that doesn't mean they can't have their own terms of service agreements. We can sit down and agree the fundamental principles social media companies need to uphold, and then, if those codes are broken, ensure fines are put in place; and if something illegal is found to have happened, make sure we pursue it.

The other way forward may be going down the civil route, imposing a duty of care so that, where there is a clear case of negligence, we are able to hold those social media companies to account. This behaviour is still ongoing on their sites; they are the owners of those sites and they make a lot of money yearly, so they need to take more responsibility for what actually goes on, rather than falling back on the argument that they are merely hosts, not publishers.

The views and opinions expressed by the speaker do not necessarily reflect those of Sputnik.