Facebook is developing an automated flagging system to identify offensive content in Facebook Live streams. The social media giant already keeps a close watch on security and content, and has always been careful to keep a user's "News Feed" free of inappropriate material.

Such scrutiny is relatively easy to carry out once content has been posted. But with its feature add-ons, Facebook now offers live streaming to its users. So how does Facebook handle quality control of a broadcast as it happens? What happens if something goes wrong mid-stream?

Until now, Facebook has relied on a user-based system: once a user reported offensive material, Facebook's own employees checked it against the site's "community standards". However, at a recent discussion held at its Menlo Park headquarters, Facebook revealed that it is testing an artificial intelligence system that can find and flag inappropriate content on its own.

The new flagging protocol is "an algorithm that detects nudity, violence, or any of the things that are not according to our policies," said Joaquin Candela (Director of applied machine learning), according to Reuters.

The algorithm was initially developed in June to screen videos posted to the site for violent or extremist content; it will now also be applied to Facebook Live broadcasts. The goal is to keep such content off Facebook. The AI system will only raise alerts about offending streams; it won't be the sole judge of a stream's appropriateness.

According to Candela, the AI system is still being honed, and it will likely act as an alert rather than a one-stop judge, jury, and executioner of explicit streams. "You need to prioritize things in the right way so that a human [who] looks at it, an expert who understands our policies, [would also take] it down," Candela said.
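The workflow Candela describes, where an algorithm raises alerts and prioritizes them for human reviewers, can be sketched roughly as follows. This is a minimal illustrative sketch, not Facebook's actual system: the score function, threshold, and queue are all hypothetical assumptions.

```python
# Hypothetical sketch of the flag-then-review workflow described above.
# The classifier, threshold, and queue names are illustrative assumptions.
import heapq

ALERT_THRESHOLD = 0.8  # assumed cutoff for raising an alert

review_queue = []  # priority queue: most severe streams surface first


def classifier_score(stream_frames):
    """Stand-in for a real model; returns a severity score in [0, 1]."""
    # A production system would run a trained vision model here.
    return stream_frames.get("mock_score", 0.0)


def flag_stream(stream_id, stream_frames):
    """AI side: raise an alert if the score crosses the threshold."""
    score = classifier_score(stream_frames)
    if score >= ALERT_THRESHOLD:
        # Negate the score so the highest-severity stream pops first.
        heapq.heappush(review_queue, (-score, stream_id))
        return True
    return False


def next_for_human_review():
    """Human side: a policy expert pulls the most urgent flagged stream."""
    if review_queue:
        _, stream_id = heapq.heappop(review_queue)
        return stream_id
    return None
```

Note that the final decision still sits with `next_for_human_review`: the algorithm only sorts the queue, matching Candela's point that a human expert makes the takedown call.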

What Exactly Is Appropriate?

There are still questions about where to draw the line. What is or is not appropriate? How can a computer-based system decide what is right or wrong? Facebook recently drew flak online for removing an iconic Vietnam War photograph, and that decision was made by a human employee.

Difficult but important broadcasts that might otherwise be flagged, such as streams of violent encounters with law enforcement or the aftermath of a shooting, must be treated with careful consideration.

Can a machine adequately decide whether a stream is offensive? Could removing the human element have serious implications? The good news is that the AI system for Facebook Live is only an alerting system; the final call rests with humans. But are humans foolproof? Aren't they susceptible to subjective morals, too? For now, the AI system is still in the testing phase, and active deployment is some way off.

Do you think Facebook Live stream moderation will be successful? Keep watching this space for more updates.