The job of watching and removing violent or pornographic content from these live apps, as well as video sites like YouTube, has primarily been a human undertaking. Workers tasked with content moderation review hours of video flagged as inappropriate by users, taking down anything that violates guidelines. It’s a grueling and at times horrific job, and the sheer amount of content makes it a challenge for manpower alone. Now, artificial intelligence is poised to help with this task.

Several companies, including Twitter and Facebook, are developing software that can intelligently watch video for use on their live-stream services, Periscope and Facebook Live Video.

Companies such as Clarifai and Dextro have made huge gains in developing this kind of sophisticated software as well. Dextro, a New York-based start-up, currently uses video-recognition AI to search through content on live-stream apps. Although Dextro does not monitor for inappropriate content right now, instead scanning for video that might be interesting and relevant to a company's brand, the same technology or similar software could easily be used to weed out porn or violence.

Co-founder David Luan said the challenge lies in creating software that can interpret not just still images but moving images, audio and other “signifiers” that demonstrate what is happening in the video. “It’s like trying to recreate a human’s experience of watching these videos,” Luan said.

Companies traditionally relied on tags to indicate the nature of a video’s content to a computer system, but Luan said reducing the meaning of a video to a couple of key words does not accurately capture its full scope. “What is challenging about these videos is that they are much more complex than just a single picture. Even though it is a series of images all one after the other, there’s the motion element, the audio, so much of that gets thrown away if you just analyze image after image,” he said. “So you really need to treat it like a whole piece and analyze that,” which, he said, relying on tags cannot achieve.

Dextro’s software, on the other hand, can recognize objects and signifiers in a frame, such as a gun in a potentially violent video, without human intervention. And the speed of AI’s recognition gives it a huge leg up on human monitors: Luan’s software can analyze a video within 300 milliseconds of posting.

Cortex, Twitter’s division focused on Periscope-monitoring AI, has been working on software that can watch and recommend live video since its launch in July 2015. “Periscope has been working with Twitter’s Cortex team to experiment with ways to categorize and identify content in live broadcasts,” a Twitter spokesman said in a statement. “The team is focused on pairing that advanced technology with an editorial approach to provide a seamless discovery experience on Periscope.” Cortex could not confirm when they would be rolling out their product.

Facebook confirmed that the company does not currently use AI to filter out pornographic or violent videos, and declined to comment on whether AI software was being developed for the future.

Human content moderation has traditionally been outsourced to countries such as the Philippines. Even in a future where AI does most of the work, Luan sees a role for human intervention. “Humans help to retrain the algorithm and help it get better over time,” he said of his company’s video-watching AI.

But in the aftermath of widely viewed videos and live-streams of the deaths of Philando Castile and Alton Sterling at the hands of the police, the question of when censorship is ethical or appropriate poses a challenge for tech developers in the role of content moderator. How would a machine handle those videos, if it eventually takes over as the prime moderator?