tech2 News Staff

Conversations on the internet are known for their extreme toxicity. Seventy-two percent of Americans have witnessed online harassment, and more than half have personally experienced it, according to a survey conducted by the Data & Society Research Institute. One third of users self-censor their comments for fear of retribution. In the US alone, 140 million users have been affected by hostile online conversations, and the number worldwide is far higher.

It is not just readers who are affected; news organisations and content publishers that want to create engaging online platforms and areas for discussion are often unable to do so. There are frequently too many comments to moderate individually, so when humans handle the moderation, only a fraction of the comments ever appear. Some publishers have disabled comments altogether to sidestep the problems created by unproductive online discussions.

Google and Jigsaw are trying to address the problem with Perspective, a machine learning tool still under development. Perspective uses machine learning models to estimate the probability that a comment is toxic by comparing it with hundreds of thousands of human-labelled comments. Publishers can use the API in a number of ways on their platforms: comments can be flagged as toxic and held for human moderation, users can be notified of the toxic nature of their comments as they write them, or readers can sort all the comments on an article by toxicity to discover high-quality contributions that may be buried under mountains of malignant text.
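In practice, a publisher's backend sends each comment to Perspective and receives back a toxicity probability it can act on. The sketch below shows roughly what that exchange looks like: the request payload and response shape follow Google's published `comments:analyze` endpoint, but the helper names, the sample score, and the sorting step are illustrative assumptions, not part of any official client library.

```python
# Hypothetical sketch of using the Perspective API from a publisher's
# backend. The endpoint path and JSON shapes match Google's documented
# comments:analyze method; helper names and the sample score are made up.
API_URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
           "comments:analyze?key=YOUR_API_KEY")

def build_request(comment_text):
    """Build the JSON payload asking Perspective to score TOXICITY."""
    return {
        "comment": {"text": comment_text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }

def toxicity_score(response):
    """Extract the summary probability (0.0 to 1.0) from an API response."""
    return response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def sort_by_quality(comments, scores):
    """Order comments from least to most toxic, surfacing quality ones."""
    return [c for c, _ in sorted(zip(comments, scores), key=lambda p: p[1])]

# A response shaped like the API's, with an invented score for illustration.
sample_response = {
    "attributeScores": {
        "TOXICITY": {"summaryScore": {"value": 0.92, "type": "PROBABILITY"}}
    }
}
print(toxicity_score(sample_response))  # 0.92
```

A publisher could flag any comment whose score crosses a chosen threshold for human review, or feed the scores into `sort_by_quality` to rank a comment thread.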

Perspective has already been tested by Google with an early partner, the New York Times, where a dedicated staff moderates over 11,000 comments every day, and only around ten percent of published articles have comments enabled. Google has worked with that staff to help them sort through the thousands of comments more quickly, and plans to keep taking steps that let human moderators police a growing number of daily comments. Interested publishers and developers can request access to the API.

Perspective is still in its early stages and stands to improve the more it is used. For now, Perspective only identifies whether a comment is likely to be toxic, but Google plans to add other capabilities in the future, allowing Perspective to spot off-topic or unsubstantive comments. Google is also working on models for languages other than English.