Keeping on top of negativity online is a difficult task: nearly one in five Americans have experienced severe online harassment. Google's Perspective AI aims to address the problem, but it doesn't seem to be as smart as it needs to be.

As TNW reports, a group of researchers at Aalto University and the University of Padua discovered that Google's artificial intelligence can easily be tricked, and that state-of-the-art hate speech detection models only perform well when tested on the same type of data they were trained on. Simple tricks to get around Google's AI include inserting typos, adding spaces between words, and adding unrelated words to the original sentence.
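Two of those tricks are simple enough to sketch in a few lines. The function names below are illustrative, not taken from the paper or from Perspective itself; this is just what "adding spaces between words" and "adding unrelated words" look like as text transformations.

```python
# Two simple evasion tricks described by the researchers:
# splitting a word with spaces, and appending an unrelated benign word.

def space_out(text: str, target: str) -> str:
    """Insert spaces between the characters of `target` wherever it appears."""
    return text.replace(target, " ".join(target))

def append_distractor(text: str, distractor: str = "love") -> str:
    """Append an unrelated, benign word to dilute the toxicity signal."""
    return f"{text} {distractor}"

print(space_out("you are an idiot", "idiot"))  # you are an i d i o t
print(append_distractor("you are an idiot"))   # you are an idiot love
```

The perturbed text stays perfectly readable to a human while no longer matching the token patterns a word-level model learned during training.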

Google's system detects hate speech by assigning a toxicity score to a piece of text, where a toxic comment is defined as one rude, disrespectful, or unreasonable enough to make you leave the conversation. However, the AI is not intelligent enough to judge the context of expletives: changing "I love you" to "I fucking love you" raises the score from 0.02 to 0.77.
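For context, here is a minimal sketch of the request a client sends to Perspective's Comment Analyzer endpoint to obtain that toxicity score. The payload shape follows Google's published API; the API key, HTTP call, and error handling are omitted, so nothing is actually sent here.

```python
import json

# Perspective's Comment Analyzer endpoint (a ?key=API_KEY parameter
# must be appended to authenticate the request).
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def build_analyze_request(text: str) -> str:
    """Build the JSON body asking Perspective to score TOXICITY for `text`."""
    payload = {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }
    return json.dumps(payload)

# The response places the score under attributeScores.TOXICITY.summaryScore.value,
# a probability-like number between 0.0 and 1.0.
print(build_analyze_request("I fucking love you"))
```

The researchers' point is that this score tracks surface features of the text, which is why the small edits above move it so dramatically.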

"Clearly 'toxicity,' as Perspective currently classifies it, is not assimilable to hate speech in any substantive (or legal) sense," the paper states. Similarly, typos or "leetspeak" (replacing common letters with numbers, so "GEEK" becomes "G33K," and so on), are also effective at tricking the AI while still retaining the original message's readability and emotional impact.
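The leetspeak substitution can be sketched as a simple character map. The mapping below is illustrative rather than the exact one used in the paper, but it reproduces the "GEEK" to "G33K" example.

```python
# A minimal leetspeak transform: swap common letters for look-alike digits.
LEET_MAP = {"E": "3", "A": "4", "O": "0", "I": "1"}

def to_leet(text: str) -> str:
    """Replace mapped letters (matched case-insensitively) with digits."""
    return "".join(LEET_MAP.get(ch.upper(), ch) for ch in text)

print(to_leet("GEEK"))  # G33K
```

Because the digits are visually close to the letters they replace, a human reader recovers the original word effortlessly, while a model trained on clean text sees unfamiliar tokens.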

The word "love," which does not correlate with hate speech, also "broke all word-models, and significantly hindered character models," in some instances dropping a toxicity rating from 0.79 to 0.00.

With many social platforms—such as Facebook, Twitter, and YouTube—struggling to find the boundary between offensive and acceptable speech, an easily applicable artificial intelligence would clearly have its benefits.

Recently, Twitter came under fire for disabling conservative conspiracy theorist Alex Jones' account for only a week, when other platforms had removed his accounts and that of his website completely. Twitter claimed that Jones had not violated any of the platform's rules, but the company has since suspended @realalexjones and @infowars after a Senate Committee hearing.

Unfortunately, between this research and earlier examples of artificially intelligent chatbots such as Microsoft's Tay tweeting racist content, it seems AI will need to improve before we let it loose on the comments section.
