AI, Cyberbullying, and Social Media

How social media companies are using AI to help combat cyberbullying. How we can all help.

Photo by @isaiahrustad on unsplash.com

In 2017, a 12-year-old New Jersey girl took her own life after months of cyberbullying by her classmates at Copeland Middle School in Rockaway. She’s not the only one. According to bullyingstatistics.org, over half of adolescents and teens have been bullied online, and about the same number have engaged in cyberbullying. For teenagers who are bullied daily at school or online, the repercussions can be devastating. Teens are highly impressionable while on their quest to form an identity, and enduring bullying day after day can feel like relentless psychological torture. It can lead to anxiety, depression, and even suicide.

Cyberbullying is a problem that’s affecting the entire internet.

According to the Pew Research Center, 75% of adults in the US have seen cyberbullying occurring around them, and 40% have personally experienced some form of online harassment.

The Larger Societal Problems of Cyberbullying

Cyberbullying is not just a virus that infects people’s experiences online; in many cases, it leads to real-life stalking and harassment. According to a Florida Atlantic University study of high school students, 83% of those who have been cyberbullied have also been bullied in person. Schools often refrain from reporting the incidents to their districts, and barring physical harm, the police are often powerless to arrest and prosecute the bully.

The bully often suffers as much as the victim.

The victims are prone to mental illnesses such as anxiety and depression, but the bullies are also prone to antisocial behaviors that can develop into personality disorders. When bullying incident after bullying incident brings no negative repercussions, the bully eventually feels “entitled” to continue, and it is this kind of entitlement that leads to more elaborate antisocial behaviors both online and in real life. Mental illnesses related to antisocial behavior include Narcissistic Personality Disorder and Borderline Personality Disorder. Seen in this light, the rise of mental illness in our society in recent years is not surprising.

Cyberbullying also prevents people from trusting the internet and technology as a whole.

The internet has greatly benefited our society, and technological advances have let us work more productively for decades. Online trolls degrade that experience: they take away our ability to do our work productively online, they cast technology as an ominous tool in the eyes of the next generation of users, and they prevent people from integrating technology into their lives in a positive manner.

Cyberbullying is difficult to prevent and stop

Cyberbullying is different from real-life bullying, and several of its characteristics make incidents difficult to prevent and stop:

Cyberbullying is difficult to spot: the bully often hides behind a facade of humor to subtly wear down the victim.

Blocking people just filters your own reality: in social media groups, it may not be possible to block the person who is bullying you. A student, for instance, may not be able to block a classmate in an educational forum dedicated to serving everyone in the class.

The victims of cyberbullying are also difficult to spot: victims often harbor doubts about the bullying incident. With the bully hiding behind the disguise of humor and a fake name, they frequently feel that they are the ones to blame, or that they are making something out of nothing.

Cyberbullying presents a unique problem for Artificial Intelligence to solve

The difficulty of recognizing cyberbullying online presents a unique challenge to social media companies trying to protect their communities. This kind of needle-in-a-haystack problem is often ideal for machine learning: AI can identify language nuances and classify speech efficiently across quantities of data far too large for humans to review. Algorithms can also adapt, improving their accuracy at identifying cyberbullies as they learn more about a bully’s online activities.
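To make the idea concrete, here is a minimal sketch of the kind of text classifier such systems build on: a Naive Bayes model trained on labeled examples. The tiny dataset, labels, and function names below are invented for illustration; production systems train far richer models on millions of human-labeled posts.

```python
from collections import Counter
import math

def tokenize(text):
    return text.lower().split()

def train(examples):
    """Count word frequencies per label from (text, label) pairs."""
    word_counts = {"toxic": Counter(), "ok": Counter()}
    label_counts = Counter()
    for text, label in examples:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def classify(text, word_counts, label_counts):
    """Naive Bayes with add-one smoothing: pick the more likely label."""
    vocab = set(word_counts["toxic"]) | set(word_counts["ok"])
    total = sum(label_counts.values())
    best_label, best_score = None, float("-inf")
    for label in label_counts:
        # Log prior for the label, plus log likelihood of each word.
        score = math.log(label_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in tokenize(text):
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical labeled examples -- stand-ins for real moderation data.
data = [
    ("you are so stupid and ugly", "toxic"),
    ("nobody likes you just quit", "toxic"),
    ("go away loser", "toxic"),
    ("great game today well played", "ok"),
    ("thanks for the help friend", "ok"),
    ("see you at practice tomorrow", "ok"),
]
wc, lc = train(data)
print(classify("you are such a loser", wc, lc))  # expect "toxic"
print(classify("well played thanks", wc, lc))    # expect "ok"
```

Even this toy version shows why scale matters: the classifier gets better only as humans feed it more labeled examples, which is exactly the human-plus-AI loop the platforms below rely on.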

In 2016, Identity Guard partnered with the Megan Meier Foundation to use IBM Watson technologies for natural language processing (NLP) and natural language classification (NLC) to identify instances of cyberbullying or self-harm. When an instance is recognized, parents receive an alert and can take action to protect their children’s experiences online.

In June 2016, Facebook introduced DeepText as “a deep learning-based text understanding engine that can understand with near-human accuracy the textual content of several thousand posts per second.” Soon after, Instagram started using DeepText to eliminate spam on its platform, and more recently it has turned the technology against online trolls and harassment. Facebook also has a content-moderation team that pairs human reviewers with AI technologies to identify harassment and to monitor and remove fake accounts.

Twitter also uses AI technology to spot spam and recognize negative interactions, helping users curate their own experiences by surfacing the best, most positive content in their feeds. If you block someone on Twitter, chances are that similar accounts from the same person will stop showing up in your feed.

Google and Jigsaw developed the AI moderation tool Perspective, which scores comments based on their similarity to other comments categorized as “toxic” by human reviewers. YouTube uses this tool to combat online harassment on its platform.
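Conceptually, a Perspective-style score asks how much a new comment resembles comments humans have already labeled toxic. The sketch below fakes this with raw bag-of-words cosine similarity against a made-up two-comment corpus; the real Perspective model is a trained neural classifier, not a similarity lookup, and every name and string here is illustrative.

```python
from collections import Counter
import math

def bag(text):
    """Bag-of-words representation of a comment."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical comments labeled toxic by human reviewers.
toxic_corpus = [
    "you are an idiot and everyone hates you",
    "shut up loser nobody asked",
]

def toxicity_score(comment):
    """Score = highest similarity to any known toxic comment (0.0 to 1.0)."""
    b = bag(comment)
    return max(cosine(b, bag(t)) for t in toxic_corpus)

print(round(toxicity_score("you are an idiot"), 2))   # high: overlaps a toxic comment
print(round(toxicity_score("have a nice day"), 2))    # 0.0: no overlap
```

A platform could then hold comments above some threshold for human review, which keeps reviewers in the loop on the ambiguous middle band of scores.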

Artificial Intelligence and Machine Learning are not perfect

AI and machine learning seem to work best at identifying online trolls and harassment when used in conjunction with human intervention. Whether it is humans supplying labeled training data or content reviewers validating the accuracy of the results, human input helps an AI system produce more accurate outcomes. On Instagram, where image classification must be combined with text classification, identifying online harassment is more complex still, and neutral words used with hostile contextual meanings mean that certain kinds of text will escape an AI system altogether.

People have to step up to make AI systems more effective and to help protect our online communities.

Our technologies are only as effective as the people who use them. With Generation Z even more tech-savvy than their parents, we are spending ever more time in our online communities, and we have to be more explicit about our online conduct. In real life, we have codes of conduct reinforced by the communities we live in; in our cyber lives, we need the same kind of code of conduct enforced by members of our online communities. Hiding behind our screens, feeling “entitled” to say and act however we please, only leads to anarchy in our online forums. Instead, we should be held to the same standards of accountability as we are in real life.

Actions we can all take to protect our online communities: