October 28, 2019 5 min read


For most of us, online interactions slowly became an accepted norm. For children who were born into it, online interactions are the norm. While social networks, messaging services and gaming platforms all present their own set of perks, there is a dark side we cannot keep ignoring: online toxicity.

AI has managed to disrupt every industry under the sun due to its efficiency and scalability, but can it help shield children from online toxicity before it starts? According to Zohar Levkovitz, absolutely. Levkovitz, a former Entrepreneur of the Year in California, is known for building and selling Amobee to Singtel for over $350M, in addition to investing in hundreds of startups, contributing to charitable causes and starring in the Israeli edition of Shark Tank. But now he is pursuing a mission to save kids by launching L1ght, an algorithm-driven innovator designed to solve the crisis of online toxicity and how it impacts children.

I sat down with him to learn more about the issue of online toxicity and what entrepreneurs are doing to solve it, and here's what I took away from our conversation.

Related: This 19-Year-Old Aims to Stop Bullying With an Anonymous Smartphone App

1. Technology encouraged online toxicity to spread.

Online toxicity refers to behaviors including cyberbullying, sexism, shaming, harassment, hate speech and predatory conduct. Unfortunately, cases of children experiencing depression or resorting to self-harm have been on the rise in recent years due to repeated exposure to online toxicity. Before kids spent so much of their time online, toxicity wasn't as sophisticated and didn't appear in so many different forms. Today, it's easy to remain anonymous, disguise yourself as someone else or join social communities of like-minded bullies.

2. It’s no less than a worldwide epidemic.

As someone who raised his young kids in California, where teenage self-harm rates are at an all-time high, Levkovitz became aware of the many dangers children face in the online world. The more he dove into the numbers -- such as 400 victims for the average child attacker, or 74 percent of gamers personally encountering toxicity -- the more horrifying the big picture turned out to be.

He then met with cyber entrepreneur Ron Porat, who had just sold his last company and had learned that one of his own kids had been approached online by a predator. Luckily, nothing bad happened, but the two began thinking about a solution to the problem.

3. It's a problem that big companies can't, or won't, treat fully.

Game publishers, console makers and government agencies are clearly aware of these issues and working to confront them, in a variety of ways. From updated terms and conditions and strict law enforcement to a hiring spree of moderators, actions have been taken, but the fact is the problem still grows every day.

As for the major social networks and apps, solving the problem would require a shift in focus toward building sophisticated safety technology, which is simply not part of their core business or roadmap. They have been designed to scale rapidly, which doesn't go hand-in-hand with ensuring child safety and preventing millions of potential incidents. Existing approaches, such as human moderators, don't cut it. (See what happened on the campus that employs most of Facebook's content moderators.) Dictionary-based solutions don't protect our children either: the moment predators quietly adopt a new slang word, those keyword lists lose their effectiveness.

Related: These Entrepreneurs Are Taking on Bias in Artificial Intelligence

4. Hope can be found in artificial intelligence.

Perhaps the best way to solve a massive problem is to scale to its size. Levkovitz and his team researched and trained algorithms to think like kids -- and like their attackers. Using AI and deep learning, they analyze communications, whether text, video, audio or images, and predict whether a conversation is about to become toxic. They track the context of the entire conversation and adapt to changing trends, so attackers can't hide behind fake profiles or coded slang. Using this technology, they have helped remove 130,000 pedophiles from WhatsApp groups and have helped remove child abuse images from Microsoft's Bing search engine.

There is clearly much more to be done to eradicate online toxicity and keep children safe from online predators. With new social-media networks, messaging platforms and games popping up regularly, the potential risks only multiply. It remains to be seen whether entrepreneurs and new technologies can put an end to this epidemic.