Anyone who has played Counter-Strike: Global Offensive (CS:GO) has certainly noticed toxic players. That's why an artificial intelligence now deals with them, and it is tough.

What kind of AI is this? The artificial intelligence is called Minerva and was built by the online gaming platform FACEIT.

It has been in operation since August 2019 and is already showing strong results: more than 7 million messages have been flagged as toxic, and over 100,000 players have been warned or banned.

Minerva is tough and punishes repeat offenders harder

This is how Minerva works: during a match, the AI monitors the players' chat. If it detects a violation, it notifies the offending player immediately after the game.

First, the player receives a warning. Repeat offenders are banned and can no longer play.
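FACEIT has not published Minerva's internals, so the escalation described above (warn on a first violation, ban repeat offenders) can only be illustrated schematically. The following is a minimal sketch in which the function names, the toy word list, and the classifier are all hypothetical stand-ins, not Minerva's actual system:

```python
# Illustrative sketch only: FACEIT has not disclosed Minerva's real code.
# TOXIC_WORDS and both functions are hypothetical placeholders.

TOXIC_WORDS = {"noob", "trash"}  # toy word list, not a real toxicity model


def is_toxic(message: str) -> bool:
    """Toy stand-in for Minerva's toxicity classifier."""
    return any(word in message.lower() for word in TOXIC_WORDS)


def moderate_message(message: str, prior_offenses: int) -> str:
    """Warn on a first violation; ban repeat offenders."""
    if not is_toxic(message):
        return "ok"
    return "warn" if prior_offenses == 0 else "ban"
```

In practice, a system like Minerva would use a trained language model rather than a word list, but the warn-then-ban escalation shown here matches the behavior the article describes.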

How successful is the program? According to FACEIT, Minerva is extremely successful. So far, the program has warned 90,000 players and issued 20,000 bans. That's a fair amount, as CS:GO is considered one of the most toxic online games.

As a result, the number of toxic messages dropped by 20% between August and September, and 8% fewer unique players sending toxic messages were counted.

This chart shows FACEIT Minerva’s success.

This is what FACEIT says: The developers praise Minerva, and since it is only just getting started, it could expand and improve further.

In their blog, they say: "In the coming weeks, we will announce new systems that will help Minerva in its education."

So artificial intelligence remains exciting, unless it goes rogue like that Microsoft AI bot 🙂