By: Natalia Velez

June 15, 2015 —

Many people get their news from social media, which is one of the main reasons why the spread of misinformation within these channels is a risk. (Illustration by Michelle Henry)

The growing popularity of social media raises all sorts of questions about online security. According to a recent Twitter SEC filing, approximately 8.5 percent of all users on Twitter are bots: fake accounts used to produce automated posts. While some of these accounts serve commercial purposes, others are influence bots used to shape opinion on a given topic.

Concerned by the potential misuse of fake social media accounts, DARPA’s Social Media in Strategic Communication (SMISC) program held a four-week challenge this February, in which several teams competed to identify a set of influence bots on Twitter.

A USC team composed of faculty and graduate students received first place for accuracy and second place for timing. Aram Galstyan, a research associate professor at the USC Viterbi Department of Computer Science and project leader at the USC Information Sciences Institute (ISI), led the victorious Trojans.

“Spamming behavior has evolved,” Galstyan said. “Current bots tend to be more human-like, and people have realized that they can be used for propagating certain kinds of information, possibly influencing discussions on specific topics.”

Bots represent a threat to society, according to experts. In our increasingly digital age, more and more people get their news from social media, which is one of the main reasons why the spread of misinformation within these channels is a risk.

USC Viterbi Professor Aram Galstyan (Photo Credit: Will Taylor)

Bots can also be used for political purposes, and some organizations have taken advantage of this. For instance, it is well known that terrorist groups such as ISIS have used online social media to reach younger audiences and persuade them to join their cause. Examples of this persuasive behavior include the use of hashtags to focus group messaging and the creation of multiple accounts that push a high volume of tweets, pictures and links into people’s feeds.

Back in the Cold War, magazines were the main vehicles for propaganda. Today, the propaganda war takes place online.

“People normally trust online content,” said Farshad Kooti, one of the Ph.D. candidates at USC Viterbi who worked with Galstyan. “Unfortunately, this introduces an opportunity to spread misinformation by using automated bots that are very hard to detect.”

For the DARPA competition, Galstyan’s team created a bot detection method that he said has proven 100 percent accurate. The overall process can be divided into three steps: initial bot detection, clustering and classification.

During initial bot detection, the team uses linguistic, behavioral and inconsistency cues to uncover a first set of bots. These cues include unusual grammar, the number of tweets posted and stock images used as profile photos.
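The cue-counting idea behind this first step can be sketched as follows. The specific features, thresholds and scoring rule here are illustrative assumptions, not the team's actual detector:

```python
# Sketch of cue-based seed detection: score each account on simple
# heuristic signals (posting volume, unusual grammar, stock profile
# photos) and flag accounts that trip several cues at once.
# All thresholds and field names below are assumed for illustration.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    tweets_per_day: float       # posting volume
    grammar_error_rate: float   # fraction of tweets with unusual grammar
    has_stock_photo: bool       # profile photo matches a known stock image

def bot_score(acct: Account) -> int:
    """Count how many heuristic cues an account triggers."""
    score = 0
    if acct.tweets_per_day > 50:        # implausibly high posting volume
        score += 1
    if acct.grammar_error_rate > 0.3:   # consistently odd grammar
        score += 1
    if acct.has_stock_photo:            # stock image used as profile photo
        score += 1
    return score

def seed_bots(accounts, threshold=2):
    """Flag accounts tripping at least `threshold` cues as the first set of bots."""
    return [a.handle for a in accounts if bot_score(a) >= threshold]
```

Combining several weak cues this way keeps any single noisy signal (one odd tweet, one generic photo) from flagging a legitimate account on its own.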

The second step identifies the accounts to which the first set of bots is linked. Studies suggest that most bot developers connect their bots into clusters to boost retweet counts.
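This cluster expansion can be sketched as a simple graph traversal starting from the seed bots. The retweet-graph representation and function names here are assumptions made for illustration:

```python
# Sketch of the clustering step: since bot developers tend to link their
# bots together to amplify retweets, walking the retweet graph outward
# from known bots can uncover the rest of a cluster.
from collections import deque

def expand_cluster(seed_bots, retweet_graph):
    """Breadth-first walk from known bots over a dict mapping each
    account to the accounts it exchanges retweets with."""
    found = set(seed_bots)
    queue = deque(seed_bots)
    while queue:
        bot = queue.popleft()
        for neighbor in retweet_graph.get(bot, []):
            if neighbor not in found:
                found.add(neighbor)
                queue.append(neighbor)
    return found
```

In practice one would also weight or threshold these links, since real users occasionally retweet bots without being part of any cluster.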

Finally, once a certain number of bots has been found, algorithms can be used to classify them.

While bot detection methods have become more accurate, bot creators are also sharpening their programming skills. The number of influence bots, as well as their degree of sophistication, will likely increase in the future, Galstyan said. New, more complex sets of bots can be expected to engage in advertising and political influence campaigns.

“The overall message of the DARPA challenge is that we need to watch out for what may be coming ahead,” he added. “This is a dynamically evolving problem, and the solutions that work today may become ineffective tomorrow. I believe the next step is to refine our methods and test them by analyzing specific campaigns.”

