
Russia's disinformation campaign during the 2016 US presidential election rocked social media companies like Facebook and Twitter to their core. Now, kids attending the Defcon conference are learning how to create their own bot army.

But the organizers behind the r00tz Asylum, Defcon's kid-friendly event, say there's no cause for alarm.

The goal isn't to launch a new flurry of hoaxes and chaos on social media for the 2020 US presidential election. It's to teach the next generation of voters how easily fraud spreads on social media and to break down the tools foreign actors use to spread disinformation, r00tz co-founder Nico Sell said.

"The kids are now really interested and want a way to engage," Sell said at this week's Defcon hacking conference in Las Vegas. "They hear a lot about fake news out there -- these are things that we want to show them, the exact mechanics of how things really work."

This is the second year that the r00tz Asylum's challenge will be focused on politics, after kid hackers at Defcon 2018 learned how to hack into websites simulating state election results.

This year's challenge is split up into two parts. First, the Voting Village will be teaching kids how to hack simulated campaign finance websites and alter documents. Then the Artificial Intelligence Village will be working with the kids to create a disinformation campaign to spread those forged documents on a simulated social network.

"This is entirely closed course. Nothing, including the bots that the kids write, will be touching anything on the open internet," Win Suen, the AI Village's challenge leader, said.


As the 2020 US presidential election draws near, lawmakers have warned about upcoming disinformation efforts. At a Congressional hearing in July, former special counsel Robert Mueller told lawmakers that Russia was meddling in the 2020 campaign "as we sit here."

"What we're teaching kids to do is surely going to be used by Russians and numerous other countries. This is not rocket science," Sell said.

It won't be on the same scale as Russia's 2016 disinformation campaign, a $35 million effort that saw 13 state actors charged by the US Justice Department. The kids attending the workshop will learn how to create their own bots and code them in a way that dodges spam filter algorithms, said Sven Cattell, head of the AI Village.

The algorithms will be a toned-down version of what social networks like Facebook and Twitter use, because it's supposed to be a learning experience, he noted.

"We mostly want to teach them how easy it is to set up disinformation bots on Twitter and Facebook, and show them how it might function online," Cattell said.

Once the kids know how simple it is to create bots, they'll be more skeptical of online disinformation campaigns on social media, he said.

Disinformation campaigns frequently use bots to boost fake engagement with hoaxes, making it appear as if there is genuine support for fraudulent content.

The challenge

A key part of the r00tz Asylum is making sure that the kids don't take what they learn and then abuse that power.

With each session, the organizers go through the code of ethics and explain to the participants that these are not techniques they should use maliciously. They point out the consequences of hacking, which include jail time, and the village's rules are posted all around the room, Sell said.

Organizers will provide the kids with a coding template and walk them through how to create bots for a simulated social media network composed of real tweets using hashtags related to pets. The goal is to build bots that tweet out a hashtag in attempts to trick the network's "Recommended" algorithm and overtake the pets hashtag with disinformation.
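The tactic the challenge teaches can be illustrated with a toy model. The sketch below is an illustrative assumption, not the actual r00tz template: a simulated network recommends whichever hashtag appears in the most posts, and a handful of bots posting repeatedly can outvote organic traffic on the pets hashtag.

```python
from collections import Counter

class SimNetwork:
    """Toy social network whose 'Recommended' slot is just
    the most frequently used hashtag."""

    def __init__(self):
        self.posts = []  # (author, text, hashtag) tuples

    def post(self, author, text, hashtag):
        self.posts.append((author, text, hashtag))

    def recommended(self):
        counts = Counter(tag for _, _, tag in self.posts)
        return counts.most_common(1)[0][0]

net = SimNetwork()

# Organic traffic: ten real users posting about their pets.
for i in range(10):
    net.post(f"user{i}", "look at my dog", "#pets")

# Three bots (the per-team limit) each flood a competing hashtag.
for bot in ("bot1", "bot2", "bot3"):
    for _ in range(5):
        net.post(bot, "see this forged document", "#scandal")

print(net.recommended())  # → #scandal: 15 bot posts beat 10 organic ones
```

The point of the exercise is how little it takes: a naive most-popular ranking has no notion of who is posting, so volume alone wins.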

This is a real tactic that can be used in disinformation campaigns. Facebook's "Suggested" algorithm has been tricked into recommending hoaxes to people on multiple occasions.

As the challenge goes on, the AI Village's organizers will add more difficult spam filters, and the kids need to figure out ways around them. But the bots created for the challenge likely wouldn't work on social networks today, Cattell says.

"It might have worked in 2007," he said. "Spam detection is more advanced now, it probably would stop all of them."

Each team will be allowed to have three bots. Participants will then be able to watch the results change in real time on a large screen, as if it were a real disinformation campaign.

While the challenge is a scaled-down version of how disinformation spreads, the organizers believe the lessons are just as important.

"What we're doing is somewhat analogous to kiddie go-karts," Suen said. "Everything is done on a closed course, with extra safety features and adult supervision. The course is also a lot easier and more controlled than anything a driver encounters in the real world, but hopefully kids have fun and learn something too."