Today I took a break from work to watch the most recent YouTube video by my favorite Fortnite streamer, Tfue (yes, I like to watch people play video games). Anyone who watches Fortnite videos on YouTube knows that there are usually “V-Buck scams” in the comment section of every popular video. These are comments by scammers promising to hack some in-game currency for you so you can unlock items in the game without paying real money. They are typically phishing schemes or survey-based offer walls that earn the scammer small amounts of money. If the creator of the video doesn’t proactively delete these comments, they become pervasive.

Early V-Buck scams were pretty obviously sketchy to anyone who’s been on the internet for a while:

The thing is, the scammers didn’t have to try very hard. Fortnite’s main audience is kids and teens, and for those viewers, Fortnite content on YouTube might be some of the first video content that they’ve ever truly engaged with. They’re easy targets for someone looking to carry out a low-effort con.

Recently, the scams have become more intricate. The scam accounts, in a very meta move, now play on the fact that “V-Buck Scams” are a meme in the Fortnite community:

You might notice something else about this comment: It has tons of likes and replies. Fake likes from bot accounts give it an initial bump to make it the top comment, where it gets even more likes organically. Why would anyone like this comment, though? Well, look at the replies:

An entire army of shill accounts, varying in name, comment style, and degree of agreement with the original comment, shows up to control the conversation. Since these replies were posted soon after the comment itself, they show up first and drown out any replies from real users. You don’t see those until you click “Show more replies”:

These repliers noticed the scam, but they were also amazed at how well it was orchestrated. Frankly, I’m amazed, too. The social proof of the high number of likes and seemingly real replies lends immense credibility to the original comment.

If you saw that first batch of replies with no other context, would you be able to say immediately, without a doubt, that they were inauthentic? Do you think a young child or teenager would be able to step back and see the ruse, or would they take the bait and visit the website that promises Fortnite glory?

What if these types of fake comments weren’t on a video about Fortnite, but rather a political video? A news clip, a debate, an interview? What if, instead of promising free virtual currency, they were making false claims about a candidate, or promoting extremist or anti-democratic values?

If scammers are willing to go to such lengths to perpetuate an online gaming scam, how much effort would someone put in to control the thoughts and opinions of an entire nation, or to sow division?

The 2018 US midterm election is just 9 days away. In the past year, Facebook and Twitter have responded to controversies about hate speech, fake accounts, political manipulation, echo chamber effects, and more with various countermeasures. It’s debatable how effective those measures have been, but I fear we’re missing the biggest offender in plain sight: YouTube.

YouTube’s reach is absolutely massive. As of mid-2018, it has 1.8 billion logged-in monthly active users (MAU), compared to Facebook (2.2 billion) and Twitter (330 million). The 1.8 billion figure excludes hundreds of millions, possibly over a billion, monthly viewers who do not have accounts, making YouTube the most-used social media site in the world.

It’s also the preferred platform among US teens: According to Pew Research, 85% of teens say they use YouTube, compared to Instagram (72%), Facebook (51%), and Twitter (32%). YouTube is extremely popular with our most vulnerable, impressionable population.

I fear that YouTube comments are being used to control the conversation and manipulate voters of all ages on a massive scale, and the problem is not going away.

What is Google going to do about this, and how long will it take? Will we allow this to be swept under the rug as we all spend hours watching videos on a website where the comments have little to no chance of being provably authentic or organic? Do we trust ourselves to be able to know what’s real and what’s not, and when someone is trying to control the conversation?

Right now, it seems too easy for bad actors to get away with this. And right now is exactly when we need that not to be the case.