Soatok’s Proposal

I’d like to propose a solution to this problem that doesn’t rely on novel cryptography or machine learning techniques to succeed.

The core reason why the prospect of defeating the attackers’ counter-countermeasures seems so bleak is easy to understand if you’re familiar with the six dumbest ideas in computer security.

Specifically: Enumerating Badness.

If your input domain is infinite, you’ll end up doing one of two things:

1. Creating a blacklist of infinite size, OR
2. Missing some badness

So I’d like to propose the antithesis: Let’s enumerate goodness instead.

First, take all content categorized as (or appearing to be) news, and mark it unvetted by default. Put a big scary banner on all such unvetted material, warning users that they may be watching fake news or propaganda.

Next, introduce a team of volunteers who have permission to do two things:

1. Attest that certain news-related videos are trustworthy and accurate (or affirm that they are bullshit).
2. Invite other users to become volunteers.

Some important implementation details:

- Outside the initial seed users (which, for example, could be selected from college professors and graduate students at public universities), every volunteer must be invited by another user.
- Every attestation must be linked back to the volunteer vetter's account, for transparency and accountability.
- If any users are found to be abusing their vetting privilege, they will be removed. If several abusers were invited by the same user, YouTube may also revoke that common parent's ability to invite future users into the nested tree of volunteers. All of their peers should similarly be investigated.
- In order to move from unvetted to vetted status, several positive attestations must be provided by users whose distance from a common parent is either undefined (i.e. different seed users, so different trees entirely) or greater than 2.
- Any edge cases should be escalated past the volunteer vetters to the usual YouTube moderation staff.
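To make the "distance from a common parent" rule concrete, here's a minimal sketch of the invite-tree bookkeeping in Python. The "greater than 2" threshold comes from the rule above; everything else (class and function names, data structures, how seeds are represented) is an illustrative assumption, not an actual implementation:

```python
class InviteTree:
    """Tracks who invited whom, so abuse can be traced back to a common parent.
    Illustrative sketch only; names and structures are assumptions."""

    def __init__(self, seeds):
        # Seed users have no inviter.
        self.parent = {s: None for s in seeds}

    def invite(self, inviter, invitee):
        if inviter not in self.parent:
            raise ValueError("inviter is not a registered volunteer")
        self.parent[invitee] = inviter

    def ancestors(self, user):
        """Chain from the user up to their seed (user itself is index 0)."""
        chain = []
        while user is not None:
            chain.append(user)
            user = self.parent[user]
        return chain

    def distance_from_common_parent(self, a, b):
        """None if a and b descend from different seeds (different trees);
        otherwise the larger of the two distances to their nearest
        common ancestor, counted in invite steps."""
        anc_a, anc_b = self.ancestors(a), self.ancestors(b)
        common = set(anc_a) & set(anc_b)
        if not common:
            return None
        nearest = min(common, key=anc_a.index)
        return max(anc_a.index(nearest), anc_b.index(nearest))


def attestations_independent(tree, vetters, min_distance=3):
    """True if every pair of attesting vetters is either in different trees
    (distance undefined) or at distance greater than 2 from a common parent."""
    for i in range(len(vetters)):
        for j in range(i + 1, len(vetters)):
            d = tree.distance_from_common_parent(vetters[i], vetters[j])
            if d is not None and d < min_distance:
                return False
    return True
```

A quorum of attestations would only count toward vetted status if `attestations_independent` holds for the whole set, which is what prevents one compromised inviter from vouching a video through with their own sock puppets.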

Optionally, you can partition vetters into areas of expertise, but that gets really complicated fast.

Will anyone volunteer?

Compare the success of Encyclopædia Britannica with that of Wikipedia. When was the last time any of us looked up an article in Encyclopædia Britannica?

Wikipedia’s success came from hordes of volunteers working without any financial incentive.

Can we incentivize users to volunteer?

I would strongly advise against monetary incentives.

YouTube can gamify this by creating a scoring system for vetters who successfully attest content as trustworthy (with a mildly stronger incentive for identifying unflagged content), with points only awarded upon achieving (and maintaining) quorum and therefore vetted status.
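The quorum-gated award could look something like this sketch. The quorum size, point values, and function name are all invented for illustration; the only constraints taken from the proposal are that points are withheld until quorum is reached and that catching unflagged content pays slightly more:

```python
QUORUM = 3                  # assumed number of independent attestations required
POINTS_VETTED = 10          # assumed base award once a video becomes vetted
POINTS_UNFLAGGED_BONUS = 5  # mildly stronger incentive for unflagged content


def award_points(attesting_vetters, was_unflagged):
    """Return {vetter: points} once quorum is reached, else no awards.
    Every vetter in the quorum shares the same award, so there is no
    race to be 'first' that would encourage sloppy attestations."""
    if len(attesting_vetters) < QUORUM:
        return {}
    base = POINTS_VETTED + (POINTS_UNFLAGGED_BONUS if was_unflagged else 0)
    return {vetter: base for vetter in attesting_vetters}
```

If a video later loses quorum (attestations withdrawn, or a vetter removed for abuse), the same function makes it easy to recompute awards to zero, matching the "maintaining quorum" requirement.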

Gamification can include leaderboards, social badges, etc. If StackOverflow is a reasonable precedent, it may even lead to greater career success for users who are proficient at picking out bullshit from the content farms.

What about anything that slips through the cracks?

The default state is unvetted. If the content is sneaky enough to evade volunteer detection, users will be instructed to distrust it until it’s been vetted. (Maybe add a “please vet me” button?)

If the volume of content eclipses the available pool of volunteer vetters, most of it will remain unvetted and be marked as potentially fake news.

Are university professors the best seed to select?

I don’t know of a better one that would lead to an initial pool of users capable of fact-checking. Feel free to replace that example with something that makes more sense.

It might not matter that much. The important thing is recording “who invited whom?” in a tree, so that common sources of abuse/misuse can be identified and cut out. This is a proactive defense against coordinated semi-authentic behavior that attempts to evade detection with malicious fake vetting.