The approach would supplement user-based content flagging with machine learning. The automated system would score content by the likelihood that it's objectionable, helping human moderators decide which material to cut. It'd weigh factors like the number of users objecting to a piece of content, as well as the age of each complaining account (to discourage harassment and trolling). The system would also study flags that proved valid, learning to make better-informed decisions about objectionable content over time.
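The patent doesn't spell out the scoring math, but the signals it describes (flag volume, weighted by reporter account age) could be sketched as a simple heuristic. Everything below, including the function name, the field names, and the 30-day trust threshold, is a hypothetical illustration, not Facebook's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Flag:
    """One user report against a piece of content (hypothetical model)."""
    account_age_days: int  # age of the reporting account


def objectionability_score(flags, min_trusted_age_days=30):
    """Score content by its flags, weighting each flag by reporter account age.

    Flags from brand-new accounts count for less, which discourages
    coordinated harassment or trolling campaigns against legitimate content.
    """
    score = 0.0
    for flag in flags:
        # Weight ramps from 0 (brand-new account) up to 1 (older than threshold).
        weight = min(flag.account_age_days / min_trusted_age_days, 1.0)
        score += weight
    return score
```

In this toy version, two flags from 60-day-old accounts and one from a 15-day-old account would yield a score of 2.5 rather than 3.0. A real system would presumably learn these weights from flags that moderators upheld, rather than hard-coding them.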

This is just a patent application, and there's no guarantee that Facebook will either secure the patent or use it on its social network. A spokeswoman tells The Verge that the company regularly applies for patents it doesn't use, and that this content removal plan shouldn't be interpreted as a clue to its strategy.

However, the patent's existence shows that Facebook has been thinking about better ways to pull content for a while, and that the issue is only now coming to a head following the US election. Why hasn't it implemented this technology, though? There are a few reasons why the company might have been hesitant: the system won't help if someone actually believes the fake news, if their Facebook habits make flagged content unlikely to surface, or if they simply ignore the warnings. Facebook might also have been reluctant to do anything that would fuel accusations of bias. Nonetheless, the patent application doesn't do the company any favors -- it implies that Facebook simply chose not to implement a technical solution for fake news, even if there were perfectly valid reasons for holding off.