There is a narrative emerging around Facebook that implies the social-networking giant cannot prevent the spread of fake news, Russian political messages, and ads targeting hate groups on its platform. Like Dr. Frankenstein, Facebook created a monster! The algorithms have already won! Facebook’s sheer scale means it could never successfully police the gazillion pieces of content its 2 billion users create and share every second. We are doomed to live in a dystopian post-truth world of propaganda, dark ads, and artificial intelligence.

This echoes Facebook’s own defense against the rising backlash it faces. If Facebook is beholden to algorithms, it cannot be held fully responsible for the activity on its network. Announcing new tools last week to police things like Russia’s purchase of political ads, CEO Mark Zuckerberg said Facebook could do better, but tried to set expectations: “I’m not going to sit here and tell you that we’re going to catch all bad content in our system.” He framed his reasoning in terms of freedom of speech: “We don’t check what people say before they say it, and frankly, I don’t think society should want us to.”

There’s a small problem with this argument: Facebook has repeatedly shown it can police content on its platform, particularly when doing so affects its $27 billion business. “We’re just a platform” is a convenient way to avoid taking full responsibility for an increasingly serious set of problems.

In 2011, when Facebook decided that games from companies such as Zynga were disrupting the way people used Facebook, it limited how many messages gaming companies could send to Facebook’s users. The dominance of gaming, and of Zynga in particular, immediately declined.

In 2012, when independent content apps like SocialCam and Viddy began annoying users, Facebook began demoting content related to them. Their usage dropped, prompting each company to sell and ultimately shut down.

In 2013, when Facebook decided that the curiosity-gap headlines and clickbait articles offered by viral websites like Upworthy (“the fastest growing media site of all time”) and ViralNova were wearing thin, it changed the algorithm for News Feed, the main river of content for Facebook users. Traffic to ViralNova and Upworthy dropped dramatically.

On matters of taste, Facebook has long banned nudity from its platform, “because some audiences within our global community may be sensitive to this type of content.”

On the advertising side, the company has a clearly defined system to ensure that alcohol ads comply with various national regulations, as former product manager Antonio Garcia Martinez recently wrote for WIRED.

It won’t be as easy for Facebook to ferret out sophisticated “bad content” as it was to demote FarmVille notifications. For example, the Russian campaign purchased ads that both supported and criticized the Black Lives Matter movement, according to the Washington Post. Martinez, the former product manager, describes the problem as “playing whack-a-mole.”

Facebook did not respond to a request for comment. But the company has shown it can tackle complicated political situations. The company has complied with requests from leaders of Vietnam and other countries to censor content critical of those governments. Facebook reportedly created a censorship tool that suppresses posts for users in certain geographies as a way to potentially work with the Chinese government. The company has reportedly used the same technology it uses to identify copyrighted videos to identify and remove ISIS recruitment material.

At a 2015 conference, Facebook head of product Chris Cox repeatedly denied that changes to Facebook’s content algorithm were subjective: Facebook does not have a “lever” or “dial” which it uses to control what content its users see in their News Feeds, he argued.