In dealing with election-related disinformation, YouTube faces a formidable task. More than 500 hours of video are uploaded to the site every minute. The company has also grappled with concerns that its recommendation algorithms may push people toward radical and extremist views by showing them more of that type of content.

In its blog post on Monday, YouTube said it would ban videos that gave users the wrong voting date or spread false information about participating in the census. It said it would also remove videos that spread lies about a political candidate’s citizenship status or eligibility for public office. As an example of a serious risk, YouTube cited a video technically manipulated to make it appear that a government official was dead.

The company added that it would terminate YouTube channels that tried to impersonate another person or channel, conceal their country of origin, or hide an association with the government. Likewise, videos that artificially inflated views, likes, comments or other metrics through automated systems would be taken down.

YouTube is likely to face questions about whether it applies these policies consistently as the election cycle ramps up. Like Facebook and Twitter, YouTube faces the challenge that there is often no “one size fits all” method of determining what amounts to a political statement and what kind of speech crosses the line into public deception.

Graham Brookie, the director of the Atlantic Council’s Digital Forensic Research Lab, said that while the policy gave “more flexibility” to respond to disinformation, the onus would be on YouTube for how it chose to respond, “especially in defining the authoritative voices YouTube plans to upgrade or the thresholds for removal of manipulated videos like deepfakes.”