Got all that? Great.

I don’t know why you would build a whole public ads library, require a certain subset of posts to be labeled as ads, and then exempt those ads from your ads library. I also don’t know why you would invite a fresh nine months’ worth of news cycles over unlabeled viral political ads from influencers, false political ads from influencers that are not fact-checked due to the presence of candidate political speech, and so on. The situation would seem to pit the company’s integrity teams against their advertising teams, with the advertising teams winning all the most important battles.

But set all that aside for a moment. Who should be setting all these policies in the first place? Should it be Facebook, or should it be someone else? Someone like, oh, say, the government?

Well, that’s what Facebook says it wants. Mark Zuckerberg said as much during a trip over the weekend to Europe.

“Even if I’m not going to agree with every regulation in the near term, I do think it’s going to be the kind of thing that builds trust and better governance of the internet, and will benefit everyone, including us, over the long term,” Zuckerberg said at the Munich Security Conference on Saturday.

He followed up with an op-ed in the Financial Times on Sunday, asserting that Facebook needs “more oversight and accountability.”

Facebook also released a white paper (PDF) outlining the approach it would like to see regulators take to creating legal standards for content moderation. The approach it would like to see, you may not be surprised to learn, is one that largely follows the avenues Facebook has already taken. That includes: requiring public reporting on policy enforcement actions; reducing the visibility of content that violates standards; and blocking attempts to regulate speech based on the content of that speech. (The paper does not address how countries might regulate political ads, though Zuckerberg’s statement that posts on Facebook ought to be regulated like something in between a telecom company and a newspaper suggests the answer is “very lightly.”)

European regulators, for their part, dismissed Facebook’s white paper so quickly that you wondered if they had even bothered to read it. Here’s Valentina Pop in the Wall Street Journal:

Thierry Breton, the EU commissioner for internal market and services, who met with Mr. Zuckerberg on Monday, told reporters afterward that the Facebook white paper “is too low in terms of responsibility. There are interesting things, but it’s not enough.”

He said the commission will decide by the end of the year what kind of liability to impose on online platforms. “I told him the comparison with telecoms is not relevant. A message [on Facebook] reaches hundreds of millions. On telcos you have one-on-one communications.”

Even if you find Facebook’s suggested regulations self-serving, they do highlight important trade-offs that states will have to make as they consider new laws. Consider, for example, the increasingly popular idea of legally requiring platforms to remove bad posts within 24 hours. Facebook points out, rightly I think, that this creates the wrong incentives:

A requirement that companies “remove all hate speech within 24 hours of receiving a report from a user or government” may incentivize platforms to cease any proactive searches for such content, and to instead use those resources to more quickly review user and government reports on a first-in-first-out basis. In terms of preventing harm, this shift would have serious costs. […] Companies focused on average speed of assessment would end up prioritizing review of posts unlikely to violate or unlikely to reach many viewers, simply because those posts are closer to the 24-hour deadline, even while other posts are going viral and reaching millions.

Here Facebook’s preferred solution — requiring companies to take down bad posts that hit a certain threshold of virality — strikes me as more likely to create a positive effect.

Everyone who posts on the internet, and lives in the world that the internet creates, has a rooting interest in both platforms and nation states finding a good balance. And even as we watch Facebook struggle to articulate a coherent position on political ads, we see nation states adopting awful regulations that serve only to censor their citizens. Here’s Eileen Yu from over the weekend in ZDNet:

Singapore’s Ministry of Communications and Information (MCI) on Monday instructed Facebook to block access to the States Times Review (STR) page after the latter repeatedly refused to comply with previous directives issued under POFMA. The “disabling” order, outlined under Section 34 of the Act, requires Facebook to disable access for local users. […]

The spokesperson said: “We believe orders like this are disproportionate and contradict the government’s claim that POFMA would not be used as a censorship tool. We’ve repeatedly highlighted this law’s potential for overreach and we’re deeply concerned about the precedent this sets for the stifling of freedom of expression in Singapore.”

Among the stories that had outraged Singapore’s government was … a story about two critics of the government being arrested. (And it’s not just Singapore — see also these brand-new rules for social media in Pakistan.)

It’s easy to root for tech platforms to be regulated. It’s harder to accept that those regulations, when they finally do appear, are so often terrible.