Mark Zuckerberg went on a media tour today to explain Facebook’s role in the Cambridge Analytica data scandal and what the company is going to do about it. He said more or less the same things to everyone from Recode to The New York Times to CNN, but one answer stuck out: in response to CNN’s Laurie Segall asking if he was worried about Facebook facing regulations from governments around the world, he replied: “I actually am not sure we shouldn’t be regulated. I think in general technology is an increasingly important trend in the world. I think the question is more what is the right regulation rather than ‘yes or no should we be regulated?’”

Following up, he said,

There is transparency regulation that I would love to see. If you look at how much regulation there is around advertising on TV and print, it’s just not clear why there should be less on the internet. You should have the same level of transparency required. I don’t know if the bill is going to pass, I know a couple of senators are working really hard on this. But we’re committed and we’ve actually already started rolling out ad transparency tools that accomplish most of the things that are in the bills people are talking about today. This is an important thing. People should know who is buying the ads they see on Facebook, and you should go to any page and see all the ads that people are running to different audiences.

Zuckerberg is apparently referring to the Honest Ads Act, which would provide some of this transparency. But he didn’t mention measures like the GDPR in Europe, which radically changes how companies can use and store personal data.

“The question is more what is the right regulation?”

Speaking to Wired, Zuckerberg made a different kind of pivot around the same question, speaking more hypothetically, “There are some really nuanced questions, though, about how to regulate which I think are extremely interesting intellectually.” Let us gaze at this quote, wherein Mark Zuckerberg likens the distribution of hate speech on his platform to, um, dust in chicken processing plants:

My understanding with food safety is there’s a certain amount of dust that can get into the chicken as it’s going through the processing, and it’s not a large amount, it needs to be a very small amount, and I think there’s some understanding that you’re not going to be able to fully solve every single issue if you’re trying to feed hundreds of millions of people — or, in our case, build a community of 2 billion people — but that it should be a very high standard, and people should expect that we’re going to do a good job getting the hate speech out.

Zuckerberg also suggested to Wired that now that companies are better able to identify offensive content using AI, perhaps that leads to a level of regulated responsibility to pre-vet content on their platforms:

Now that companies increasingly over the next five to 10 years, as AI tools get better and better, will be able to proactively determine what might be offensive content or violate some rules, what therefore is the responsibility and legal responsibility of companies to do that? That, I think, is probably one of the most interesting intellectual and social debates around how you regulate this.

The issue, of course, is that it’s not just an “interesting intellectual and social debate.” It’s a real thing that’s affecting democracies and human lives around the world. When it came to pinning down what those regulations might look like, Zuckerberg said he thinks “guidelines are much better than dictating specific processes,” pointing to how he thought laws against hate speech in Germany “backfired.” It’s an interesting discussion, and well worth reading the Wired piece in full.

“There’s a certain amount of dust that can get into the chicken.”

In any event, with investigations looming around the world due to Cambridge Analytica, it’s clear that the question of further regulation is on the table.