Facebook has finally settled on a simple narrative to explain its lax response to protecting users’ data: The company was so focused on making the world a better place that it didn’t realize it was making it worse. “Facebook is an idealistic and optimistic company,” CEO Mark Zuckerberg wrote in prepared testimony released in advance of his appearance before Congress on Tuesday, a showdown that has been expected ever since it was revealed in March that the firm Cambridge Analytica had misused the personal data of tens of millions of Facebook users during the 2016 election. “For most of our existence, we focused on all the good that connecting people can bring. ... But it’s clear now that we didn’t do enough to prevent these tools from being used for harm as well.”

Zuckerberg’s prepared testimony tries to do a lot—including shifting most of the blame for the Cambridge Analytica scandal onto Professor Aleksandr Kogan, who harvested Facebook data. But Facebook’s purported idealism is its center of gravity, the thesis around which the company’s other arguments revolve. The problem, Zuckerberg argues, isn’t Facebook per se. It’s that bad actors have abused the platform. To borrow a line from Bill Clinton, there is nothing wrong with Facebook that cannot be cured by what is right with Facebook. The important thing is to prevent hostile powers, scam artists, and opportunists from turning all of Facebook’s wonderful tools—which are otherwise used to further peace, love, and understanding—against its users.

Zuckerberg has a lot to answer for. His massive social media network has been used to promote divisive, illiberal politicians. It has spread fake news, and abetted at least one genocide. Most of all, it has compromised the integrity of the personal data of its billion-plus users. Zuckerberg would like to characterize these scandals as unforeseen byproducts of an otherwise noble mission to make the world more open and connected. The task for senators is to expose Facebook’s core business, which is using the unprecedented amount of personalized data at its disposal to make money.

The actions that Facebook and Zuckerberg have taken since the Cambridge Analytica story broke last month have all been about obscuring this central truth—and keeping regulators at bay. Zuckerberg’s apology tour has highlighted Facebook’s role as a social network that brings people together. Anticipating the line he would take in his prepared testimony, Zuckerberg told reporters in a rare press conference in early April: “For the first decade, we really focused on all the good that connecting people brings. But it’s clear now that we didn’t do enough. We didn’t focus enough on preventing abuse. ... That goes for fake news, foreign interference in elections, hate speech, in addition to developers and data privacy. We didn’t take a broad enough view of what our responsibility is, and that was a huge mistake.”

That list of failings is long and damning. Zuckerberg’s strategy has been to assert that Facebook is making changes that will get it back to what it does best. The company has said that it will hire up to 20,000 people to focus on its data and security problems. It says it will limit the amount of information it shares with data and advertising firms. Zuckerberg has signaled support for the Honest Ads Act, which would regulate political ads on social media the way they are regulated in other media (a move that is, as far as tech regulation goes, really the bare minimum). The company has also shown some willingness to apply the privacy standards that will go into effect in Europe next month to users around the world.