If there remained any doubt that Facebook’s business practices intentionally compromise users’ privacy and recklessly undermine democratic norms, it was put to rest on Monday, when the Digital, Culture, Media and Sport Committee of the British House of Commons issued a hundred-and-eight-page report, incongruously titled “Disinformation and ‘fake news.’ ” In a drama that played out over a few days in November, the committee’s chair, Damian Collins, a Tory M.P., had outwitted Facebook’s legal team when he summoned an American app developer named Ted Kramer to Parliament. At the time, Kramer’s company, Six4Three, was embroiled in a lawsuit with Facebook, and the documents that he just happened to have access to while on a business trip to England—and which Collins just happened to know about—were obtained during the discovery process. Although the documents were under seal in the United States, Collins claimed that they were fair game in the U.K., and threatened to arrest Kramer if he didn’t turn them over. Their contents are incorporated into Monday’s report, which gets at its nominal subject—the dissemination of propaganda and intentionally divisive content on social media—by unmasking the ways that Facebook, in particular, has facilitated it.

Six4Three sued Facebook for breach of contract, in 2015, after the social network cut off the company’s access to the profiles of Facebook users’ friends. Six4Three’s Pikinis app, which prospected for photographs of women in bikinis from pictures shared on Facebook, stopped working after the data stream dried up. At the time, Facebook claimed that its new policy was intended to protect users’ privacy. It might have done so, too, if the company had adhered to it. But, as Kramer and his lawyers found out during the discovery process, Facebook continued to allow certain “white-listed” companies—Netflix, Airbnb, and Lyft, among them—to harvest friends’ data. Facebook considered granting companies this access as long as they spent at least two hundred and fifty thousand dollars a year advertising on the platform or had something of value to trade in exchange. The report cites an e-mail from March, 2015, in which Facebook executives discuss giving Tinder white-listed status in return for the use of the term “Moments,” the name Facebook gave its new photo-sharing app three months later. “The idea of linking access to friends’ data to the financial value of the developers’ relationship with Facebook was a recurring feature of the documents,” the report explains.

The report also reveals that Facebook executives conspired to bait Android phone users into agreeing to hand over their text messages and call logs during a software upgrade, even though it was “a pretty high-risk thing to do from a PR perspective,” Michael LeBeau, a Facebook product manager, wrote in an e-mail. Facebook’s public-relations strategy, according to the report’s authors, was “to make it as hard as possible for users to know that this was one of the underlying features of the upgrade of their app.”

Additionally, a V.P.N. app that Facebook bought from Israeli developers, which was ostensibly intended to keep users’ browsing activities from being tracked, was actually sharing their Web wanderings with Facebook, so that the company could “gain insights into the products and services people value, and build better experiences.” Those insights included identifying the apps and services that were most popular among Facebook users, which gave the company a predatory advantage: its executives could either acquire those apps, as they did with WhatsApp and Instagram, or they could simply shut them down, as they did with the video-advertising system LiveRail. This leads Collins and his colleagues to wonder if Facebook might be exposed to a racketeering lawsuit in the United States. Historically, the RICO Act has been used to go after mobsters, not tech platforms or their executives. But, early in the report, the authors call Facebook’s chairman and C.E.O., Mark Zuckerberg, and his team “digital gangsters,” so there’s a certain logic to it.

The chicanery of Facebook executives, combined with their allegiance to profit over people and their continued insistence that Facebook is merely a neutral platform, created the conditions through which so much propaganda has been disseminated online. Cambridge Analytica, which purchased the data of eighty-seven million Facebook profiles from a third-party app developer to use on behalf of the Ted Cruz and Donald Trump Presidential campaigns, was enabled by Facebook—the company not only supplied the raw material for Cambridge Analytica’s algorithms but was also the medium for its targeted messaging. Some of that messaging was intended to discourage certain Americans from voting and to inject discord into the electorate. Facebook, too, along with Twitter and Instagram, was an ideal vector for similar work undertaken by the Russian Internet Research Agency. In September, 2017, Facebook’s then chief security officer, Alex Stamos, told Facebook executives that malign Russian-state actors were still active on the site—information that those executives chose to ignore, according to an investigation by the New York Times. A few months later, other members of Facebook’s management appeared to have, as the report puts it, “misled” the Collins committee when they testified that outside agents were not using the platform to influence elections.

Facebook was also used to surreptitiously influence the Brexit vote in the U.K. In arcane detail, the committee explains why it believes an obscure Canadian company called Aggregate I.Q. (A.I.Q.) harvested Facebook users’ profiles and linked them to voter files in order to “precisely target” them with pro-Leave messages. According to the report, “The work of [A.I.Q.] highlights the fact that data has been and is still being used extensively by private companies to target people, often in a political context, in order to influence their decisions. It is far more common than people think.” Indeed, in a ten-month period during 2018, an anonymous, “highly misleading,” pro-Brexit Web site called Mainstream Network spent an estimated two hundred and fifty-seven thousand pounds on Facebook ads that reached nearly eleven million users. “Mainstream Network is yet another, more recent example of an online organisation [sic] seeking to influence political debate . . . and there is no good case for [it] to hide behind anonymity,” the report says. Zuckerberg’s proxy in London, Richard Allan, has not responded to the committee’s requests to reveal who is behind Mainstream Network—which is to say, who is paying for its Facebook ads—but he has promised to do so. “We consider Facebook’s response generally to be disingenuous and another example of Facebook’s bad faith,” the report’s authors wrote.

In response to the report, Karim Palant, Facebook’s public-policy manager in the U.K., released a statement, which said, “We are open to meaningful regulation and support the committee’s recommendation for electoral law reform. . . . No other channel for political advertising is as transparent and offers the tools that we do. We also support effective privacy legislation that holds companies to high standards in their use of data and transparency for users.” Palant added, “While we still have more to do, we are not the same company we were a year ago.”

If a RICO prosecution seems unlikely, how will governments hold Facebook to account? Facebook is global and the Internet is borderless, but laws are not. Strict new data-protection laws in California, for example, which, among other things, empower residents to ask for their data to be erased and to object to its sale, stop at the state line. The General Data Protection Regulation issued by the European Union, which went into effect last year, covers only E.U. residents. And, as comprehensive as it is, the G.D.P.R. doesn’t cover “inferred data”—the assumptions made about individuals by computer models that are then passed on to advertisers. One popular example of this is Facebook’s “lookalike audiences” tool, which categorizes people on the basis of their similarity to others with analogous interests and traits. The Los Angeles Times reported on Thursday that Facebook continues to allow advertisers to search for and target “hundreds of thousands of users” who “the social media firm believes are curious about topics such as ‘Joseph Goebbels,’ ‘Josef Mengele,’ ‘Heinrich Himmler,’ the neo-nazi punk band Skrewdriver and Benito Mussolini’s long-defunct National Fascist Party.”