What Happened?

On March 17, the New York Times revealed that Cambridge Analytica, the British data-analysis firm with ties to Robert Mercer and Stephen K. Bannon that was hired by the Trump campaign, “harvested private information from the Facebook profiles of more than 50 million users without their permission.” This set off a firestorm in the U.S. and the U.K. as regulators announced they would get to the bottom of what went wrong. Sen. Ron Wyden asked Facebook a series of hard-hitting questions. Massachusetts Attorney General Maura Healey announced an investigation into the matter, followed by the New York attorney general. And the U.K.’s information commissioner, Elizabeth Denham, said she would seek a warrant to search Cambridge Analytica’s computers. This in turn sent Facebook stock plunging: down nearly 7 percent by the market’s close on Monday, March 19, and down nearly another 2 percentage points on Tuesday, March 20. On Monday night, the New York Times revealed that Facebook’s chief security officer, Alex Stamos, would step down after deep internal disagreement over the way the firm handled concerns about misinformation in the 2016 elections.

A Breach Of Trust, If Not A Computer

The data that Cambridge Analytica obtained seems to have come from Aleksandr Kogan, a researcher at Cambridge University who convinced hundreds of thousands of Facebook users to take a Facebook-linked personality quiz—thereby granting Kogan access, through Facebook’s developer platform, to a treasure trove of user data. Kogan then shared this information with Cambridge Analytica. This was reported as a “breach” in the Times, which prompted security experts to explain that this was categorically distinct from the kind of breach that Equifax suffered, where an intruder used technical trickery to gain unauthorized access to the firm’s networks, unbeknownst to the firm.

Kogan’s access to the data (if not his later use) was known to Facebook and seemingly consistent with Facebook’s developer application programming interface (API) at the time. This is how Kogan was able to access 50 million user profiles through only a few hundred thousand quiz-takers. You take Kogan’s quiz, and a thousand of your closest friends are also scooped up.
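The arithmetic behind that multiplier is simple. The numbers below are illustrative assumptions rather than reported figures: the Times put the total at more than 50 million profiles from hundreds of thousands of quiz-takers, and the sketch shows how a plausible average friend count bridges the two.

```python
# Back-of-the-envelope sketch of the friend-graph multiplier described above.
# Both inputs are illustrative assumptions, not figures from any disclosure.
quiz_takers = 300_000         # "a few hundred thousand" app installers
avg_friends_exposed = 170     # assumed average friends scooped up per installer

# Each installer yields his or her own profile plus those of exposed friends.
profiles_reached = quiz_takers * (1 + avg_friends_exposed)
print(f"{profiles_reached:,} profiles")  # 51,300,000 profiles
```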

Facebook’s chief security officer, Stamos, pointed this out in tweets that have since been deleted:

Here are @alexstamos now deleted tweets on the app Cambridge Analytica used to harvest millions of Facebook users' data. pic.twitter.com/jPwRHUyW1w — april glaser (@aprilaser) March 17, 2018

In other words: Don’t worry everyone, Cambridge Analytica didn’t steal the data; we were giving it out.

And they were. As Ben Thompson notes, an old Facebook developer page shows that Facebook’s API allowed developers access not only to user account information but also to huge amounts of friend account information—fields like “friends_interests,” “friends_religion_politics” and much more.

Expect this developer page to come up again in potential litigation and legislative hearings. It shows that Kogan did not need to get Facebook data through the back door, because he could waltz in through the front door—the door Facebook built for developers. This was not a breach of Facebook’s network. But it was a breach of users’ trust, general expectations and perhaps also Facebook’s terms of service. (Indeed, Facebook’s deputy general counsel, Paul Grewal, has posted that the firm takes the position that Kogan’s sharing the data with Cambridge Analytica did violate the site’s platform policies.)
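To make the front door concrete: under version 1 of Facebook’s Graph API, an app requested friend data simply by listing the relevant permissions in its OAuth scope. The sketch below, with a placeholder app ID and redirect URI, assembles such an authorization URL using permission names from the developer page cited above; it is a schematic reconstruction, not a working request against today’s API.

```python
from urllib.parse import urlencode

# Schematic reconstruction of a v1-era OAuth dialog URL. The app ID and
# redirect URI are placeholders; the friends_* names come from the old
# developer page discussed above.
APP_ID = "1234567890"
REDIRECT_URI = "https://example.com/auth"
scope = ",".join([
    "email",
    "user_interests",
    "friends_interests",          # friends' likes and interests
    "friends_religion_politics",  # friends' religious and political views
])

params = {"client_id": APP_ID, "redirect_uri": REDIRECT_URI, "scope": scope}
dialog_url = "https://www.facebook.com/dialog/oauth?" + urlencode(params)
print(dialog_url)
```

The point of the sketch is that nothing here is covert: the friend-data permissions sat in the request alongside ordinary ones like email.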

What Laws Might Apply?

There are several laws that might plausibly give rise to legal claims against Facebook, Kogan or Cambridge Analytica. Without more information it is difficult to say which of these, if any, might actually lead to a viable legal claim, but each one merits further study. (I am leaving aside for now the potential claims under British and European law, but those add to this list considerably.)

1. Computer Fraud and Abuse Act

Perhaps the most maligned law in the world of technology policy, the Computer Fraud and Abuse Act (CFAA) is a mess. It provides criminal and civil penalties for unauthorized access to computer networks. The CFAA is the reason many lawyers say hack-backs are an unlawful response to a breach. And the CFAA was the blunt instrument used to prosecute the late Aaron Swartz for obtaining access “without authorization and in excess of authorized access” to JSTOR through the Massachusetts Institute of Technology’s network to download a large number of academic journal articles. So, can the CFAA be used to hold Cambridge Analytica or Kogan responsible for their actions?

It seems a stretch, principally because the statute focuses on “authorization.” Kogan had authorized access to the Facebook data he harvested. The Ninth Circuit recently held that “a defendant can run afoul of the CFAA when he or she has no permission to access a computer or when such permission has been revoked explicitly”—but not when that person merely overstays his or her welcome. Kogan’s access to Facebook data has since been revoked, but it was authorized at the time he collected the data.

The CFAA also penalizes a user who “exceeds authorized access”; this could get Kogan in trouble. But this term is defined in the statute as accessing “a computer with authorization and to use such access to obtain or alter information in the computer that the accesser [sic] is not entitled so to obtain or alter.” This seems an unlikely fit for Kogan’s actions because Kogan obtained access to user data in precisely the ways that Facebook’s API anticipates and did not gain access to anything more than what Facebook allowed. The fact that Kogan may have then gone on to use that data in ways that violated Facebook’s developer policies—as Facebook alleges—does not prove that he exceeded authorized access for the purposes of the CFAA. (Indeed, the Ninth Circuit recently held in Oracle v. Rimini that violating terms of service alone is not enough to hold someone criminally liable under the CFAA.)

Others with more expertise will hopefully weigh in on the possibility of CFAA charges against Kogan. But if charges are brought against Kogan, they will be precisely the kinds of charges that have led to calls to reform the CFAA, which many believe gives far too much power to an ambitious prosecutor to see a CFAA violation in any computer act.

2. State-level Computer Crime Laws

There are many different state computer abuse laws, but at least one, California’s Comprehensive Computer Data Access and Fraud Act (CDAFA), is worth reviewing. The CDAFA is similar to the CFAA in certain respects, but where the CFAA focuses on computer access, the CDAFA focuses on unauthorized use—either the unauthorized taking or misuse of information. The CDAFA states that a person is guilty of a public offense if she:

(1) Knowingly accesses and without permission alters, damages, deletes, destroys, or otherwise uses any data, computer, computer system, or computer network in order to either (A) devise or execute any scheme or artifice to defraud, deceive, or extort, or (B) wrongfully control or obtain money, property, or data. [or] (2) Knowingly accesses and without permission takes, copies, or makes use of any data from a computer, computer system, or computer network, or takes or copies any supporting documentation, whether existing or residing internal or external to a computer, computer system, or computer network. (Emphasis added.)

This language seems a better fit for Kogan’s actions, where he had exactly as much access as Facebook allowed, but used that data in ways that could be deceptive or fraudulent.

The CDAFA creates a cause of action for “the owner or lessee of the computer, computer system, computer network, computer program, or data who suffers damage or loss by reason of a violation.” One could imagine that Facebook and Facebook’s users fit this definition. The law carries criminal penalties—including felony charges leading to imprisonment for unauthorized computer access that causes damage greater than $5,000.

But I’m not sure much will come of CDAFA claims. The recent decision in Oracle is also instructive here. While the trial court in that case found that a client’s use of Oracle’s product in violation of the terms of service could lead to penalties under the CDAFA, on appeal the Ninth Circuit found that “taking data from a website, using a method prohibited by the applicable terms of use, when the taking itself generally is permitted, does not violate the CDAFA.”

In addition to laws about unauthorized access and use, some states also have breach notification laws. The California Data Breach Notification Law, for example, requires firms doing business in California involving user data to notify users in the event of a “breach of the security of the system.” But this is unlikely to apply here, given that Kogan gained access to Facebook data by asking users to authenticate his app—not by breaching any corporate networks.

3. U.S. Common Law Claims (Contract & Tort)

Facebook could bring a simple breach of contract claim or a tort claim against Kogan for fraudulently violating the website’s terms of service for developers. The firm suggests as much in its blog post on Friday, noting that Kogan “lied” to the firm. Which provision of Facebook’s developer terms did Kogan violate? My best guess is Paragraph 3.10:

Don't transfer any data that you receive from us (including anonymous, aggregate, or derived data) to any ad network, data broker or other advertising or monetization-related service.

If Facebook can prove a breach of contract claim, however, it is not clear what remedy the firm could obtain. One unfortunate possibility is that a court could merely prohibit Kogan from using Facebook.

Users, too, might sue Facebook for violating the promises the firm made to them about their data, but the fact that users gave their Facebook data to Kogan, coupled with this all-caps disclaimer from Facebook’s user terms of service, makes those claims unlikely to succeed, at least against Facebook:

WE TRY TO KEEP FACEBOOK UP, BUG-FREE, AND SAFE, BUT YOU USE IT AT YOUR OWN RISK. WE ARE PROVIDING FACEBOOK AS IS WITHOUT ANY EXPRESS OR IMPLIED WARRANTIES INCLUDING, BUT NOT LIMITED TO, IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT. WE DO NOT GUARANTEE THAT FACEBOOK WILL ALWAYS BE SAFE, SECURE OR ERROR-FREE OR THAT FACEBOOK WILL ALWAYS FUNCTION WITHOUT DISRUPTIONS, DELAYS OR IMPERFECTIONS. FACEBOOK IS NOT RESPONSIBLE FOR THE ACTIONS, CONTENT, INFORMATION, OR DATA OF THIRD PARTIES, AND YOU RELEASE US, OUR DIRECTORS, OFFICERS, EMPLOYEES, AND AGENTS FROM ANY CLAIMS AND DAMAGES, KNOWN AND UNKNOWN, ARISING OUT OF OR IN ANY WAY CONNECTED WITH ANY CLAIM YOU HAVE AGAINST ANY SUCH THIRD PARTIES.

Since Kogan was both a user and a developer, these user terms of service apply to him as well. This might mean that Facebook will ask Kogan or Cambridge Analytica to indemnify the firm in the event of a lawsuit against the tech company.

One would need to spend more time with Facebook’s terms of service to know if these contract and tort claims have legs, but I expect that many enterprising lawyers are doing just that.

4. Federal Trade Commission Rules

On Tuesday, March 20, Bloomberg reported that the Federal Trade Commission is investigating whether Facebook violated the terms of a 2011 consent decree between the firm and the FTC. In 2011, Facebook and the FTC entered into an agreement in response to agency complaints about the online firm’s privacy practices. The settlement requires Facebook to be more transparent about user privacy policies and to not deceive users about how their data will be used. Penalties for violating the decree can rise to $40,000 per day per violation. David Vladeck, who was the director of the FTC’s Bureau of Consumer Protection at the time of the consent decree, told the Washington Post over the weekend that Facebook’s practices may amount to a violation of the decree.
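The scale those numbers imply is striking. The calculation below is purely illustrative and rests on an aggressive assumption the FTC has never endorsed, namely that each affected user would count as a separate violation:

```python
# Purely illustrative exposure calculation. The $40,000 figure is the
# per-day, per-violation penalty discussed above; treating each of the
# reported 50 million affected users as a separate violation is an
# assumption, not an FTC position.
penalty_per_violation = 40_000
affected_users = 50_000_000

max_daily_exposure = penalty_per_violation * affected_users
print(f"${max_daily_exposure:,} per day")  # $2,000,000,000,000 per day
```

However violations are ultimately counted, the decree’s penalty structure gives the commission substantial leverage.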

5. U.S. Securities Law

Another potentially relevant body of law is U.S. securities law. The Securities Exchange Act of 1934 requires publicly traded companies to disclose to shareholders “material information,” the kind of information that a reasonable investor might want to know about the company. The SEC has issued guidance on public reporting of cybersecurity incidents, noting that the commission “encourages companies to continue to use Form 8-K or Form 6-K to disclose material information promptly, including disclosure pertaining to cybersecurity matters.”

Reviewing Facebook’s 8-K and 6-K filings for 2014 and 2015, I did not find any mention of the Cambridge Analytica incident. Facebook mentions data breaches as a general risk factor in its most recent 10-K annual report, but I did not find any filings that mentioned this specific incident. If it turns out that Facebook did not tell investors or the SEC about this leak, why not? Facebook will likely reiterate the line that this was not a breach in the sense of someone penetrating the corporate network, à la Target or Equifax. But that does not mean that investors or the SEC will not consider this material information. Indeed, the stock’s movement over the last few days suggests this is the kind of information shareholders very much would like to know. (Stock movement is not definitive proof of materiality, but it can be indicative.) And the SEC will likely look into it. Kyle DeYoung, a former senior counsel at the SEC, told the Financial Times that “Just because it wasn’t a true hack doesn’t mean [the SEC is] not going to conclude it was a material event.” Again, this is something plaintiffs’ lawyers likely spent the weekend reading up on.

There are many other areas of law that might come into play here—including, perhaps most immediately, British and European rules. (Developments in other jurisdictions are unfolding quickly: The United Kingdom Information Commissioner’s Office is reportedly in pursuit of a warrant to conduct an on-site investigation.) The list of potential U.K. and EU causes of action is likely extensive, and that should tell you just what a mess Facebook, Kogan and Cambridge Analytica are in.

Barriers To Investigation

There are at least two serious barriers to this sort of investigation: physical evidence and jurisdiction. The physical evidence of what was transferred from Facebook to Kogan and on to Cambridge Analytica is a matter of forensics. Of course, it is common for some historical account data to be kept, but it is far from guaranteed that the data would be kept in ways that will be useful in a courtroom. That is assuming that no data has been destroyed in the wake of the recent news, which would constitute its own crime. (There were ominous reports that Facebook sent cybersecurity auditing firm Stroz Friedberg to Cambridge Analytica’s offices on Monday night, and that they were subsequently kicked out by British authorities.)

Then there are the potential jurisdictional barriers to any investigation into this cross-border computer issue. As mentioned above, Britain’s information commissioner, Elizabeth Denham, plans to seek a warrant for Cambridge Analytica’s computers. If those computers are all in the U.K., the warrant may be sufficient to yield useful evidence. But if Denham seeks a warrant for Facebook’s computers, she may face the same question now before the Supreme Court in the Microsoft Ireland case: whether and to what extent a warrant for computer data in one country reaches data held on servers in another country. U.S. investigations into Facebook’s conduct may face similar challenges vis-à-vis Cambridge Analytica’s foreign-held data. You should also expect that some data will be passed across the Atlantic in accordance with the U.S.-U.K. mutual legal assistance treaty.

What To Expect Next

If you’re Kogan, or Cambridge Analytica, expect lawsuits, public hearings and general regulatory hell. Maybe, in the extreme, jail time. If you’re Facebook, expect lawsuits, public hearings, and general regulatory hell. Maybe, in the extreme, the end of the firm as we know it.

Facebook is hoping to pin this on two bad apples: Kogan and Cambridge Analytica. And bad apples they were. But this is a dangerous strategy. For Facebook, the claim that it was always upfront about how user data might end up in developer hands is a strategy that wins the battle but loses the war. If users and regulators decide that the firm did not do anything out of the ordinary—that this is just the way Facebook works—they may reasonably conclude that the firm itself is unacceptable. The EU’s top privacy regulator, Věra Jourová, announced on Monday that she had reached out to Facebook and that “From a European Union perspective, the misuse for political purposes of personal data belonging to Facebook users -- if confirmed -- is not acceptable.” This does not sound like a small-bore complaint about a one-off problem but rather like the threat of a major reckoning.

It sounds like the kind of thing that should make Facebook want to hire really good antitrust lawyers.

Editor's Note: This piece was edited on March 20 to clarify language concerning the disclosure of Facebook "login" information.