879 students have sued Google for privacy intrusion, claiming that Google scans their Gmail correspondence for advertising purposes. While a court in California allowed the lawsuit to proceed in mid-August, the same court has now denied the students class action status, and instead requires 876 of them to file individual lawsuits against Google. This case is interesting from all sorts of privacy angles.

In mid-August, the case was allowed to proceed, as Judge Lucy Koh rejected Google’s claim that automated reading of people’s private correspondence to target them with advertising was “business as usual” (the legal term is “within the ordinary course of business”). However, the 879 students won’t be allowed class action status. In a second order, issued just recently, Judge Koh agrees with Google’s argument that class action status should be denied and that the 879 students should sue the Gmail service provider on their own, with the exception of the original three plaintiffs, who are allowed to proceed jointly.

The first reaction to this is that the other 876 students have basically been kicked out of the class action’s strength-in-numbers construct with a smile and a “good luck” – but it’s not really that simple. The judge essentially argued that while the 877 cases are similar in terms of the methodology and algorithms applied, the objective merits of each case – what consent was given, and what damage the intrusion may have caused – can and would vary wildly between the different people involved:

Based on this information, Plaintiffs at each educational institution (1) might not have consented to Google’s scanning practices, (2) might have opted out of Google’s scanning practices by changing their Gmail settings or choosing a different email provider, or (3) might have consented to Google’s scanning practices. As Google notes, in order to defend itself, Google would need to undertake “a highly individualized” analysis to investigate, research, and present defenses as to each Plaintiff. […] This type of investigation made class treatment inappropriate.

However, this also means that an individual lawsuit may have an unforeseen strength that the class action, in trying to summarize the many cases, would lack – and thus, one of the 877 cases, were they to go ahead, may establish very important precedent on expectations of privacy toward communication service providers in the future. Interestingly, this development mirrors a 2014 case, also with Judge Lucy Koh presiding, where class action status for the same alleged privacy intrusion by Google was denied on the exact same grounds. That lawsuit ended with Google settling with the plaintiffs and – allegedly – stopping the content scanning of students’ mail correspondence.

Ray Gallo, who represents the 879 students, stated that he indeed intends to file the 876 new lawsuits and proceed with the case – 877 separate lawsuits in total.

There’s another interesting point here: the differentiation between technical scanning of correspondence and contextual-semantic scanning. The first protects the technical transmission network from harm; the second learns something about the contents of the correspondence. Some things fall clearly into one of the two buckets – a malware scan falls squarely into the first, and reading the correspondence to target advertising falls squarely into the second. But there’s a huge gray area.

For example, where does spam protection fall, assuming there’s nothing technically malformed about a spam message? The mail provider has to glean some understanding of the semantic content of the message to determine 1) that the Prince of Nigeria wants to pay off the US Federal Debt and wants this charity transaction to go through your bank account in exchange for 10% of the transaction value for your efforts, and 2) that this semantic meaning is highly improbable in a social context and therefore a scam. Today, this is done mostly with the technical trick of keyword weighting (Bayesian filtering) – but what about when spam filters improve to the point where they actually understand the contents of the correspondence, which is twenty minutes or so into the future? Judge Koh also mentions this briefly in the order.
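To illustrate why keyword weighting is a technical trick rather than genuine understanding, here is a minimal sketch of a naive Bayesian spam filter. The training messages and word choices are invented for illustration; real filters use far larger corpora and more sophisticated tokenization, but the principle is the same: words are weighed statistically, with no comprehension of what the message means.

```python
# Minimal naive Bayesian keyword-weighting sketch. The training data
# below is invented purely for illustration; a real filter trains on
# large corpora of actual mail.
import math
from collections import Counter

def train(messages):
    """Count word frequencies per class from (text, is_spam) pairs."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in messages:
        for word in text.lower().split():
            counts[is_spam][word] += 1
            totals[is_spam] += 1
    return counts, totals

def spam_score(text, counts, totals):
    """Log-odds that the message is spam, with add-one smoothing.
    Positive means 'more likely spam'. The filter only weighs keywords;
    it has no model of what the message actually says."""
    vocab = len(set(counts[True]) | set(counts[False]))
    score = 0.0
    for word in text.lower().split():
        p_spam = (counts[True][word] + 1) / (totals[True] + vocab)
        p_ham = (counts[False][word] + 1) / (totals[False] + vocab)
        score += math.log(p_spam / p_ham)
    return score

training = [
    ("prince needs your bank account for a transaction", True),
    ("claim your transaction fee reward now", True),
    ("lecture notes for the seminar attached", False),
    ("meeting moved to friday see notes", False),
]
counts, totals = train(training)
print(spam_score("prince transaction bank account", counts, totals))
print(spam_score("seminar notes friday", counts, totals))
```

A filter like this flags the Nigerian-prince message because the words “prince”, “transaction” and “account” carry spammy weight – not because it understands that the promised deal is socially implausible. That understanding is exactly the contextual-semantic step the gray area is about.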

Is reading somebody’s private correspondence okay just because it’s done by an artificial intelligence instead of an organic one?

As always, it’s useful to think in terms of Analog Equivalent Rights to determine a proper way forward for our children to inherit the rights of our parents:

Would it be acceptable for a physical mail carrier to open all letters, reading the contents (assuming the capability existed) and connect the senders and receivers with companies who wanted to sell things they seemed to be interested in, based on the contents of the message, if that reduced the cost of sending the physical letter?

Would it be acceptable for a phone network operator to listen in to all calls (again, assuming the capability existed) and occasionally put a sales representative from a discussed type of service on the line, if it reduced the phone bill?

Would it be acceptable for a café to offer coffee at reduced cost, if all the tables had microphones mounted underneath and every cup of coffee generated a visit from door-to-door salespeople making offers about the discussed topics?

This is the context we need to evaluate. What do we consider acceptable as we move from analog to digital?

Until our children have received the rights and liberties of our parents, privacy remains your own responsibility.