By Jerri-Lynn Scofield, who has worked as a securities lawyer and a derivatives trader. She is currently writing a book about textile artisans.

Writing in today’s NYT (“I Got Access to My Secret Consumer Score. Now You Can Get Yours, Too.”), Kashmir Hill outlined how to get access to your secret consumer scores:

As consumers, we all have “secret scores”: hidden ratings that determine how long each of us waits on hold when calling a business, whether we can return items at a store, and what type of service we receive. A low score sends you to the back of the queue; high scores get you elite treatment.

In the run-up to the implementation of California’s Consumer Privacy Act (CCPA) on 1st January 2020, some companies that compile and sell consumer data are making it easier for US consumers to access their data (for background, see California Privacy Law Looms). Prior to California’s action, the EU’s General Data Protection Regulation took effect in 2018. The NYT notes:

…Some companies have decided to honor the laws’ transparency requirements even for those of us who are not lucky enough to live in Europe or the Golden State. “We expect these are the first of many laws,” said Jason Tan, the chief executive of Sift. The company, founded in 2011, started making files available to “all end users” this June, even where not legally required to do so — such as in New York, where I live. “We’re trying to be more privacy conscious. We want to be good citizens and stewards of the internet. That includes transparency.”

How to Get Your Data

First things first. The NYT provides information on how you can get those data, and I include it here for readers who want to go that route:

There are many companies in the business of scoring consumers. The challenge is to identify them. Once you do, the instructions on getting your data will probably be buried in their privacy policies. Ctrl-F “request” is a good way to find it. Most of these companies will also require you to send a photo of your driver’s license to verify your identity. Here are five that say they’ll share the data they have on you.

Sift, which determines consumer trustworthiness, asks you to email privacy@sift.com. You’ll then have to fill out a Google form.

Zeta Global, which identifies people with a lot of money to spend, lets you request your data via an online form.

Retail Equation, which helps companies such as Best Buy and Sephora decide whether to accept or reject a product return, will send you a report if you email returnactivityreport@theretailequation.com.

Riskified, which develops fraud scores, will tell you what data it has gathered on your possible crookedness if you contact privacy@riskified.com.

Kustomer, a database company that provides what it calls “unprecedented insight into a customer’s past experiences and current sentiment,” tells people to email privacy@kustomer.com.

Despite the jocular tone of the NYT article, what it reveals about the extent to which companies compile and analyze data gleaned from our on-line transactions is sobering. But that fact shouldn’t come as any great surprise to readers of this site (or anyone who’s been paying attention, for that matter).

I’m not going to linger on that point.

Instead I want to ask: okay, you have the data. So what? That raw data alone doesn’t really get you very far.

The more pressing concern: What do the companies do with the data? In other words, how do they transform raw data into my secret score? Just what exactly is inside the black box?
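To make the concern concrete, here is a deliberately crude sketch, in Python, of what such a scoring pipeline might look like. Every feature name, weight, and threshold below is invented for illustration; the real models, and whatever they actually weigh, are precisely what the companies do not disclose.

```python
# Purely hypothetical illustration of a consumer "trust score".
# Feature names, weights, and thresholds are invented for this sketch;
# the real models are exactly what the companies keep secret.

def trust_score(profile: dict) -> float:
    """Map raw transaction data to a 0-100 score via an opaque weighting."""
    weights = {
        "returns_last_year": -8.0,   # frequent returns lower the score
        "chargebacks": -25.0,        # disputed charges weighted heavily
        "account_age_years": 3.0,    # long-standing accounts score higher
        "avg_order_value": 0.05,     # big spenders score higher
    }
    raw = 50.0 + sum(weights[k] * profile.get(k, 0) for k in weights)
    return max(0.0, min(100.0, raw))  # clamp to the 0-100 range

def service_tier(score: float) -> str:
    """Route the customer; the consumer never sees this decision."""
    if score >= 70:
        return "priority queue, lenient returns"
    if score >= 40:
        return "standard service"
    return "back of the queue, returns flagged"

profile = {"returns_last_year": 3, "chargebacks": 1,
           "account_age_years": 2, "avg_order_value": 80}
s = trust_score(profile)
print(f"score={s:.1f} -> {service_tier(s)}")  # score=11.0 -> back of the queue
```

Notice what the sketch makes obvious: even if a company hands you every input in that profile, you still can’t recover the weights, the threshold, or the tier you were assigned. What exactly happens inside the box?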

This is exactly the question asked by Laura Antonini, the policy director at the Consumer Education Foundation (CEF), and a co-author of a June report with CEF president Harvey Rosenfield. According to the Grey Lady:

[The CEF] wants the Federal Trade Commission to investigate secret surveillance scores “generated by a shadowy group of privacy-busting firms that operate in the dark recesses of the American marketplace.” The report named 11 firms that rate shoppers, potential renters and prospective employees. “I don’t really care that these data analytics companies know I made a return to Victoria’s Secret in 2009, or that I had chicken kebabs delivered to my apartment, but how is this information being used against me when you generate scores for your clients?” Ms. Antonini said. “That is what consumers deserve to know. The lack of the information I received back is the most alarming part of this.” In other words, most of these companies are just showing you the data they used to make decisions about you, not how they analyzed that data or what their decision was.

Omnipotent, Mysterious Black Boxes

Outsourcing a crucial decision to a mysterious black box should raise a large red flag – especially when juxtaposed against another NYT article, this one from yesterday: “These Machines Can Put You in Jail. Don’t Trust Them.”

The NYT conducted an extensive investigation of the use of breathalyzers to measure the concentration of alcohol in the blood. No state allows a driver to refuse such a test without penalty; and if the machine registers a blood alcohol level greater than 0.08%, a drunk driving conviction is virtually certain. Over to the NYT:

But those tests — a bedrock of the criminal justice system — are often unreliable, a New York Times investigation found. The devices, found in virtually every police station in America, generate skewed results with alarming frequency, even though they are marketed as precise to the third decimal place.

Judges in Massachusetts and New Jersey have thrown out more than 30,000 breath tests in the past 12 months alone, largely because of human errors and lax governmental oversight. Across the country, thousands of other tests also have been invalidated in recent years.

The machines are sensitive scientific instruments, and in many cases they haven’t been properly calibrated, yielding results that were at times 40 percent too high. Maintaining machines is up to police departments that sometimes have shoddy standards and lack expertise. In some cities, lab officials have used stale or home-brewed chemical solutions that warped results. In Massachusetts, officers used a machine with rats nesting inside.

Technical experts have found serious programming mistakes in the machines’ software. States have picked devices that their own experts didn’t trust and have disabled safeguards meant to ensure the tests’ accuracy.

The Times interviewed more than 100 lawyers, scientists, executives and police officers and reviewed tens of thousands of pages of court records, corporate filings, confidential emails and contracts. Together, they reveal the depth of a nationwide problem that has attracted only sporadic attention.

A county judge in Pennsylvania called it “extremely questionable” whether any of his state’s breath tests could withstand serious scrutiny. In response, local prosecutors stopped using them. In Florida, a panel of judges described their state’s instrument as a “magic black box” with “significant and continued anomalies.”

Even some industry veterans say the machines should not be de facto arbiters of guilt. “The tests were never meant to be used that way,” said John Fusco, who ran National Patent Analytical Systems, a maker of breath-testing devices.
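A quick back-of-the-envelope calculation (mine, not the Times’s) shows what a 40 percent calibration error means in practice: it can push a legally sober driver over the per se limit.

```python
# Illustrative arithmetic on the calibration problem; the 40% figure is
# from the NYT, the sample BAC value is my own hypothetical.

LEGAL_LIMIT = 0.08   # % BAC; above this, conviction is virtually automatic

true_bac = 0.06              # a legally sober driver
miscalibration = 1.40        # device reads 40% too high
reported_bac = true_bac * miscalibration

print(f"true BAC {true_bac:.3f}%, reported {reported_bac:.3f}%")
print("over the limit" if reported_bac > LEGAL_LIMIT else "under the limit")
# -> true BAC 0.060%, reported 0.084%: a sober driver tests over the limit
```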

I could go on, but won’t. You get the point.

Data Privacy Protection: What Is to Be Done

Now, obviously, machines and algorithms have their place.

But they need to be understood, maintained, and regulated. Any time a decision is outsourced to a black box and oversight surrendered… watch out!

Cave! Hic dragones. (Beware! Here be dragons.)

Back to that June CEF report. #REPRESENT, a public interest group created by the CEF, has asked the Federal Trade Commission to investigate and stop illegal surveillance scoring.

According to a June account in The Hill, “Advocates push FTC crackdown on secret consumer scores”:

The complaint comes as lawmakers are increasingly scrutinizing major technology companies over their handling of user data. Facebook and Google have received the brunt of Washington’s attention because of their massive size and ability to microtarget advertisements based on their users’ behavior. But #REPRESENT is hoping to shine a light on a part of the world of unregulated data collection that has received relatively little attention and has the potential to enable companies to discriminate against consumers on a massive scale. “The ability of corporations to target, manipulate and discriminate against Americans is unprecedented and inconsistent with the principles of competition and free markets,” the complaint reads. “Surveillance scoring promotes inequality by empowering companies to decide which consumers they want to do business with and on what terms, weeding out the people who they deem less valuable. Such discrimination is as much a threat to democracy as it is to a free market.”

The surveillance score resembles the much-maligned credit score. Over to The Hill:

But unlike credit scores, there’s no transparency for consumers, and Rosenfield and Antonini argue that companies are using them to engage in illegal discrimination while users have little recourse to correct false information about them or challenge their ratings.

Jerri-Lynn here. Ha! I wouldn’t hold out the credit scoring system as a model for anything. Off the top of my head, just a smattering of its defects: it’s far from transparent; credit reports frequently contain errors, which are time-consuming to correct; and the system is vulnerable to hacking (see Biometric ID Fairy: A Misguided Response to the Equifax Mess that Will Only Enrich Cybersecurity Grifters and Strengthen the Surveillance State).

So, by all means, request your data. But far, far more is needed to protect data privacy throughout the United States.