
I wrote yesterday about accountable algorithms. When I say that a public algorithm is “accountable” I mean that the output produced by a particular execution of the algorithm can be verified as correct after the fact by a skeptical member of the public. Today I want to work through an example.

Consider a simplified airport security algorithm where each person who goes through a security checkpoint is selected for enhanced search with 5% probability. We want to make sure that a person who is selected can verify that they really were picked by a fair random process, and not because of bias or some other off-the-books reason.

Of course, we don’t want to tell people beforehand whether they are going to be searched. For security reasons, we want the subject to be surprised, finding out about the search only upon arriving at the checkpoint. The verification will happen afterward, perhaps the next day, so we are not preventing biased or incorrect choices, only detecting them.

To solve this problem, we need to use two cryptographic operations: a commitment and a selection function. I’ll give an intuitive explanation of what they do, but I won’t describe the underlying math–I assume the cryptographers among you will be able to figure that out for yourselves.

A commitment is like a sealed envelope. A party can “commit” to a value, which is like writing that value on a piece of paper, sealing the paper in an envelope, and placing the envelope in public view where nobody can tamper with it. Later, the party who sealed the envelope can “open” it, which reveals the value that was initially sealed. The key property of a commitment is that the value, once sealed, cannot be changed.
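
To make this concrete, here is a minimal sketch of a hash-based commitment in Python. The construction and the function names (commit, verify_opening) are my own illustration, not anything the post prescribes: committing hashes the value together with a random nonce, and opening reveals both so anyone can recheck the hash.

```python
import hashlib
import os

def commit(value: bytes) -> tuple[bytes, bytes]:
    """Seal `value` in an envelope. Publish the returned commitment;
    keep the nonce (the opening) secret until it's time to reveal."""
    nonce = os.urandom(32)  # random padding that hides the value
    return hashlib.sha256(nonce + value).digest(), nonce

def verify_opening(commitment: bytes, value: bytes, nonce: bytes) -> bool:
    """Check a revealed (value, nonce) pair against the sealed envelope."""
    return hashlib.sha256(nonce + value).digest() == commitment
```

The hash hides the value until the opening is revealed, and because the committer cannot feasibly find a second (value, nonce) pair with the same hash, the sealed value cannot be changed after the fact.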

A selection function is a way of picking items. We’ll write it as S(k,n), where “k” is a key and “n” is a name. Given a particular key and name, the selection function says “yes” or “no”. The function’s behavior, intuitively, is rather like a random 5% choice among names–if you pick a key k at random, then S(k,n) will say “yes” for a “random-looking” 5% of names. Think of the selection function as knowing a great many random-looking methods for picking names, and the key as choosing which of those many methods to use on a particular day. The selection function is public and known to everybody (but the key might be secret).
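
One plausible way to build such a selection function (the math-free description above leaves the construction open) is a keyed hash: HMAC maps the key and name to a number that looks random, and we say “yes” when that number lands in the bottom 5% of its range. A sketch:

```python
import hashlib
import hmac

RATE = 0.05  # the checkpoint's 5% search probability

def S(key: bytes, name: str) -> bool:
    """Say "yes" for a random-looking 5% of names under the given key."""
    digest = hmac.new(key, name.encode(), hashlib.sha256).digest()
    x = int.from_bytes(digest[:8], "big")  # pseudorandom value in [0, 2**64)
    return x < RATE * 2**64
```

Changing the key swaps in a different random-looking 5% of names, which matches the intuition of the key choosing among many selection methods.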

Now we can create our accountable selection method. First thing in the morning, before the security checkpoint opens, the TSA picks a random value R and commits it. Now the TSA knows R but the public doesn’t. Immediately thereafter, TSA officials roll dice, in public view, to generate another random value Q. Now the TSA adds R+Q and makes that sum the key K for the day.
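
Here is that morning routine as a sketch, reusing the commit() and S() helpers from above. I stand in for the TSA’s secret draw with the OS random number generator and for the public die rolls with a fixed string, and rather than arithmetic addition I combine R and Q by concatenating them into the key, which serves the same purpose of making K depend on both values.

```python
import os

# Step 1: TSA draws its secret value and publishes a commitment to it.
R = os.urandom(32)
commitment, opening = commit(R)  # the commitment goes up in public view

# Step 2: dice are rolled in public, after the commitment is posted.
Q = b"6 3 1 5 2 4"  # illustrative public record of the rolls

# Step 3: the day's key combines the secret and public randomness.
K = R + Q  # concatenation here, standing in for the post's R+Q
```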

Now, when you arrive at the checkpoint, you announce your name N, and the TSA uses the selection function to compute S(K, N). The TSA announces the result, and if it’s “yes,” then you get searched. You can’t anticipate whether you’ll be searched, because that depends on the key K, which depends on the TSA’s secret value R, which you don’t know.
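
The checkpoint step is then a single evaluation of the selection function, continuing with the K and S defined in the sketches above:

```python
N = "Alice Traveler"  # the name announced at the checkpoint
selected = S(K, N)
print("enhanced search" if selected else "proceed")
# The TSA announces `selected` on the spot; that announcement is what
# gets checked against the replayed computation the next day.
```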

At the end of the day, the TSA opens its commitment to R. Now you can verify that the TSA followed the algorithm correctly in deciding whether to search you. You can add the now-public R to the already-public Q, to get the day’s (previously) secret key K. You can then evaluate the selection function S(K,N) with your name N–replicating the computation that the TSA did in deciding whether to search you. If the result you get matches the result the TSA announced earlier, then you know that the TSA did their job correctly. If it doesn’t match, you know the TSA cheated–and when you announce that they cheated, anybody can verify that your accusation is correct.
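
Putting the verification together, here is a sketch of the end-of-day audit from the traveler’s side, again using the illustrative helpers above (audit is my own name for the procedure):

```python
def audit(commitment: bytes, R_revealed: bytes, nonce: bytes,
          Q: bytes, name: str, announced: bool) -> bool:
    """Return True iff the TSA's announced decision checks out."""
    # 1. The revealed R must match the morning's sealed envelope.
    if not verify_opening(commitment, R_revealed, nonce):
        return False  # the TSA is claiming a different R: cheating
    # 2. Rebuild the day's key and replay the selection decision.
    return S(R_revealed + Q, name) == announced

assert audit(commitment, R, opening, Q, N, selected)  # honest TSA passes
```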

This method prevents the TSA from creating a non-random result. The TSA cannot do so because the key K is based on the result of the die rolls, which are genuinely random. Nor can the TSA have chosen its secret value R in a way that neutralizes the effect of the random die rolls: it had to commit to its choice of R before the dice were rolled, and the commitment keeps it from substituting a different R afterward. So citizens know that if they were chosen, it was because of randomness and not any TSA bias.

This is just one example of an algorithm that can be made accountable. But it turns out that a very broad class of algorithms can be made accountable. For example, other randomized algorithms can be made accountable by applying the same basic method I used here, and algorithms with secret inputs can be made accountable by using other cryptographic methods, which I might describe in a later post.

The bottom line is that many algorithms of public interest can be made accountable. The cost of doing so is not zero–any change has a cost, and accountable versions of an algorithm are sometimes less efficient than an unaccountable version–but it’s useful to know that accountability is possible.