I have recently become fascinated by, not to say obsessively fixated on, what some call computational classical type theory. This is the kind of type theory one gets by applying the Curry-Howard isomorphism to proof systems for classical logic. There are many variants, and Harley Eades, who introduced me to this topic, has written a lot about them on his blog. These systems extend lambda calculus with some control operators (like try and catch for exceptions, or generalizations of these). For example, a proof of A ∨ ¬A works by providing a function from A to ⊥ (that is, a proof of ¬A), where if that function is ever called with a proof of A, we go back in time (by raising an exception) to the point where the proof of A ∨ ¬A was consulted, and at that point return that proof of A, instead of the function from A to ⊥. Sound evil? It is. In fact, in this post I want to highlight just how bad the situation is. But this is not a rant: I am firmly convinced that classical reasoning is needed for reasoning about general-recursive functions (think of case-splitting on whether or not a given general-recursive term terminates). But we are not going to be able to make use of these powerful and exciting ideas if we cannot tame the non-canonicity of these theories, which I will discuss next.
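To make this concrete, here is a sketch in Python (my own invention, not taken from any particular system) of how such a classical proof of A ∨ ¬A can behave computationally, with an ordinary exception standing in for the control operator:

```python
class Thrown(Exception):
    """Carries a proof of A back to the point where A ∨ ¬A was consulted."""
    def __init__(self, proof):
        self.proof = proof

def excluded_middle(consumer):
    """Run `consumer` on a claimed proof of A ∨ ¬A.

    We first hand the consumer the right disjunct: a 'proof' of ¬A,
    i.e. a function from A to absurdity.  If the consumer ever calls
    that function with a proof of A, we rewind (by raising) to this
    point and re-run the consumer with the left disjunct instead.
    """
    def not_a(proof_of_a):
        # 'Deriving absurdity' = traveling back in time with the proof of A.
        raise Thrown(proof_of_a)
    try:
        return consumer(("inr", not_a))    # first claim: ¬A holds
    except Thrown as t:
        return consumer(("inl", t.proof))  # retraction: A held after all

# Usage: a consumer that case-splits on the disjunction and happens
# to hold a proof of A (here just the token 42), so using not_a backtracks.
def consumer(disjunct):
    tag, payload = disjunct
    if tag == "inl":
        return f"A holds, witness: {payload}"
    return payload(42)  # refute the claimed ¬A; control never returns here

print(excluded_middle(consumer))  # "A holds, witness: 42"
```

The key point is that the "proof" handed out first is a lie that the system is prepared to retract, which is exactly the source of the non-canonicity discussed below.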

When we program in a pure functional language, say, we usually intend that at some point our program will produce some observable effect. We are not writing programs just for the sake of knowing they are running; we want them to compute answers to some computational question. So we need to take some kind of observation of the results of our program (assuming it converges).

Now, we know that in classical logic the so-called disjunction property fails. That is, we can prove, from no assumptions, a formula A ∨ B without in general having a proof of A independent of B, or else of B independent of A. For a “classic” example, we can prove A ∨ ¬A without thereby having a proof of A by itself or else a proof of ¬A. So we cannot observe a canonical result of type A ∨ B in general, where a canonical result of a type A ∨ B is either inl a or inr b, with a a canonical result of type A and b a canonical result of type B.

What is the big deal with that? Probably we were not expecting to observe results at ∨-type. But of course, we could compute some value at a type we do want to observe, like Nat (the type of natural numbers), based on a case split on a non-canonical result of type A ∨ B. These type theories for classical logic are quite amazing: they will carefully track and propagate that non-canonicity throughout computations. So if we perform such a case-split in computing a Nat, we would expect to end up with another non-canonical result, which would then not be suitable for observation.

Well darn, you might say. Sometimes we might try to compute a number and end up with a non-canonical result which we cannot observe. Perhaps we could just report an error to the user in that case: “non-canonical result encountered, cannot observe”. That would be unworkably bad, in my opinion. But it gets worse.

It can actually happen (I believe: any experts reading please check me on this) that we can compute an incorrect canonical result, based on case-splitting with a non-canonical term of type A ∨ B. This is really shocking, and renders the idea of computational classical type theory unusable in practice without major modifications (some of which have been undertaken by Herbelin, Krebbers, and others in work to preserve canonicity at various types, while still allowing classical reasoning). Here is a sketched example.

I am assuming you are familiar with the non-constructive proof of the fact that there are two irrationals a and b such that a^b is rational (you can read this proof on Wikipedia here). This proof does a case-split on whether √2^√2 is rational or not. Let us recast this proof ever so slightly, so that the case-split is on whether that quantity is irrational or not. As I described at the start of this post, the proof of B ∨ ¬B in classical logic returns a proof of ¬B (the second branch of the disjunction) which, if it is ever used to derive a contradiction with a proof of B, will throw that proof of B back to the original point the proof of B ∨ ¬B was used, to serve instead as a proof of the first branch of the disjunction. Ok. But what this means is that when you case-split on this proof of B ∨ ¬B, that proof essentially tells you that the second disjunct is the true one. So if we case split on whether or not √2^√2 is irrational (taking B to be the statement that it is), computationally we will enter the second branch of our proof. And in that branch, we know that √2^√2 is rational, so we can take a = b = √2, and that is what (computationally) we will return from our proof, together with the proof that a^b is rational.

There is only one problem with this. As stated on the above linked Wikipedia page, it turns out that √2^√2 is actually irrational. Now, the proof of rationality that we returned along with this value is such that if we ever contradict it using a proof that the quantity in question is irrational, we will go back in time to our original case-split, and produce instead the answer that it is a = √2^√2 and b = √2 that are the irrationals such that a^b is rational (for then a^b = (√2^√2)^√2 = (√2)^2 = 2). So again, the logic very carefully tracks the non-canonicity of our proof, and will go back and repair the situation if choosing the second branch of B ∨ ¬B turns out to have been the wrong choice.
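The whole scenario can be sketched in Python, with an exception standing in for the control operator (all names are my own invention, and the floating-point values are of course just stand-ins for real mathematical evidence):

```python
import math

class Thrown(Exception):
    """Carries a refutation back to the original case-split."""
    def __init__(self, refutation):
        self.refutation = refutation

def irrational_pair(consumer):
    """Classical proof sketch: hand `consumer` irrationals a, b with a^b
    'rational'.  The case-split on whether sqrt(2)^sqrt(2) is irrational
    first claims the second disjunct (rational); refuting that claim
    rewinds to this point and takes the first branch instead."""
    sqrt2 = math.sqrt(2)
    r = sqrt2 ** sqrt2
    def claim_rational(refutation):
        # Contradicting the rationality claim travels back in time.
        raise Thrown(refutation)
    try:
        # Second branch: sqrt(2)^sqrt(2) claimed rational, so a = b = sqrt(2).
        return consumer(sqrt2, sqrt2, claim_rational)
    except Thrown:
        # First branch: sqrt(2)^sqrt(2) is irrational after all, so take
        # a = sqrt(2)^sqrt(2), b = sqrt(2); then a^b = 2, which is rational.
        return consumer(r, sqrt2, lambda _refutation: None)

# A naive observer commits to whatever values it is handed first:
wrong = irrational_pair(lambda a, b, claim: (a, b))
# the WRONG pair (sqrt(2), sqrt(2)), since sqrt(2)^sqrt(2) is irrational

# An observer that first refutes the rationality claim gets the repaired answer:
def skeptical(a, b, claim):
    if abs(a - math.sqrt(2)) < 1e-9:
        claim("Gelfond-Schneider: sqrt(2)^sqrt(2) is transcendental")
    return (a, b)

right = irrational_pair(skeptical)
# the repaired pair (sqrt(2)^sqrt(2), sqrt(2)), whose a^b really is 2
```

The naive observer's answer is exactly the "wrong canonical result" at issue: printing the pair severs it from the claim that was supposed to keep it revisable.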

But this is small consolation to the programmer who is given a = √2 and b = √2 together with the non-canonical proof that √2^√2 is rational, and takes an observation of those values (prints them out, say). As soon as we need to observe our answers, we depart from the magical world of classical logic where everything is revisable, and commit ourselves to a particular value. It is well known that there may be no value to commit to, due to non-canonicity, when taking such a measurement. What the above example shows (assuming I am right about this) is that one can actually commit to the wrong value. This is worse than aborting with the “non-canonical result” error message. The source of the problem is that we have a non-canonical constraint on our value which needs to be maintained, but taking an observation severs the value from that constraint.

To conclude: we absolutely have to be able to carve out a safe fragment of a computational classical type theory, where what you see (via typing) is truly what you get, in order for such systems to be even remotely usable in practice. Some attempts (briefly mentioned above) are being made, but there is no fully satisfactory solution yet available, in my opinion. Such a solution would use the type system to track canonicity, so one would know whether it is legal to observe a particular value or not. Values that cannot be observed can still be used for formal reasoning. But as we have seen, they lead to disaster if allowed out of the mathematics school room and onto the computational playground.