Exception tracking is a well-known tar baby of type system design. After all, if expressions can have two sorts of result, why shouldn’t the type say something about them both? Languages such as CLU, FX, and Java, to name three, provide “throws” or “raises” clauses in the types of procedures that specify an upper bound on the exceptions that can occur when they are called. It all seems natural and easy, but somehow the idea never really works very well. One culprit is any form of higher-order programming, which is inherent in object-oriented and functional languages alike. Handling the indirection requires more complex concepts, such as effect polymorphism, to make things work reasonably well. Or untracked exceptions are admitted to avoid the need to track them. Somehow such an appealing idea turns out to be rather stickier to realize than one might expect. But why?

A piece of the puzzle was put into place by Xavier Leroy and François Pessaux in their paper on tracking uncaught exceptions. Their idea was to use type-based methods to track uncaught exceptions, but to move the clever typing techniques required out of the programming language itself and into a separate analysis tool. They make effective use of the powerful concept of row polymorphism, introduced by Didier Rémy for typing records and variants in various dialects of Caml. Moving exception tracking out of the language and into a verification tool is the decisive move, because it liberates the analyzer from any constraints that may be essential at the language level.

But why track uncaught exceptions? That is, why track the exceptions that might be raised, rather than those that cannot be? From a purely methodological viewpoint it seems more important to know that a certain code fragment cannot raise certain exceptions (such as the Match exception in ML, which arises when a value matches no pattern in a case analysis). In a closed world in which all of the possible exceptions are known, tracking positive information about which exceptions might be raised amounts to the same as tracking which exceptions cannot be raised: simply subtract the raised set from the entire set. As long as the raised set is an upper bound on the exceptions that might be raised, the difference is a lower bound on the set of exceptions that cannot be raised. Such conservative approximations are necessary because a non-trivial behavioral property of a program is always undecidable, and hence requires proof. In practice this means that stronger invariants must be maintained than just the exception information, so that one may prove, for example, that the values passed to a pattern match are limited to those that actually do satisfy some clause of an inexhaustive match.
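The subtraction in a closed world can be sketched concretely. Here is a minimal OCaml illustration, in which the exception names and the sets are purely illustrative:

```ocaml
(* Closed-world sketch: given the full set of exceptions, an upper bound
   on those that may be raised determines a lower bound on those that
   cannot be. The exception names here are illustrative. *)
module S = Set.Make (String)

let world = S.of_list [ "Match"; "Bind"; "Overflow" ]
let may_raise = S.of_list [ "Overflow" ]    (* conservative upper bound *)
let cannot_raise = S.diff world may_raise   (* guaranteed lower bound *)
```

Because `may_raise` over-approximates, the difference `cannot_raise` under-approximates: everything in it is genuinely safe.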

How realistic is the closed world assumption? For it to hold seems to require a whole-program analysis, and is therefore non-modular, a risky premise in today’s world. Even on a whole-program basis exceptions must be static in the sense that, even if they are scoped, they may in principle be declared globally, after suitable renaming to avoid collisions. The global declarations collectively determine the whole “world” from which positive exception tracking information may be subtracted to obtain negative exception information. But in languages that admit multiple instantiations of modules, such as ML functors, static exceptions are not sufficient (each instance should introduce a distinct exception). Instead, static exceptions must be replaced by dynamic exceptions that are allocated at initialization time, or even run-time, to ensure that no collisions can occur among the instances. At that point we have an open world of exceptions, one in which there are exceptions that may be raised, but which cannot be named in any form of type that seeks to provide an upper bound on the possible uncaught exceptions that may arise.
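The point about multiple instantiations can be seen directly in OCaml, where exceptions declared in a (generative) functor body are allocated afresh at each application. A small sketch, with hypothetical module names:

```ocaml
(* Each application of a generative functor allocates a distinct
   exception, so the two instances' exceptions do not collide. *)
module Stack () = struct
  exception Empty                      (* fresh per instantiation *)
  let pop = function
    | [] -> raise Empty
    | x :: xs -> (x, xs)
end

module A = Stack ()
module B = Stack ()

(* A handler for B.Empty does not catch A.Empty. *)
let confused =
  try ignore (A.pop ([] : int list)); false with
  | B.Empty -> true
  | A.Empty -> false
```

Since `A.Empty` and `B.Empty` are distinct dynamically allocated exceptions, the `B.Empty` handler never fires here; no static enumeration of “the” exceptions of `Stack` is possible.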

For example consider the ML expression

let exception X in raise X end

If one were to use positive exception tracking, what would one say about the expression as a whole? It can, and in fact does, raise the exception X, yet this fact is unspeakable outside of the scope of the declaration. If a tracker does not account for this fact, it is unsound in the sense that the reported uncaught exceptions no longer provide an upper bound on what may be raised. One maneuver, used in Java, for example, is to admit a class of untracked exceptions about which no static information is maintained. This is useful, because it allows one to track those exceptions that can be tracked (by the Java type system) and to leave untracked those that cannot.

In an open world (which includes Java, because exceptions are a form of object) positive exception tracking becomes infeasible because there is no way to name the exceptions that might be tracked. In the above example the exception X is actually a bound variable bound to a reference to an exception constructor. The name of the bound variable ought not matter, so it is not even clear what the exception raised should be called. (It is amusing to see the messages generated by various ML compilers when reporting uncaught exceptions. The information they provide is helpful, certainly, but is usually, strictly speaking, meaningless, involving identifiers that are not in scope.)
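The unnameability of an escaped exception is easy to demonstrate in OCaml, which supports local exception declarations. The names below are illustrative:

```ocaml
(* An exception declared locally escapes its scope when raised, but
   callers cannot name it: they can only catch it generically. *)
let raise_secret () =
  let exception Secret in
  raise Secret

(* Outside, Secret is not in scope, so "with Secret -> ..." would not
   even typecheck; a wildcard handler is the best one can do. *)
let caught = try raise_secret () with _ -> true
```

Any attempt to write a handler naming `Secret` at the call site is rejected by the compiler, which is exactly the sense in which the raised exception is unspeakable.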

The methodological considerations mentioned earlier suggest a way around this difficulty. Rather than attempt to track those exceptions that might be raised, instead track the exceptions that cannot be raised. In the above example there is nothing to say about X not being raised, because it is being raised, so we’re off the hook there. The “dual” example

let exception X in 2+2 end

illustrates the power of negative thinking. The body of the let does not raise the exception bound to X, and this may be recorded in a type that makes sense within the scope of X. The crucial point is that when exiting its scope it is sound to drop mention of this information in a type for the entire expression. Information is lost, but the analysis is sound. In contrast there is no way to drop positive information without losing soundness, as the first example shows.

One way to think about the situation is in terms of type refinements, which express properties of the behavior of expressions of a type. To see this most clearly it is useful to separate the exception mechanism into two parts, the control part and the data part. The control aspect is essentially just a formulation of error-passing style, in which every expression has either a normal return of a specified type, or an exceptional return of the type associated to all exceptions. (Nick Benton and Andrew Kennedy nicely formulated this view of exceptions as an extension of the concept of a monad.)
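The control aspect can be sketched as an error monad in OCaml, in the spirit of the Benton–Kennedy formulation; the names and the two-constructor type below are illustrative, not their exact presentation:

```ocaml
(* The control aspect as an error monad: every computation either
   returns normally or returns exceptionally. *)
type ('a, 'e) comp =
  | Ok of 'a        (* normal return of a specified type *)
  | Raise of 'e     (* exceptional return *)

let return x = Ok x

(* Sequencing propagates an exceptional return past the continuation. *)
let bind m k = match m with
  | Ok x -> k x
  | Raise e -> Raise e

(* A handler intercepts the exceptional return. *)
let handle m h = match m with
  | Ok x -> Ok x
  | Raise e -> h e
```

For instance, `bind (Raise "oops") k` is `Raise "oops"` no matter what `k` is, which is precisely the propagation behavior of an uncaught exception in error-passing style.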

The data aspect is, for dynamic exceptions, the type of dynamically classified values, which is written clsfd in PFPL. Think of it as an open-ended sum in which one can dynamically generate new classifiers (aka summands, injections, constructors, exceptions, channels, …) that carry a value of a specified type. According to this view the exception X is bound to a dynamically-generated classifier carrying a value of unit type. (Classifier allocation is a storage effect, so the data aspect necessarily involves effects, whereas the control aspect may, and for reasons of parallelism should, be taken as pure.) Exception constructors are used to make values of type clsfd, which are passed to handlers that can deconstruct those values by pattern matching.
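OCaml’s extensible variants give a concrete model of this open-ended sum; the names below are illustrative, not PFPL’s own notation:

```ocaml
(* Dynamic classification via extensible variants: clsfd models the
   open-ended sum, and each functor application allocates a fresh
   classifier, just as each exception declaration does. *)
type clsfd = ..

module NewClass () = struct
  type clsfd += C of int    (* a fresh class carrying an int *)
  let make v = C v
end

module M1 = NewClass ()
module M2 = NewClass ()

(* M1's class matches only values that M1 classified. *)
let is_m1 = function M1.C _ -> true | _ -> false
```

Pattern matching on `M1.C` deconstructs only values built with that classifier; values classified by the distinct `M2.C` fall through to the wildcard, mirroring how a handler for one dynamically generated exception ignores another.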

Type refinements come into play as a means of tracking the class of a classified value. For the purposes of exception tracking, the crucial refinements of the type clsfd are the positive refinement, a·⊤, and the negative refinement, ¬(a·⊤), which specify that a classified value is, or is not, of class a. Positive exception tracking reduces to maintaining invariants expressed by a disjunction of positive refinements; negative exception tracking reduces to maintaining invariants expressed by a conjunction of negative refinements. Revisiting the logic of exception tracking, the key is that the entailment

¬(a·⊤) ∧ ¬(b·⊤) ≤ ¬(b·⊤)

is valid, whereas the “entailment”

a·⊤ ∨ b·⊤ ≤ b·⊤

is not. Thus, in the negative setting we may get ourselves out of the scope of an exception (here, a) by weakening the refinement, an illustration of the power of negative thinking.