Efficiency of optlam

I have not studied the details of BADTERM nor the implementation of the optlam evaluator, but I find it quite strange that optlam performs a number of β-interactions drastically different from that of another optimal evaluator like BOHM. Such a number must, by definition, be essentially the same on a given term. Are you sure of the correctness of optlam's core?

Efficiency of optimal evaluators

Recall that the notion of optimality for these evaluators is more properly known as Lévy-optimality, and it is not the naive one, since a reduction strategy performing the minimum number of β-steps is not computable. What is minimised, then, is the number of parallel β-reduction steps, each performed on a whole family of redexes, that is, roughly, a set obtained by the symmetric and transitive closure of the relation relating two redexes when one is copied from the other. In general it should not be surprising to see discrepancies between the number of β-steps and the number of duplication steps, since we know that most of the normalisation load can be transferred from the former to the latter, as shown by Asperti, Coppola and Martini [1].
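To make the notion of a family concrete, consider the term $(\lambda x.\, x\, x)\,((\lambda y.\, y)\, z)$. The outer β-step duplicates the inner redex, yielding $((\lambda y.\, y)\, z)\,((\lambda y.\, y)\, z)$; the two copies are residuals of the same redex and so belong to one family. A Lévy-optimal evaluator contracts the whole family in a single shared step, whereas a naive evaluator pays one β-step per copy.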

Nor should it surprise us that the total number of interactions needed to normalise a term with an optimal evaluator is lower than with an ordinary one, since previous empirical observations already showed notable performance improvements. Still, such a huge complexity jump, from exponential to linear time, is perhaps the very first of its kind to be discovered. (I will check this.)
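If you want to reproduce the exponential baseline yourself, here is a minimal sketch (plain Haskell, entirely my own code, not optlam's; the benchmark family $n\;2\;I\;I$ is my choice of exponentially hard terms, not necessarily the BADTERM of the question) that counts the β-steps of a naive normal-order evaluator:

```haskell
-- Minimal sketch, not optlam's actual code: a naive normal-order evaluator
-- that counts beta-steps, giving the exponential baseline against which an
-- optimal evaluator's interaction counts can be compared.
import Data.List (union, delete)

data Term = Var String | Lam String Term | App Term Term

-- Free variables of a term.
free :: Term -> [String]
free (Var x)   = [x]
free (Lam x t) = delete x (free t)
free (App t u) = free t `union` free u

-- Capture-avoiding substitution: t[x := s].
subst :: String -> Term -> Term -> Term
subst x s (Var y)   = if x == y then s else Var y
subst x s (App t u) = App (subst x s t) (subst x s u)
subst x s (Lam y t)
  | y == x          = Lam y t                                   -- x is shadowed
  | y `elem` free s = Lam y' (subst x s (subst y (Var y') t))   -- rename to avoid capture
  | otherwise       = Lam y (subst x s t)
  where
    y' = fresh y (x : free s `union` free t)
    fresh v used = head [v ++ show i | i <- [0 :: Int ..], (v ++ show i) `notElem` used]

-- One leftmost-outermost (normal-order) beta-step, if any redex remains.
step :: Term -> Maybe Term
step (App (Lam x t) u) = Just (subst x u t)
step (App t u) = case step t of
  Just t' -> Just (App t' u)
  Nothing -> App t <$> step u
step (Lam x t) = Lam x <$> step t
step (Var _)   = Nothing

-- Normalise a term, returning the number of beta-steps performed.
betaSteps :: Term -> Int
betaSteps t = maybe 0 (\t' -> 1 + betaSteps t') (step t)

-- Church numeral n: \f. \x. f (f (... (f x))).
church :: Int -> Term
church n = Lam "f" (Lam "x" (iterate (App (Var "f")) (Var "x") !! n))

main :: IO ()
main = do
  let i = Lam "z" (Var "z")
  -- church n applied to church 2 is the Church numeral 2^n (exponentiation),
  -- so normalising (n 2 I I) costs exponentially many naive beta-steps even
  -- though its normal form is just I.
  mapM_ (\n -> print (n, betaSteps (App (App (App (church n) (church 2)) i) i)))
        [1 .. 8]
```

Running this, the step count roughly doubles with each increment of $n$; on terms of this shape an optimal evaluator shares the duplicated redexes and so escapes that blow-up.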

On the other hand, the theoretical results about the efficiency of optimal reduction (which is your big question) are still few and not yet general, since they are limited to EAL-typed proof-nets (which, if I understand correctly, is basically the same restriction as the one imposed by the optlam evaluator), but they are all mildly positive: in the worst case the complexity of sharing reduction is bounded by that of ordinary reduction times a constant factor [2,3].
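Loosely stated (the precise cost models are in [2,3]), these results have the shape: for every EAL-typable term $t$,
$$\mathrm{cost}_{\mathrm{sharing}}(t) \;\leq\; k \cdot \mathrm{cost}_{\mathrm{ordinary}}(t)$$
for some constant $k$ independent of $t$, where $\mathrm{cost}_{\mathrm{sharing}}$ counts the interactions performed by sharing reduction and $\mathrm{cost}_{\mathrm{ordinary}}$ the steps of ordinary β-reduction.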

References