

Defunctionalization and Java

A couple of weeks ago I was introduced to the notion of defunctionalization by this article on Ken Knowles' blog. Defunctionalization is a program transformation that removes the higher-order functions from a program. The idea is that you replace a function like λx.x+y with a data structure that holds the value of y somewhere, say (HOLD y). Then, instead of using the language's built-in function application to apply this object directly to an argument x, you write a synthetic applicator that takes (HOLD y) and x and returns x + y. Anyone who wants to apply λx.x+y to some argument x, in a context in which y is bound, first constructs (HOLD y) and then uses the synthetic applicator on (HOLD y) and x.

Consider, for example, the following Haskell program:

    -- Haskell
    aux f = f 1 + f 10
    res x = aux (\z -> z + x)

The defunctionalization of this example is:

    -- Haskell
    data Hold = HOLD Int

    fake_apply (HOLD a) b = a + b

    aux held = fake_apply held 1 + fake_apply held 10
    res x = aux (HOLD x)

I hope this makes the idea clear.

M. Knowles cites the paper Defunctionalization at work by Olivier Danvy and Lasse R. Nielsen, which was lots of fun. (My Haskell example above is a simplification of the example on page 5 of Danvy and Nielsen.) Among other things, Danvy and Nielsen point out that defunctionalization is in a certain sense dual to Church encoding, the transformation that turns ordinary data structures into λ-terms. Church encoding turns data items like pairs or booleans into higher-order functions; defunctionalization turns them back again.

Section 1.4 of the Danvy and Nielsen paper lists a whole bunch of contexts in which this technique has been studied and used, but one thing I don't think I saw there is that this is essentially the transformation that Java programmers use when they want closures.
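To make the duality with Church encoding concrete, here is a small sketch of my own (in Java, since that's where this post is headed, and not taken from Danvy and Nielsen): a Church-encoded boolean is nothing but a higher-order "chooser" function, and defunctionalizing it gives back a plain data tag plus a synthetic applicator.

```java
public class ChurchDemo {
    // Church encoding: the boolean *is* its elimination function.
    // (An interface stands in for a higher-order function type.)
    interface ChurchBool {
        <T> T pick(T ifTrue, T ifFalse);
    }

    static final ChurchBool TRUE = new ChurchBool() {
        public <T> T pick(T t, T f) { return t; }
    };
    static final ChurchBool FALSE = new ChurchBool() {
        public <T> T pick(T t, T f) { return f; }
    };

    // Defunctionalized: the function is replaced by a data tag...
    enum DataBool { TRUE, FALSE }

    // ...plus a synthetic applicator, in the style of fake_apply.
    static <T> T fakeApply(DataBool b, T ifTrue, T ifFalse) {
        return b == DataBool.TRUE ? ifTrue : ifFalse;
    }

    public static void main(String[] args) {
        System.out.println(TRUE.pick("yes", "no"));                  // yes
        System.out.println(fakeApply(DataBool.FALSE, "yes", "no"));  // no
    }
}
```

The two halves behave identically; the only difference is whether the "boolean" is applied directly or handed to the applicator along with its arguments.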
For example, suppose a Java programmer wants to write something like aux in:

    -- Haskell
    aux f = f 1 + f 10
    res x = aux (\z -> z + x)

But they can't, because Java doesn't have closures. So instead, they do this:

    /* Java */
    class Hold {
        private int a;
        public Hold(int a) { this.a = a; }
        public int fake_apply(int b) { return this.a + b; }
    }

    private static int aux(Hold h) {
        return h.fake_apply(1) + h.fake_apply(10);
    }

    static int res(int x) {
        Hold h = new Hold(x);
        return aux(h);
    }

Here the class Hold corresponds directly to the data type Hold in the defunctionalized Haskell code.

Here is a real example. Consider GNU Emacs. When I enter text-mode in Emacs, I want a bunch of subsystems to be notified. Emacs has a text-mode-hook variable, which is basically a list of functions, and when an Emacs buffer is put into text-mode, Emacs invokes the hooks. Any subsystem that wants to be notified puts its own hook function into that variable. If I wanted to accomplish something similar in Haskell or SML, I would similarly use a list of functions.

In Java, the corresponding facility is called java.util.Observable. Were one implementing Emacs in Java (perish the thought!), the mode object would inherit from Observable, and so would provide an addObserver method for adding a hook to a list somewhere. When the mode was switched to text-mode, the mode object would call notifyObservers, which would loop over the hook list, calling the hooks. So far this is just like Emacs Lisp. But in Java the hooks are not functions, as they are in Emacs, because in Java functions are not first-class entities. Instead, the hooks are objects that conform to the Observer interface specification, and instead of invoking the hooks directly, the notifyObservers method calls the update method on each hook object.

Here's another example. I wrote a recursive-descent parser in Java a while back. An ActionParser is just like a Parser, except that if its parse succeeds, it invokes a callback.
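Before getting to the parser, here is a hypothetical sketch of the Observable arrangement just described; the Mode class and the logging hook are inventions of mine for illustration, but addObserver, notifyObservers, and update are the real java.util.Observable and Observer API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Observable;
import java.util.Observer;

public class HookDemo {
    // A mode object whose hooks are Observer objects, playing the role
    // of Emacs's text-mode-hook list of functions.
    static class Mode extends Observable {
        void enterTextMode() {
            setChanged();                  // must be set before notifyObservers fires
            notifyObservers("text-mode");  // calls update on each registered hook
        }
    }

    static final List<String> log = new ArrayList<String>();

    public static void main(String[] args) {
        Mode mode = new Mode();
        // Each hook is a defunctionalized closure: instead of a function,
        // an object whose update method is invoked by notifyObservers.
        mode.addObserver(new Observer() {
            public void update(Observable o, Object arg) {
                log.add("spell-checker saw " + arg);
            }
        });
        mode.enterTextMode();
        System.out.println(log);
    }
}
```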
If I were programming in SML or Haskell or Perl, an ActionParser would be nothing but a Parser with an associated closure, something like this:

    # Perl
    package ActionParser;

    sub new {
        my ($class, $parser, $action) = @_;
        bless { Parser => $parser,
                Action => $action } => $class;
    }

    # Just like the embedded parser, but invoke the action on success
    sub parse {
        my $self   = shift;
        my $input  = shift;
        my $result = $self->{Parser}->parse($input);
        if ($result->success) {
            $self->{Action}->($result);   # Invoke action
        }
        return $result;
    }

Here the Action member is expected to be a closure, which is automatically invoked if the parse succeeds. To use this, I would write something like this:

    # Perl
    my $missiles;
    ...
    my $parser = ActionParser->new($otherParser,
                                   sub { $missiles->launch() });
    $parser->parse($input);

And then if the input parses correctly, the parser launches the missiles from the anonymous closure, which has captured the local $missiles object.

But in Java, you have no closures. Instead, you defunctionalize, and represent closures with objects:

    /* Java */
    abstract class Action {
        void invoke(ParseResults results) {}
    }

    class ActionParser extends Parser {
        Action action;
        Parser parser;

        ActionParser(Parser p, Action a) {
            action = a;
            parser = p;
        }

        ParseResults parse(Input input) {
            ParseResults res = this.parser.parse(input);
            if (res.isSuccess) {
                this.action.invoke(res);
            }
            return res;
        }
    }

To use this, one writes something like this:

    /* Java */
    class LaunchMissilesAction extends Action {
        Missiles m;
        LaunchMissilesAction(Missiles m) { this.m = m; }
        void invoke(ParseResults results) { m.launch(); }
    }

    ...

    Action a = new LaunchMissilesAction(missiles);
    Parser p = new ActionParser(otherParser, a);
    p.parse(input);

The constructor argument missiles takes the place of a free variable in a closure.
The closure itself has been replaced with an object from an ad hoc class, just as in Danvy and Nielsen's formulation the closure is replaced with a synthetic data object that holds the values of its free variables. The invoke method plays the role of fake_apply.

Now, it's not a particularly interesting observation that this can be done. The interesting part, I think, is that this is what Java programmers actually do. And also, perhaps, that Danvy and Nielsen didn't mention it in their paper, even though I think the technique is pretty widespread.
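For completeness, here is a self-contained, runnable version of the parser example. The Parser, ParseResults, Input, and Missiles stubs are invented here for illustration, since the original fragments assume a real recursive-descent parser library; the ActionParser and LaunchMissilesAction classes are the ones from above.

```java
public class ActionParserDemo {
    static class Input {
        String text;
        Input(String t) { text = t; }
    }

    static class ParseResults {
        boolean isSuccess;
        ParseResults(boolean ok) { isSuccess = ok; }
    }

    // Stub parser: "succeeds" whenever the input is non-empty.
    static class Parser {
        ParseResults parse(Input input) {
            return new ParseResults(!input.text.isEmpty());
        }
    }

    // The defunctionalized closure; invoke plays the role of fake_apply.
    static abstract class Action {
        void invoke(ParseResults results) {}
    }

    static class ActionParser extends Parser {
        Action action;
        Parser parser;

        ActionParser(Parser p, Action a) {
            parser = p;
            action = a;
        }

        ParseResults parse(Input input) {
            ParseResults res = this.parser.parse(input);
            if (res.isSuccess) {
                this.action.invoke(res);
            }
            return res;
        }
    }

    static class Missiles {
        boolean launched = false;
        void launch() { launched = true; }
    }

    // The Missiles field takes the place of the free variable $missiles.
    static class LaunchMissilesAction extends Action {
        Missiles m;
        LaunchMissilesAction(Missiles m) { this.m = m; }
        void invoke(ParseResults results) { m.launch(); }
    }

    public static void main(String[] args) {
        Missiles missiles = new Missiles();
        Parser p = new ActionParser(new Parser(),
                                    new LaunchMissilesAction(missiles));
        p.parse(new Input("some input"));
        System.out.println(missiles.launched ? "launched" : "not launched");
    }
}
```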


