The common advice about macros is that they should emit as little code as possible and delegate to ancillary functions as soon as possible. Here is an example from clojure.java.jdbc:

(defmacro transaction [& body]
  `(transaction* (fn [] ~@body)))

I still think this is good advice but it has unintended consequences. The problem with this piece of code is that all closed-over objects in body are going to be retained longer than expected, longer than they would have been retained if the macro had emitted all the logic implemented in transaction* instead of delegating to it. (See this discussion as an example of issues created by such code.)

The closure object has references to all closed-over objects and since a closure can be called many times, it can’t get rid of them. So the only time where they are going to be garbage collectible is once the closure itself becomes collectible… and a closure can’t be collected while it’s executing.
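As a sketch (the names here are hypothetical, not from any library), an ordinary closure pins whatever it closes over for as long as the closure itself is reachable:

```clojure
;; Hypothetical illustration: `big` cannot be collected while `process`
;; runs, because the ordinary closure `thunk` still references it.
(defn process [make-chunk]
  (let [big (make-chunk)           ; imagine a large vector
        thunk (fn [] (count big))] ; ordinary closure: keeps big alive
    ;; long-running work could go here; big stays reachable via thunk
    (thunk)))
```

Even after the only call to `thunk` has returned, `big` remains reachable through the closure until the closure itself goes away.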

Helpfully, there's a low-level feature for that:

(defmacro transaction [& body]
  `(transaction* (^:once fn* [] ~@body)))

It instructs the compiler that the closure is one-shot and that closed-over references should be cleared, thus allowing referenced objects to be garbage collected before the closure returns.
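The clearing can be observed directly at the REPL: calling a `^:once` closure a second time sees nil where the closed-over local used to be, because the first call cleared it.

```clojure
;; Closed-over locals of a ^:once fn* are cleared at their last use,
;; so a second invocation reads nil where x used to be.
(def pair
  (let [x :a
        f (^:once fn* [] x)]
    [(f) (f)]))
;; pair is [:a nil]: the first call returned x and cleared it
```

This is also why `^:once fn*` must never be used for a closure that may legitimately be invoked more than once.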

This problem is not specific to macros, and in most macro cases it is easily solved: the closure is an implementation detail, and the macro writer knows enough about its life-cycle to fix it. However, any regular closure (fn or reify) is going to prevent its closed-overs from being garbage collected while one of its (java) methods is running, because the closure itself is referenced by the stack.

During the last LambdaNext workshop a delegate stumbled on such a case while experimenting with reducers (and incidentally it made me understand a memory issue I worked around last year):

=> (time (reduce + 0 (map identity (range 1e8))))
"Elapsed time: 5729.579 msecs"
4999999950000000
=> (time (reduce + 0 (r/map identity (range 1e8))))
;; Interrupting...
Expression was interrupted: null

(Depending on your memory settings, you may have to tweak the length of the range to exhibit the problem; more details here)
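The shape of the problem can be sketched like this (a simplified, hypothetical stand-in for the reify-based reducers code, not the actual library source): coll-reduce is a java method of the reify instance, so `this`, and the coll it closes over, is pinned by the stack for the entire reduction.

```clojure
(require '[clojure.core.protocols :as p])

;; Simplified, hypothetical stand-in for a reify-based r/map:
;; coll-reduce is a java method of this object, so the object (and the
;; coll it closes over) stays on the stack, and thus alive, for the
;; whole reduce; the head of the range cannot be reclaimed.
(defn rmap-ish [f coll]
  (reify p/CollReduce
    (coll-reduce [_ f1 init]
      (p/coll-reduce coll (fn [acc x] (f1 acc (f x))) init))))
```

On a small input, `(reduce + 0 (rmap-ish identity (range 100)))` behaves just like the real r/map; the retention only becomes visible when the collection is large enough to exhaust the heap.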

If one modifies reducers to not use (java) methods but external extensions:

(in-ns 'clojure.core.reducers)

(defrecord Folder [coll xf])

(defn folder
  "Given a foldable collection, and a transformation function xf,
  returns a foldable collection, where any supplied reducing fn will
  be transformed by xf. xf is a function of reducing fn to reducing fn."
  {:added "1.5"}
  ([coll xf]
     (Folder. coll xf)))

(extend-type Folder
  clojure.core.protocols/CollReduce
  (coll-reduce [fldr f1]
    (clojure.core.protocols/coll-reduce (:coll fldr) ((:xf fldr) f1) (f1)))
  (coll-reduce [fldr f1 init]
    (clojure.core.protocols/coll-reduce (:coll fldr) ((:xf fldr) f1) init))

  CollFold
  (coll-fold [fldr n combinef reducef]
    (coll-fold (:coll fldr) n combinef ((:xf fldr) reducef))))

Then the problem disappears:

(in-ns 'user)
=> (time (reduce + 0 (r/map identity (range 1e8))))
"Elapsed time: 4437.012 msecs"
4999999950000000

This is because the protocol method is no longer a java method of the reducer object: the Folder is passed in as a regular argument, and since the compiler clears locals after their last use, the Folder (and the range it references) can be reclaimed while the (protocol) method is still executing.

So next time you have a memory issue, look for closures tying the life-cycle of their closed-overs to theirs!