The first step is to translate our code as is to Common Lisp. We’re going to punt entirely on the question of laziness for this post, though we might take it up in the future.

Here’s my first attempt. We have to start with our source of random times. In Clojure, we had:

(def times (iterate #(+ % (rand-int 1000)) 0))

Since I’m going to benchmark performance, we’ll need to set a limit to the number of events (times) we’ll process. So, in Common Lisp:

(defun time-sequence (n)
  (loop repeat n
        for y = 0 then (+ y (random 1000))
        collect y))

where n is the number of event times to process. Here and throughout what follows, I'm using the loop macro, a Swiss Army knife for iteration with a somewhat strange, non-Lispy, but readable syntax.
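The loop clause for y = 0 then (+ y (random 1000)) is the workhorse in time-sequence: it binds the variable to an initial value on the first pass and to a new expression on each subsequent one. A minimal sketch of the same idiom (my example, with a deterministic step so the output is predictable):

```lisp
;; "for VAR = INIT then STEP" rebinds VAR on each pass through the loop:
(loop repeat 4
      for x = 1 then (* 2 x)
      collect x)
;; => (1 2 4 8)
```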

We also want to set our random state so as to guarantee different results each time we execute our simulation:

(setf *random-state* (make-random-state t))

My first attempt in Common Lisp looked like this benchmarking snippet:

(timing
 (length
  (->> 1000000
       time-sequence
       (partition-n 8 1)
       (mapcar (juxt #'car (compose #'car #'last) #'identity))
       (mapcar #'(lambda (l) `(,(- (cadr l) (car l)) ,(caddr l))))
       (remove-if-not #'(lambda (l) (< (car l) 1000))))))

This is less lovely than the Clojure, partly because of all the hash-quoting of functions (Common Lisp is a Lisp-2, and Clojure is a Lisp-1). ->> is from the arrow-macros library, and compose is from the cl-utilities library. The other Clojure-ish functions, partition-n (a replacement for Clojure's partition, whose name collides with an entirely different function in Common Lisp) and juxt, are from cl-oju, a small library I've been writing for those still-frequent times when I want a Clojure function or idiom in Common Lisp. (The library ignores laziness for now, since I haven't needed it yet.)
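To make the pipeline easier to read, here is a sketch of what partition-n and juxt do, assuming cl-oju mirrors the Clojure semantics (sliding windows for partition-n; juxt returning a list, as the pipeline's cadr/caddr calls imply):

```lisp
;; Sliding windows of width 3, advancing by 1 element each time:
(partition-n 3 1 '(0 10 20 30 40))
;; => ((0 10 20) (10 20 30) (20 30 40))

;; juxt builds a function that applies each of its argument
;; functions to the same input and collects the results:
(funcall (juxt #'car #'(lambda (l) (car (last l))) #'length)
         '(0 10 20))
;; => (0 20 3)
```

So in the benchmark, each window of eight event times becomes (first-time last-time whole-window), and the subsequent mapcar reduces that to (time-span whole-window).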

timing is a macro I adapted from the Common Lisp Cookbook, which captures both the result of a computation and the CPU time it consumed (not wall-clock time), in msec:

(defmacro timing (&body forms)
  (let ((run1 (gensym))
        (run2 (gensym))
        (result (gensym)))
    `(let ((,run1 (get-internal-run-time))
           (,result (progn ,@forms))
           (,run2 (get-internal-run-time)))
       `(duration ,(- ,run2 ,run1) msec... result ,,result))))
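Because of the nested backquote, the macro returns its report as a list rather than printing it. A hypothetical call (my example, not from the post) looks like this; the duration will vary from run to run:

```lisp
;; The body forms run once; the report comes back as an ordinary list:
(timing
  (reduce #'+ (loop repeat 1000 collect 1)))
;; => (DURATION <elapsed> MSEC... RESULT 1000)
```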

For now, the result is simply the number of eight-fold time clusters occurring within 1000 units of time. Execution time was roughly a second for the first few times I ran this:

'(DURATION 1045 MSEC... RESULT 235)
'(DURATION 1554 MSEC... RESULT 201)
'(DURATION 827 MSEC... RESULT 164)

This already processes events at roughly 1 MHz, about 4x faster than the Clojure speed of 250 kHz.

Later, however, when I revisited the code, I noticed it ran significantly faster:

'(DURATION 435 MSEC... RESULT 193)
'(DURATION 279 MSEC... RESULT 189)
'(DURATION 205 MSEC... RESULT 177)
'(DURATION 601 MSEC... RESULT 180)

This is 2.6 MHz, 10x the speed of the Clojure code. It took me a while to figure out that I had increased the SBCL heap size in order to handle larger arrays for testing (this is one area where laziness would help!). The larger heap was making the code run faster.

It should be said at this point that I made no attempt in my original blog post to optimize the Clojure code for performance. Nevertheless, I think it’s interesting that my first attempt to write the same algorithm in Common Lisp, using roughly the same idioms, performed so much faster.