It’s always good practice to define named constants rather than embedding “magic numbers” in your code.

If you are going to do this, then there are some important tricks to make sure that you get the best possible performance out of your constants.

It’s easy enough to define a value in Clojure:

(def foo 10)

However there is a slight issue in doing this – Clojure top-level vars are resolved at runtime and store “boxed” values, which adds overhead on every access. A quick benchmark:

(time (dotimes [i 1000000] (Math/abs foo))) => "Elapsed time: 3032.741902 msecs"

Ouch! That’s pretty slow for a simple maths function.

As it happens, we’re getting hit by reflection here – we defined foo but didn’t specify its type, so the compiler has to use reflection at runtime to figure out which Math/abs overload to call.
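One way to spot these cases is Clojure’s built-in *warn-on-reflection* flag, which makes the compiler print a warning whenever it has to fall back to reflection. A minimal sketch (the function names here are just for illustration):

```clojure
;; Ask the compiler to report every reflective call it emits:
(set! *warn-on-reflection* true)

(def foo 10)

;; Compiling this prints a reflection warning, because the type of
;; foo is unknown and Math/abs has several overloads:
(defn slow-abs [] (Math/abs foo))

;; With an explicit primitive cast the warning goes away:
(defn fast-abs [] (Math/abs (long foo)))
```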

We can fix the reflection with an explicit cast:

(time (dotimes [i 1000000] (Math/abs (long foo)))) => "Elapsed time: 16.226857 msecs"

Much more respectable. As is commonly the case, the biggest performance win in Clojure comes from avoiding reflection.

But we can do even better – Clojure provides the :const metadata tag to tell the compiler to truly treat a value as a constant and inline it wherever it is used. Let’s try this:

(def ^:const bar 10)
(time (dotimes [i 1000000] (Math/abs bar))) => "Elapsed time: 5.629187 msecs"

Noticeably faster again. Note that we didn’t even need to insert a cast here: because the compiler has inlined the constant, it can see that it is a Long and the typecast can be avoided.
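One consequence of this inlining is worth knowing: code compiled against a ^:const var captures the value at compile time, so redefining the var afterwards has no effect on it. A quick sketch (the names are illustrative):

```clojure
(def ^:const answer 10)

;; The value 10 is inlined into this function when it is compiled:
(defn use-answer [] (Math/abs answer))

;; Redefining the var does NOT affect already-compiled code:
(def ^:const answer 99)

(use-answer) ;; still returns 10
```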

For comparison, we can see that this version is effectively as fast as using a let-bound local variable:

(time (dotimes [i 1000000] (let [baz 10] (Math/abs baz)))) => "Elapsed time: 5.586289 msecs"

These are only trivial micro-benchmarks, but that’s still over a 500x improvement in performance from the original version.

The lessons here:

- Have “mechanical sympathy” – understand what the compiler is doing and help it out where needed.
- Do targeted benchmarks to measure whether you actually have a problem. They don’t need to be particularly scientific – all you need to do is flush out the places where you have unexpectedly slow performance.
- Avoid reflection! If in doubt, put in a primitive cast.
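The time/dotimes pattern used throughout can be wrapped in a small helper for quick, targeted micro-benchmarks. A minimal sketch – bench is a hypothetical name, not a standard macro:

```clojure
;; Evaluates expr n times inside time, printing the elapsed milliseconds.
(defmacro bench [n expr]
  `(time (dotimes [i# ~n] ~expr)))

(def ^:const bar 10)

;; Usage: prints something like "Elapsed time: 5.6 msecs"
(bench 1000000 (Math/abs bar))
```

For anything more rigorous (JIT warm-up, statistical sampling), a dedicated benchmarking library is a better fit, but a throwaway macro like this is enough to flush out unexpectedly slow spots.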