Let's demonstrate problems with map, reduce, amap, and areduce.

(require '[uncomplicate.commons.core :refer [double-fn]]
         '[uncomplicate.fluokitten.core :refer [fmap! fmap fold foldmap]])

(def arr (double-array [1 2 3]))

map returns a lazy sequence:

(map inc arr)

(2.0 3.0 4.0)

The array is unchanged. This is normally desired in Clojure, but primitive arrays are explicitly built for mutability and speed, and sometimes we need to mutate them.

(get arr 0)

1.0
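When mutation is what we want, core Clojure already offers the tools. A minimal sketch (an addition, not one of the original examples) that increments every element with aset/aget, working on a copy so that arr itself stays intact:

```clojure
;; Increment every element in place. aclone copies the array, so the
;; original arr is left untouched; drop it to mutate arr directly.
(let [a (aclone ^doubles arr)]
  (dotimes [i (alength a)]
    (aset a i (inc (aget a i))))
  (vec a))
;; => [2.0 3.0 4.0]
```

Note the ^doubles hint on the array; without it, reflection creeps in, which is exactly the verbosity problem we will run into shortly.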

As for speed, let's measure mapping and reduction over an array of 100,000 elements with Criterium. The canonical example is the dot product:

\begin{gather*} \vec{x} = [x_1, x_2,\ldots, x_n]\\ \vec{y} = [y_1, y_2,\ldots, y_n]\\ \vec{x} \cdot \vec{y} = \sum_{i=1}^n x_i y_i \end{gather*}

So, let's create two large arrays and a function for the dot product:

(def arr-a (double-array (range 100000)))
(def arr-b (double-array (range 100000)))

(defn dot-seq [xs ys]
  (reduce + (map * xs ys)))

;; (quick-bench (dot-seq arr-a arr-b))
(dot-seq arr-a arr-b)

3.3332833335E14

Execution time mean : 10.074604 ms

It's not that fast…
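As a sanity check (an addition, not in the original text): with both arrays equal to (range 100000), the dot product is the sum of squares 0² + 1² + … + 99999², which has the closed form (n−1)n(2n−1)/6:

```clojure
;; Closed-form sum of squares for i = 0..n-1, with n = 100000.
(let [n 100000]
  (/ (* (dec n) n (dec (* 2 n))) 6))
;; => 333328333350000, printed as 3.3332833335E14 when accumulated as doubles
```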

Rich Hickey knew this was a problem, so he created the macros amap and areduce to help us with that:

(defn dot-areduce [xs ys]
  (areduce ^doubles xs idx res 0.0
           (+ res (* (aget ^doubles xs idx)
                     (aget ^doubles ys idx)))))

;; (quick-bench (dot-areduce arr-a arr-b))
(dot-areduce arr-a arr-b)

3.3332833335E14

Execution time mean : 94.521934 µs

100 times faster than the seq way!

Much faster, but more verbose. To avoid the reflection penalty, we had to sprinkle the code with ^doubles type hints. This means that our function works only with doubles, and we would have to create separate versions for floats, longs, ints, etc. amap and areduce had to be macros, since working with primitives is explicit.
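To see that duplication concretely, here is a hypothetical float variant (dot-areduce-floats is a name invented for illustration, not from the source); the logic is identical, only the type hints and the initial value change:

```clojure
;; The same algorithm as dot-areduce, repeated wholesale
;; just to handle float[] instead of double[].
(defn dot-areduce-floats [xs ys]
  (areduce ^floats xs idx res (float 0.0)
           (+ res (* (aget ^floats xs idx)
                     (aget ^floats ys idx)))))
```

Multiply this by every primitive type you need to support, and the maintenance burden becomes obvious.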