In an earlier post we were looking at how to calculate an average using JavaScript’s array methods. And in that article we ran into a dilemma. On the one hand, we could build our solution out of small, simple functions. But that meant doing many passes over the one array. On the other hand, we could do everything in a single pass. But that meant creating a hideously complex reducer. We were forced to choose between elegance and efficiency.

In the same article though, I hinted at another way. A solution that would give us the elegance of using small, simple functions. But also the efficiency of doing our processing in a single pass through the array. What is this magical solution? It’s a concept called a transducer.

Transducers are very cool. They give us a lot of power. But they are also a bit abstract. And that makes them hard to explain. So I could write an epic post explaining where transducers came from and how they work…. But someone else has already done it. Eric Elliott has written a lengthy article that explains transducers in depth. So rather than repeat his work, I’m going to encourage you to read that.

So what’s the point of this article then? If Mr Elliott explains transducers so well, what else is left to say? Well, two things:

1. Even after reading Mr Elliott’s article twice, I still found it tricky to get my head around. So I thought I’d have a go at explaining how I understand them.
2. I thought it might be instructive to apply transducers to a specific problem. That way, we can see them in action and make things concrete. So, in this article, I’ll solve the same problem from my previous article.

Transducers are hard. It may take a couple of attempts to get your head around them. So if you’re still confused after reading Mr Elliott’s article, maybe this one might help you along the way.

A practical application of transducers

So, let’s refresh our memory on the problem we’re trying to solve. We have some data about Victorian-era slang terms:

```javascript
const victorianSlang = [
    { term: 'doing the bear',      found: true,  popularity: 108 },
    { term: 'katterzem',           found: false, popularity: null },
    { term: 'bone shaker',         found: true,  popularity: 609 },
    { term: 'smothering a parrot', found: false, popularity: null },
    { term: 'damfino',             found: true,  popularity: 232 },
    { term: 'rain napper',         found: false, popularity: null },
    { term: 'donkey’s breakfast',  found: true,  popularity: 787 },
    { term: 'rational costume',    found: true,  popularity: 513 },
    { term: 'mind the grease',     found: true,  popularity: 154 },
];
```

We’d like to find the average of all the entries that have a popularity score. Now, one way to solve the problem is using .filter(), .map() and .reduce(). It might look something like this:

```javascript
// Helper functions
// ----------------------------------------------------------------------------

function isFound(item) {
    return item.found;
}

function getPopularity(item) {
    return item.popularity;
}

// We use an object to keep track of multiple values in a single return value.
function addScores({totalPopularity, itemCount}, popularity) {
    return {
        totalPopularity: totalPopularity + popularity,
        itemCount: itemCount + 1,
    };
}

// Calculations
// ----------------------------------------------------------------------------

const initialInfo = {totalPopularity: 0, itemCount: 0};
const popularityInfo = victorianSlang
    .filter(isFound)
    .map(getPopularity)
    .reduce(addScores, initialInfo);

// Calculate the average and display.
const {totalPopularity, itemCount} = popularityInfo;
const averagePopularity = totalPopularity / itemCount;
console.log("Average popularity:", averagePopularity);
```

The problem with this approach is that we have to traverse the array three times:

1. Once to filter out the un-found items;
2. Again to extract the popularity scores;
3. And once more to calculate the total.

This isn’t so bad, except that we’re creating at least two intermediate arrays. These could potentially take up a lot of memory (if we had a larger data set).

But the good thing about this approach is that it breaks the task down into three easy sub-tasks.

Another way to think about transducers

Now, how do we get from our problem to transducers? To make the transition easier, let’s try a thought experiment. Imagine that someone with a lot of power outlawed the use of .filter(), .map() and .flatMap() in JavaScript. It’s a silly thought experiment, I know, but humour me. Imagine you couldn’t use the built-in .filter() or .map() methods. And neither could you write your own versions using for-loops. What would we do?

This situation wouldn’t faze us too much, because we know that we can use .reduce() to do the job of both .filter() and .map(). Here’s how that might look:

```javascript
// Helper functions
// ----------------------------------------------------------------------------

function isFound(item) {
    return item.found;
}

function getPopularity(item) {
    return item.popularity;
}

function filterFoundReducer(foundItems, item) {
    return isFound(item) ? foundItems.concat([item]) : foundItems;
}

function mapPopularityReducer(scores, item) {
    return scores.concat([getPopularity(item)]);
}

// We use an object to keep track of multiple values in a single return value.
function addScores({totalPopularity, itemCount}, popularity) {
    return {
        totalPopularity: totalPopularity + popularity,
        itemCount: itemCount + 1,
    };
}

// Calculations
// ----------------------------------------------------------------------------

const initialInfo = {totalPopularity: 0, itemCount: 0};
const popularityInfo = victorianSlang
    .reduce(filterFoundReducer, [])
    .reduce(mapPopularityReducer, [])
    .reduce(addScores, initialInfo);

// Calculate the average and display.
const {totalPopularity, itemCount} = popularityInfo;
const averagePopularity = totalPopularity / itemCount;
console.log("Average popularity:", averagePopularity);
```

Notice how we chain .reduce() three times there. We’ve converted our main calculation so that it uses only .reduce() . The imaginary ban on .filter() and .map() hasn’t stopped us. But if this ban were to continue, we might want to make life easier on ourselves. We could save some effort by creating functions for building reducers. For example, we could create one for making filter-style reducers. And we could build another for creating map-style reducers:

```javascript
function makeFilterReducer(predicate) {
    return (acc, item) => predicate(item) ? acc.concat([item]) : acc;
}

function makeMapReducer(fn) {
    return (acc, item) => acc.concat([fn(item)]);
}
```

Nice and simple, aren’t they? If we were to use them on our average calculation problem, it might look like this:

```javascript
const filterFoundReducer = makeFilterReducer(isFound);
const mapPopularityReducer = makeMapReducer(getPopularity);
```
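As a quick sanity check, here’s how those factory-built reducers behave on a tiny array of numbers. (This is a standalone sketch; the factories are restated so the snippet runs on its own.)

```javascript
// The factories from above, restated so this snippet is self-contained.
const makeFilterReducer = predicate =>
    (acc, item) => (predicate(item) ? acc.concat([item]) : acc);

const makeMapReducer = fn =>
    (acc, item) => acc.concat([fn(item)]);

// They behave just like .filter() and .map(), but via .reduce():
const evens = [1, 2, 3, 4].reduce(makeFilterReducer(n => n % 2 === 0), []);
const doubled = [1, 2, 3].reduce(makeMapReducer(n => n * 2), []);

console.log(evens);   // [2, 4]
console.log(doubled); // [2, 4, 6]
```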

But, so what? We’re not any closer to solving the average problem more efficiently. When do we get to the transducers? Well, as Mr Elliott says in his article, transducers are tools for modifying reducers. To put it another way, we can think of a transducer as a function that takes a reducer and returns another reducer. If we were to describe that with Haskell types, it might look something like this:

```
type Reducer = (a, b) => a

transducer :: Reducer -> Reducer
```

What that means is: A transducer takes a reducer function as input, and transforms it in some way. We give it a reducer, and it gives us another reducer function back.

Now, we’ve just modified our average-calculating code so that it only uses reducers. No more .filter() and .map() . Instead, we have three separate reducers. So, we’re still traversing the array three times. But what if, instead of three reducers, we used transducers to combine them into one?

So we could, for example, take a reducer and modify it so that some items were filtered out. The first reducer still runs, but it just never sees some values. Or, we could modify a reducer so that every item passed to it was transformed or mapped to a different value. That is, every item is transformed before the reducer sees it. In our case, that might look something like this:

```javascript
// Make a function that takes a reducer and returns a
// new reducer that filters out some items so that the
// original reducer never sees them.
function makeFilterTransducer(predicate) {
    return nextReducer => (acc, item) =>
        predicate(item) ? nextReducer(acc, item) : acc;
}

// Make a function that takes a reducer and returns a new
// reducer that transforms every item before the original
// reducer gets to see it.
function makeMapTransducer(fn) {
    return nextReducer => (acc, item) => nextReducer(acc, fn(item));
}
```

Earlier, we made convenience functions for creating reducers. Now, instead, we’ve created convenience functions for changing reducers. Our makeFilterTransducer() function takes a reducer and sticks a filter in front of it. Our makeMapTransducer() function takes a reducer and modifies every value going into it. In our average calculation problem, we have a reducer function at the end, addScores() . We can use our new transducer functions to map and filter the values going into it. We would end up with a new reducer that does all our filtering, mapping, and adding in one step. It might look like this:

```javascript
const foundFilterTransducer = makeFilterTransducer(isFound);
const scoreMappingTransducer = makeMapTransducer(getPopularity);

const allInOneReducer = foundFilterTransducer(scoreMappingTransducer(addScores));

const initialInfo = {totalPopularity: 0, itemCount: 0};
const popularityInfo = victorianSlang.reduce(allInOneReducer, initialInfo);

// Calculate the average and display.
const {totalPopularity, itemCount} = popularityInfo;
const averagePopularity = totalPopularity / itemCount;
console.log("Average popularity:", averagePopularity);
```

And now, we’ve managed to calculate our average in a single pass. We’ve achieved our goal. We are still building our solution out of tiny, simple functions. (They don’t get much simpler than isFound() and getPopularity().) But we do everything in a single pass. And notice that we were able to compose our transducers together. If we wanted, we could even string a bunch of them together with compose(). This is why smart people like Mr Elliott and Rich Hickey think they’re so interesting.
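To make that composition idea concrete, here’s a standalone sketch that strings three hand-rolled transducers together with a minimal compose() helper. The names and data here are illustrative, not from any library:

```javascript
// A minimal right-to-left compose helper.
const compose = (...fns) => x => fns.reduceRight((acc, fn) => fn(acc), x);

// The same transducer factories described in the article.
const makeFilterTransducer = predicate =>
    nextReducer => (acc, item) => (predicate(item) ? nextReducer(acc, item) : acc);

const makeMapTransducer = fn =>
    nextReducer => (acc, item) => nextReducer(acc, fn(item));

// Stack three transformations into one transducer…
const bigTransducer = compose(
    makeFilterTransducer(n => n % 2 === 0), // keep even numbers
    makeMapTransducer(n => n * 10),         // scale them up
    makeMapTransducer(n => n + 1),          // then add one
);

// …and run everything in a single pass over the array.
const sumReducer = (total, n) => total + n;
const result = [1, 2, 3, 4].reduce(bigTransducer(sumReducer), 0);
console.log(result); // 21 + 41 = 62
```

Note that compose() builds one reducer wrapped in three layers, so the array is still traversed exactly once no matter how many transducers we stack.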

There’s a lot more to explore with transducers though. This is just one specific application. If you want to dive in and start using them in your projects, please take note of a few things first:

- I’ve used non-standard function names in this article to try and make their purpose clear. For example, I use the argument name nextReducer, where Mr Elliott uses step. As a result, the solution here looks a bit uglier because of the long names. If you read Mr Elliott’s article, he uses more standard names and everything looks a bit more elegant.
- As Mr Elliott suggests in his article, it’s (usually) best to use someone else’s transducer library. This is because the version written here has been simplified to help make the concepts clear. In practice, there are several edge cases and rules to handle. A well-written implementation will take care of that for you.

Transducers in Ramda

Speaking of well-written implementations, Ramda has one built in for processing arrays. I thought I’d show how our problem works in Ramda, because its way of doing things is a little bit magical. So magical, in fact, that it’s hard to see what’s going on. But once you get it, it’s brilliant.

So, the thing that stumped me for quite a while is that with Ramda, you don’t need to make transducer factories. We don’t need makeFilterTransducer() or makeMapTransducer() . The reason is, Ramda expects you to use its plain ol’ filter() and map() functions. It does some magic behind the scenes and converts them into a transducer for us. And it does all the work of complying with the reducer rules for us as well.

So, how would we solve the sample problem with Ramda? Well, we would start by using the transduce() function. It takes four parameters:

1. The first is a ‘transducer’. But, as we mentioned, we just compose plain old Ramda utilities.
2. Then, we pass a final reducer to transform.
3. And then an initial value.
4. And finally, the array to process.

Here’s how our solution might look:

```javascript
import {compose, filter, map, transduce} from 'ramda';

// Our utility functions…
function isFound(item) {
    return item.found;
}

function getPopularity(item) {
    return item.popularity;
}

function addScores({totalPopularity, itemCount}, popularity) {
    return {
        totalPopularity: totalPopularity + popularity,
        itemCount: itemCount + 1,
    };
}

// Set up our 'transducer' and our initial value.
const filterAndExtract = compose(filter(isFound), map(getPopularity));
const initVal = {totalPopularity: 0, itemCount: 0};

// Here's where the magic happens.
const {totalPopularity, itemCount} = transduce(
    filterAndExtract, // Transducer function (Ramda magically converts it)
    addScores,        // The final reducer
    initVal,          // Initial value
    victorianSlang,   // The array we want to process
);

// And spit out the average at the end.
const averagePopularity = totalPopularity / itemCount;
console.log("Average popularity:", averagePopularity);
```

One thing to note here is that in compose() , I’ve written filter() first, then map() . This isn’t a mistake. It’s a quirk of how transducers work. The order you compose is reversed from the usual. So filter() is applied before map() . And this isn’t a Ramda thing either. It’s all transducers. You can see how it happens if you read the examples above (not the Ramda ones).
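We can demonstrate that reversal with plain JavaScript (no Ramda), using the same hand-rolled transducer factories from earlier. This is a standalone sketch with illustrative names:

```javascript
// Hand-rolled transducer factories, same shape as earlier in the article.
const makeFilterTransducer = predicate =>
    nextReducer => (acc, item) => (predicate(item) ? nextReducer(acc, item) : acc);

const makeMapTransducer = fn =>
    nextReducer => (acc, item) => nextReducer(acc, fn(item));

const compose = (f, g) => x => f(g(x));

// compose(filter, map) means the filter transducer wraps the map
// transducer. The OUTERMOST wrapper sees each item first, so the
// predicate runs before the mapping function, even though compose
// normally applies its right-hand function first.
const keepSmall = makeFilterTransducer(n => n < 10);
const double = makeMapTransducer(n => n * 2);

const toArray = (acc, item) => acc.concat([item]);

const result = [3, 7, 12].reduce(compose(keepSmall, double)(toArray), []);
console.log(result); // [6, 14] — 12 was filtered out BEFORE doubling
```

If the usual composition order applied (map first), 7 would be doubled to 14 and then filtered out, giving [6] instead. That difference is the quirk in action.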

One final thing to point out: Transducers are not just limited to processing arrays. They can work with trees, observables (think RxJS) or streams (see Highland.js). Anything that has some concept of reduce() , really. And that’s kind of the dream of functional programming. We write tiny, simple functions like isFound() and getPopularity() . Then we piece them together with things like transduce() and reduce() . And we end up with powerful, performant programs.
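For example, nothing in our transducer machinery assumes arrays. Here’s a sketch of a generic reduce over any iterable (the reduceIterable() helper is hypothetical, written for this illustration), feeding values from a generator through the same kind of transducer:

```javascript
// A generic reduce that works on ANY iterable: arrays, generators,
// or custom data structures. Illustrative only.
function reduceIterable(reducer, initial, iterable) {
    let acc = initial;
    for (const item of iterable) {
        acc = reducer(acc, item);
    }
    return acc;
}

// A data source that isn't an array: a generator.
function* numbers() {
    for (let i = 1; i <= 5; i++) yield i;
}

// The same transducer factory from earlier works unchanged,
// because it only ever talks to the reducer.
const makeMapTransducer = fn =>
    nextReducer => (acc, item) => nextReducer(acc, fn(item));

const sum = (total, n) => total + n;
const sumOfSquares = reduceIterable(
    makeMapTransducer(n => n * n)(sum),
    0,
    numbers(),
);
console.log(sumOfSquares); // 1 + 4 + 9 + 16 + 25 = 55
```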

So, to sum up, transducers are great. But they can also be confusing. So if anything I’ve written here confused you, please send me a tweet and let me know. I’d love to hear about it so I can try to improve the explanation. And of course, if you found it useful or helpful, I’d love to hear about that too.