Functional Programming (FP) By Any Other Name…

Don't worry, this is not YAMA (yet another monad article)! Instead, I want to talk about a library I've recently released that offers a helpful twist on typical functional programming ("FP") operations (like map(..) , compose(..) , etc).

Before we jump in: if you're like me and have tried to understand FP (and how to apply it to JavaScript), only to be frustrated and intimidated by crazy terminology like "functors" or fancy notation like L ::= x | (L L) | (λx.L) , you might want to check out my latest book, Functional-Light JS (which you can read for free online!).

My book has a very different take; it approaches FP informally, from the ground-up, without being as heavy on terminology, and relies on almost no notation. The goal is to pragmatically explain the important fundamental concepts in ways you can actually use in your programs.

Note: From here on I'm going to expect you're familiar with ES6 features like ... spread and destructuring. Still fuzzy on those? No worries, I wrote a book on that, too! Check out You Don't Know JS: ES6 & Beyond, especially Chapter 2.

The Problem

There are already plenty of great FP libraries for JS, so why did I have the idea to build a new one!? Let me explain my motivations. Bear with me, because I want you to fully understand them to get why we need YAFPL. :)

Let's start first by looking at some code which illustrates one of my many frustrations as I've been learning and trying to work more with FP in my JavaScript. I'm going to use Ramda for this comparison, but any ol' regular FP-JS library will do:

```js
function lowercase(v) { return v.toLowerCase(); }
function uppercase(v) { return v.toUpperCase(); }

var words = ["Now","Is","The","Time"];
var moreWords = ["The","Quick","Brown","Fox"];

var f = R.map( uppercase );

f( words );      // ["NOW","IS","THE","TIME"]
f( moreWords );  // ["THE","QUICK","BROWN","FOX"]
```

As with all methods in Ramda, R.map(..) is curried, which means that even though it expects 2 arguments, we can call it with just uppercase , making a more specialized f(..) function that's then waiting for an array to map over. That lets us then call f(..) with different arrays, uppercasing each value in them, respectively.

What you may not realize is that inherently, the order of these arguments matters. R.map(..) expects the mapper function first and then the array. In this case, that's convenient for us because we want to specialize it in that sequence (mapper function first, array(s) later).

But what if we need to specialize in a different sequence (array first, mapper function later)? This is possible, but takes a little extra work:

```js
var p = R.flip( R.map )( words );

p( lowercase );  // ["now","is","the","time"]
p( uppercase );  // ["NOW","IS","THE","TIME"]
```

We want to specify words first, making a p(..) that later takes a mapper function. Our specialization is with the second argument instead of the first.

To accomplish this, we have to R.flip(..) the R.map(..) function. flip(..) makes a function wrapper that swaps the first two arguments when passing to the underlying function. By flipping the argument order of R.map(..) , it now expects the array first, and the mapper function second.
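To make the swapping concrete, here's a minimal sketch of a flip(..) helper for curried two-argument functions. This is not Ramda's actual implementation, and the hand-curried map(..) here is just an illustrative stand-in for R.map(..):

```js
// a minimal sketch of flip(..) for curried two-argument
// functions: the wrapper receives the arguments in reversed
// order, then calls the underlying function in its own order
function flip(fn) {
    return function flipped(arg1) {
        return function (arg2) {
            return fn( arg2 )( arg1 );
        };
    };
}

// a hand-curried stand-in for R.map, for illustration only
var map = fn => arr => arr.map( fn );

var p = flip( map )( ["Now","Is"] );

p( v => v.toLowerCase() );   // ["now","is"]
p( v => v.toUpperCase() );   // ["NOW","IS"]
```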

In other words, to work with standard FP methods across any of the FP libraries, you have to remember their argument order -- keep those docs handy! -- and if it happens to be in an inconvenient order, you're stuck doing this juggling. On more than one occasion, I've had to flip a method, pass in an argument, flip it again to pass in another argument, etc. All that juggling can quickly get out of hand!

Another frustration that arises from positional arguments is when you need to skip one (probably because it has a default you want to fall back on). For this example, I'm going to use lodash/fp :

```js
function concatStr(s1,s2) { return s1 + s2; }

var words = ["Now","Is","The","Time"];

_.reduce( concatStr, _, words );        // NowIsTheTime
_.reduce( concatStr, "Str: ", words );  // Str: NowIsTheTime
```

The _.reduce(..) function expects arguments in this order: reducerFunction , initialValue , arr . The common understanding of reduce(..) in JS is that if you don't want to provide an initialValue , it doesn't just default to some magic empty value, but rather changes the behavior of the operation itself. Basically, it starts the reduction with the second element in the array, using the first element as the initialValue ; this results in overall one less call to the reducer function ( concatStr(..) ).
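You can see this same behavior with JS's built-in Array#reduce(..), which follows the same convention:

```js
function concatStr(s1,s2) { return s1 + s2; }

var words = ["Now","Is","The","Time"];

// with an initialValue: the reducer is called once
// per element (4 calls)
words.reduce( concatStr, "Str: " );   // "Str: NowIsTheTime"

// without an initialValue: the first element seeds the
// reduction, so the reducer is called only 3 times
words.reduce( concatStr );            // "NowIsTheTime"
```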

Unfortunately, JS doesn't let us just omit an argument in a call list, like _.reduce( concatStr,, words ) . That would be cool, but no such luck. Instead, awkwardly, we have to pass a placeholder. Lodash lets us use _ as the placeholder by default, but in general, you typically have to use undefined .

Tip: There is a syntactic trick to avoid needing the placeholder in a normal JS function call: foobar( ...[1,2,,4] ) . What we do is use an array literal, which does allow "elision" (skipping a value), and then spread it out using the ES6+ ... spread operator. foobar(..) here would receive the arguments 1 , 2 , undefined , and 4 in its first four parameter positions. I'm not sure that hoop jumping is any better (and it may have some perf downsides!).
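To see the trick in action, here's a hypothetical foobar(..) whose third parameter has a default (the default value is my invention, just to show the hole triggering it):

```js
// the third parameter has a default, for illustration
function foobar(a, b, c = 42, d) {
    return [a, b, c, d];
}

// the hole in the array literal spreads as `undefined`,
// so the parameter default for `c` kicks in
foobar( ...[1, 2, , 4] );   // [1, 2, 42, 4]
```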

In any case, juggling argument order and jumping through hoops to skip arguments at the call site is a common frustration in JS. It just happens to be a rather acute pain in FP as you end up needing to use those API methods in different ways more often than with just normal application functions.

The Solution: Named Arguments

Some languages have a syntax for naming arguments at the call site (not just naming parameters in the function declaration). For example, in Objective-C:

```objc
[window addNewControlWithTitle:@"Title"
    xPosition:20
    yPosition:50
    width:100
    height:50
    drawingNow:YES];
```

Here, you're calling the addNewControlWithTitle(..) function, and telling the system which parameter each value should be applied to, regardless of what order they may be listed in that function's declaration.

The benefit of named arguments is that you control at the call site which order you want to list arguments, and you also can just not list one if you don't want to pass a value for it. The tradeoff is that you have to remember what the parameters are called. Typically, languages and packages will adopt standardized naming conventions to help the parameter names be more intuitive and memorable.

Let me just say, this is not an either/or situation in my mind, in terms of code readability. There are clearly times when positional arguments are preferable, and clearly times when named arguments are preferable. Ideally, a language would let you pick at the call site as you desire.

Unfortunately, JS does not have named arguments. However, we do have a pattern that gives us pretty much all the benefits of named arguments. For example:

```js
function foo(x,y = 2,z) {
    console.log( x, y, z );
}

function bar({ x, y = 2, z }) {   // <-- parameter object destructuring
    console.log( x, y, z );
}

foo( 1, undefined, 3 );  // 1 2 3
bar( {z:3, x:1} );       // 1 2 3
```

Note: Typically you will want a bar(..) style function declaration to look like: function bar({ x, y = 2, z } = {}) { .. } . That = {} parameter default means the bar(..) function degrades gracefully if called without an object at all.

With foo(..) we're using traditional positional arguments style, including the middle one ( y ) having a default. With bar(..) however, we're using the JS named-arguments idiom. First, we use parameter object destructuring in the parameter list. That essentially means we're declaring that we expect bar(..) to always be called with a single object as its argument. That object's properties are then destructured to be interpreted as the function's actual individual arguments, x , y , and z ; again, y also has a default.

The call sites for foo(..) and bar(..) differ, too. For bar(..) , we pass in an object with properties instead of individual values (with undefined as a positional placeholder). The object-argument can list properties (named arguments) in any order, and omit any that it doesn't want to specify. Nice!

Adaptation

My personal rule of thumb is that I now prefer to define any function that takes 3 or more arguments (especially if one or more have defaults!) with the named-arguments style. But that's only helpful when I'm in control of the function declaration and can make that decision.

What if I have a function like R.map(..) (or any other normal function in the application!) but I want to use named arguments at the call site?

To do so, we need to adapt a positional-arguments style function to be named-arguments style. Let's imagine such a helper for that; we'll call it apply(..) :

```js
function apply(fn,props) {
    return function applied(argsObj) {
        // map properties from `argsObj` to an array,
        // in the order of property names in `props`
        var args = [], i = 0;
        for (let prop of props) {
            args[i++] = argsObj[prop];
        }

        return fn( ...args );
    };
}
```

Since objects are fundamentally unordered, we pass a props array which lists the property names in the order we want them to map to the positional arguments of the underlying function.

Let's use this utility now:

```js
var map = apply( R.map, ["fn","arr"] );

map( {arr: words, fn: lowercase} );
// ["now","is","the","time"]
```

OK, sorta cool, huh?

Unfortunately, the resulting map(..) is no longer usefully curried, so we can't really take advantage of this capability in any interesting way. Wouldn't it really be cool if we could do:

```js
var map = someSuperCoolAdapter( R.map, ["fn","arr"] );

var f = map( {fn: uppercase} );
f( {arr: words} );      // ["NOW","IS","THE","TIME"]
f( {arr: moreWords} );  // ["THE","QUICK","BROWN","FOX"]

var p = map( {arr: words} );
p( {fn: lowercase} );   // ["now","is","the","time"]
p( {fn: uppercase} );   // ["NOW","IS","THE","TIME"]
```

To do that, we'd probably need an apply(..) that was smart enough to automatically curry across multiple named-arguments calls. I won't show how we'd do that, for brevity's sake. But it's an interesting exercise for the reader. Another wrinkle: is there any way this adapter could figure out what property names to use by default? It is possible, if you parse the function definition (string regex parsing!). Again, I'll leave that for the reader to explore!
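If you want a head start on that exercise, here's one rough sketch of the auto-currying idea (the curriedApply(..) name and approach are hypothetical, not how FPO or any other library actually implements it): accumulate named arguments across calls, and only invoke the underlying function once every expected property has arrived.

```js
// hypothetical sketch: an apply(..) that auto-curries across
// named-arguments calls, invoking the underlying function only
// once all expected properties have been collected
function curriedApply(fn, props) {
    function nextCall(collected) {
        return function curried(argsObj) {
            var allArgs = Object.assign( {}, collected, argsObj );

            // do we have every expected named argument yet?
            if (props.every( p => p in allArgs )) {
                return fn( ...props.map( p => allArgs[p] ) );
            }

            // not yet: remember what we have, wait for more
            return nextCall( allArgs );
        };
    }

    return nextCall( {} );
}

// a stand-in for R.map, for illustration only
var map = curriedApply( (fn, arr) => arr.map( fn ), ["fn","arr"] );

var f = map( {fn: v => v.toUpperCase()} );
f( {arr: ["Now","Is"]} );   // ["NOW","IS"]
```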

What about adapting the other direction? Say we have a named-arguments style function, but we just want to use it as a normal positional-arguments style function. We need a companion utility that does the inverse of apply(..) ; we'll call this one unapply(..) :

```js
function unapply(fn,props) {
    return function unapplied(...args) {
        // map `args` values to an object,
        // with property names from `props`
        var argsObj = {}, i = 0;
        for (let arg of args) {
            argsObj[ props[i++] ] = arg;
        }

        return fn( argsObj );
    };
}
```

And using it:

```js
function foo({ x, y, z } = {}) {
    console.log( x, y, z );
}

var f = unapply( foo, ["x","y","z"] );

f( 1, 2, 3 );  // 1 2 3
```

Same problem here with currying. But at least we can now envision how armed with these two utilities, we can interoperate with positional-arguments style and named-arguments style functions, as we see fit!

Reminder: all of this is entirely separate from whether we're dealing with an FP library or not. These concepts apply (pun intended) with any of your functions in your application. You can now freely define functions with either style as appropriate, and choose at the call site how you want to interface with a function. That's very powerful!

FP Library Already?

Good grief, that was a really long preamble to ostensibly the main topic of this article, which is supposed to introduce a new FP library I've released. At least you understand why I wrote it. So now let me get to it!

When conceiving of apply(..) / unapply(..) and playing around with them, I had this thought: what if I had a whole FP library where all the methods were already in named-arguments style? Of course, that library can also provide the apply(..) / unapply(..) helpers to make interop easier. And, for convenience, shouldn't that library also just export all the same methods (in a separate namespace) using the standard positional-arguments style? Ultimate choice in one FP lib, right!?

That's what FPO (pronounced "eff-poh") is all about. FPO is a JS library for FP, but its core methods are all defined in the named-arguments style. As is common with FP libraries, all the methods are also curried, so you can provide arguments in whatever order and sequence you need! And FPO.std.* has all the positional-arguments style methods if you want them.

Want to jump straight to the docs?

Core API -- named-arguments style methods ( FPO.map(..) , etc)

Standard API -- standard positional-arguments style methods ( FPO.std.map(..) , etc). These mostly work like their Ramda counterparts.

Quick Examples

```js
// Note: these functions now expect named-arguments style calls
function lowercase({ v } = {}) { return v.toLowerCase(); }
function uppercase({ v } = {}) { return v.toUpperCase(); }

var f = FPO.map( {fn: uppercase} );
f( {arr: words} );      // ["NOW","IS","THE","TIME"]
f( {arr: moreWords} );  // ["THE","QUICK","BROWN","FOX"]

var p = FPO.map( {arr: words} );
p( {fn: lowercase} );   // ["now","is","the","time"]
p( {fn: uppercase} );   // ["NOW","IS","THE","TIME"]
```

FPO.map(..) is named-arguments style, and already curried. Very easy to use however you want!

As you'll notice, it expects its mapper function to also follow named-arguments style. If you instead want to pass a standard-style mapper function, just apply(..) it first:

```js
function firstChar(v) { return v[0]; }

var f = FPO.apply( {fn: firstChar} );  // <-- auto detects `props`!

FPO.map( {fn: f, arr: words} );
// ["N","I","T","T"]
```

Applying and currying are easy to mix together in your own code, too:

```js
function foo(x,y,z) { console.log( x, y, z ); }

var f = FPO.apply( {fn: foo} );
var g = FPO.curry( {fn: f, n: 3} );

g( {y: 2} )( {x: 1} )( {z: 3} );  // curried named-arguments!
// 1 2 3
```

Unapplying works similarly:

```js
function foo({x, y = 2, z} = {}) { console.log( x, y, z ); }

var f = FPO.unapply( {fn: foo, props: ["x","y","z"]} );

f( 1, undefined, 3 );  // 1 2 3
```

But don't forget easy skipping of named arguments for defaults:

```js
function foo(x,y = 2,z) { console.log( x, y, z ); }

var g = FPO.curry( {
    fn: FPO.apply( {fn: foo} ),
    n: 2   // use `2` here for currying-count to allow skipping
} );

g( {z: 3} )( {x: 1} );  // 1 2 3
```

Composition of named-arguments style functions works, too:

```js
function plus2({ v } = {}) { return v + 2; }
function triple({ v } = {}) { return v * 3; }
function decrement({ v } = {}) { return v - 1; }

FPO.map( {
    arr: [1,2,3,4,5],
    fn: FPO.compose( {fns: [ decrement, triple, plus2 ]} )
} );
// [8,11,14,17,20]

FPO.map( {
    arr: [1,2,3,4,5],
    fn: FPO.pipe( {fns: [ plus2, triple, decrement ]} )
} );
// [8,11,14,17,20]
```

Lastly, the standard positional-argument style methods are still available if you want them:

```js
function concatStr(s1,s2) { return s1 + s2; }

FPO.std.reduce( concatStr, undefined, words );
// NowIsTheTime
```

Note: BTW, if you don't like typing FPO. or FPO.std. in front of all your methods, just alias those objects to whatever you prefer, like var F = FPO, S = FPO.std; . Eventually, FPO will even support ES6 modules style imports where you'll be able to import only the methods you want, into your own lexical scope!

That's a quick overview of what you can do with FPO. Go check out the README overview and API Docs for further information!

Parameter Naming Conventions

FPO has a fairly straightforward approach for parameter naming conventions, which should be reasonable to intuit and learn. A glimpse:

- When a method expects a function, the named argument is `fn`.
- When a method expects a number, the named argument is `n`.
- When a method expects a value, the named argument is `v`.
- ...

The full list of rules is listed here.

Wrap(..) ing Up

OK, that's FPO.

I'm not trying to compete with libraries like Ramda or lodash/fp. They're great. I just wanted to provide some additional flexibility. And in my FP coding so far, I'm finding the tradeoffs and flexibility to be a nice improvement!

I hope you find FPO useful! Let me know in the comments, or chime in on the repo issues if you have suggestions or questions.