plists is a drop-in replacement for the Erlang module lists, making most list operations parallel. It can operate on each element in parallel, which suits IO-bound operations; on sublists in parallel, to take advantage of multi-core machines for CPU-bound operations; and across Erlang nodes, to parallelize inside a cluster. It handles errors and node failures, and can be configured, tuned, and tweaked for optimal performance while minimizing overhead.

Almost all of the functions are identical to their equivalents in lists and return exactly the same results. Each comes in two forms: one with an identical signature that operates on every element in parallel, and one that takes an additional “malt”, a specification of how to parallelize the operation.

fold is the one exception: parallel fold is different from linear fold. This module also includes a simple mapreduce implementation, and the function runmany. All the other functions are implemented with runmany, which is a generalization of parallel list operations.
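To make the fold difference concrete, here is a hedged sketch. It assumes a plists:fold form that takes a second “fuse” function for combining the accumulators produced by different parallel sublists (the exact arity and argument order are assumptions, not confirmed by this introduction). For a sum, folding and fusing are both addition:

```erlang
% Fold within each sublist with the first fun, then fuse the
% per-sublist results with the second fun (assumed interface):
Sum = plists:fold(fun (X, Acc) -> X + Acc end,   % fold one element into an accumulator
                  fun (A1, A2) -> A1 + A2 end,   % fuse two sublist accumulators
                  0,
                  lists:seq(1, 1000)),
```

With an associative operation and an identity initial accumulator like this one, the parallel result matches lists:foldl/3 over the same list; operations without those properties may give different results than a linear fold.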

plists makes it easy to parallelize list operations. Suppose you have a list of Erlang nodes and want to know which are actually up:

NodeList = [apple@laptop, orange@desktop],
UpNodes = lists:filter(fun (Node) ->
                           % net_adm:ping returns pong if it can connect
                           % to a node, pang otherwise
                           net_adm:ping(Node) =:= pong
                       end, NodeList),

This works. It goes over every element in the node list and removes those that don’t respond. However, it checks only one node at a time, and even a node that is up can take a while to respond. If a node is down, you sit waiting for a timeout, and you wait on every single down node in turn.

UpNodes2 = plists:filter(fun (Node) ->
                             % net_adm:ping returns pong if it can connect
                             % to a node, pang otherwise
                             net_adm:ping(Node) =:= pong
                         end, NodeList),

This is almost exactly the same, with two differences: we store the result in UpNodes2, and we use plists:filter instead of lists:filter. plists:filter spawns a process for every single node we want to check, so the call takes no longer than the slowest response, or the timeout if a node is down.

This is the easiest way to use plists. Simply replace a call to a function in lists with a call to the corresponding function in plists, and plists operates on each element in parallel.
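As a minimal sketch of the drop-in pattern, a sequential map becomes parallel by changing only the module name (plists:map appears again in the next example):

```erlang
% Sequential: one element at a time.
Squares  = lists:map(fun (X) -> X * X end, [1, 2, 3, 4]),
% Parallel drop-in: one process per element, same result.
PSquares = plists:map(fun (X) -> X * X end, [1, 2, 3, 4]),
% Both are [1, 4, 9, 16].
```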

For more advanced use, invoke the plists function with an optional extra parameter, the malt. The malt describes how to parallelize the list operation. Suppose we have a huge list of numbers and we need to find the square root and the factors of each one. Suppose we also have a dual-core machine we want to use to full effect.

HugeList = lists:seq(1, 100000),
SquareRoots = plists:map(fun math:sqrt/1, HugeList, {processes, 2}),
Factors = plists:map(fun find_factors/1, HugeList, 10),

The last parameter of the plists:map call is the malt. For computing square roots, we used the malt {processes, 2}. This malt splits the huge list into two sublists, each processed by its own process, so both cores find square roots at once.

This would not have been a good way to find the factors, however, because the time it takes to factor a number depends on its size. Bigger numbers take longer to factor, and if one process gets the small numbers and the other the large ones, the process with the small numbers would finish first, leaving the process with the big numbers to keep working on one core. Our list is sorted, so we can be certain that splitting it in half would cause exactly that. Instead we use the malt 10, which splits the huge list into 10-element sublists, each processed by its own process.

If it turns out factoring uses a large amount of memory, so that we don’t want to process every sublist at once, we can use the malt [10, {processes, 2}]. The huge list is still split into 10-element sublists, but plists is careful to run only two processes at a time. If we want to compute with an Erlang cluster, we can use the malt [10, {nodes, [{apple@laptop, 2}, {orange@desktop, 3}]}]. This runs five processes at once: two on apple and three on orange.
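The malt variants discussed above, collected into one sketch (find_factors/1 is the hypothetical factoring function from the earlier example):

```erlang
% 10-element sublists, one process per sublist:
Factors1 = plists:map(fun find_factors/1, HugeList, 10),
% 10-element sublists, but at most two processes running at once:
Factors2 = plists:map(fun find_factors/1, HugeList, [10, {processes, 2}]),
% 10-element sublists spread across a cluster: two processes on
% apple@laptop and three on orange@desktop, five running at once:
Factors3 = plists:map(fun find_factors/1, HugeList,
                      [10, {nodes, [{apple@laptop, 2}, {orange@desktop, 3}]}]),
```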

Just because we started many parallel processes doesn’t mean we have to wait for them all to finish. If any error occurs, plists prematurely terminates all processes and returns, a feature we can [ab]use to find the node in our NodeList with the smallest ping time.

NodeList = [apple@laptop, orange@desktop],
try
    plists:foreach(fun (Node) ->
                       % net_adm:ping returns pong if it can connect
                       % to a node, pang otherwise
                       case net_adm:ping(Node) of
                           pong -> exit({closest, Node});
                           pang -> ok
                       end
                   end, NodeList)
of
    ok -> exit(all_nodes_dead)
catch
    exit:{closest, Closest} -> Closest
end,
% Closest is now bound to the closest node.

This pings all nodes in parallel, but as soon as one responds, the whole operation terminates and returns via exit. We catch the exit and recover the closest node.

This introduction has only covered the most basic features plists offers. Complete documentation is available at http://freeyourmind.googlepages.com/plists.html, and the project page with downloads and source repository on code.google.com.

plists shows off not only Erlang’s powerful concurrency, but also how ready functional programming is for concurrent programming. Because operations are already abstracted, it is easy to change how they are executed without changing their meaning. Adapting Erlang code to work with plists is very easy, and optimizing malts is a small extra step. Writing plists itself was easy. By contrast, parallelizing even a single loop in C would cause enormous code expansion, and there would still be no mechanism for optimizing all loops. Higher-order functions encourage reuse of code devoted solely to control flow, an absolutely necessary feature for building real concurrent software.