Introduction

In the first article in this series, I presented the case for open multi-methods. I also expressed my opinion that such a facility would extend C++ in a very useful and natural manner. I will now get more concrete and present a library that implements open multi-methods using the facilities provided by the latest revision of C++.

The library is called yomm11. It stands for "yorel's open multi-methods". It is free software, distributed under the Boost Software License. You can download a snapshot and track the development on GitHub. The documentation is available here.

What is the Matrix?

This installment will take, for the most part, the form of a tutorial. I will elaborate on the matrix example from the previous article. To recap, matrices fall into several sub-categories: ordinary, diagonal, tridiagonal, etc. The various operations on matrices should use the best algorithm considering the subtypes of the operands. Also, they should yield the most specific type, e.g. the sum of two diagonal matrices should be a diagonal matrix, not a plain matrix. If the exact types of both operands are known at compile time, templates and specializations can be used to this effect, resulting in a code that is both natural and efficient. Thanks to open multi-methods, the same can be achieved when the subtypes are known only at run time.

In this article, I will focus on matrix addition. Please note: the topic is multi-methods, not implementing an industrial strength matrix library - that would take an entire book. I will therefore indulge in many simplifications to keep the code compact and easy to read and explain.

The matrix class is merely a wrapper around a standard shared_ptr to an instance of class abstract_matrix:

class abstract_matrix;

class matrix {
public:
    matrix(shared_ptr<abstract_matrix> impl);
    const abstract_matrix& subtype() const { return *pm; }
private:
    shared_ptr<abstract_matrix> pm;
};

The classes that implement the various matrix subtypes derive from abstract_matrix: ordinary_matrix, diagonal_matrix, tridiagonal_matrix, etc. The various operations on matrices are implemented either as member operators (e.g. operator~, returning a transposed copy of a matrix) or as free operators (e.g. binary operator+, implementing addition). These operators are mere wrappers around open method calls.

In yomm11, methods are declared via the MULTI_METHOD macro. It takes the name of the method, followed by the return type and the argument list. Virtual arguments - i.e. the arguments whose dynamic type is used for method selection - are marked with virtual_. virtual_ is a template that resides in the library's namespace yorel::multi_methods. Virtual arguments must be references to (possibly const) objects. Thus operator+ is implemented like this:

#include <yorel/multi_methods.hpp>

using yorel::multi_methods::virtual_;

MULTI_METHOD(add, matrix,
             const virtual_<abstract_matrix>& m1,
             const virtual_<abstract_matrix>& m2);

inline matrix operator +(const matrix& m1, const matrix& m2) {
    return add(m1.subtype(), m2.subtype());
}

There is no restriction on the number of virtual arguments. The most typical cases are either one or two virtual arguments. Virtual and non-virtual arguments can be mixed in any order. Open method declarations can go in a header file; internally they are constant expressions yielding stateless function objects.

For the moment, the method has no associated implementation. If I were to call it now, an exception of type yorel::multi_methods::undefined would be thrown.

Soon I will add specializations to the add method, for example to handle the case where both arguments are diagonal matrices. But first, let us take a look at the abstract_matrix class hierarchy.

Preparing the Classes

The method dispatcher needs to know about the inheritance relationships between the matrix classes. It also needs to access data structures - specific to each class - to figure out which specialization to call. I will not go into the details of how this is achieved now. For the time being, it suffices to say that the mechanism is very similar to how virtual functions are implemented: each class has an associated table that is indexed by the methods declared in the class or in its parents.

It is possible to use the library with existing classes, without modifying them. Such classes are called "foreign" classes. They are declared by placing calls to the MM_FOREIGN_CLASS macro in an implementation file (or several):

MM_FOREIGN_CLASS(animal);
MM_FOREIGN_CLASS(female, animal);
MM_FOREIGN_CLASS(horse, animal);
MM_FOREIGN_CLASS(mare, horse, female);

It is however preferable, whenever possible, to modify (or create) the classes to make them cooperate with the library. In this situation, method dispatch is very fast: calling an open method with a single virtual argument is almost as fast as calling a virtual function (it is just 1/3 slower on my computer). Calling a yomm11 multi-method with two virtual arguments beats double dispatch.

What do I mean by "cooperate"? Three things:

1. derive from class yorel::multi_methods::selector, directly or indirectly
2. declare its base classes, if any - just like with foreign classes
3. call MM_INIT() in all its constructors

Let us look at two examples - abstract_matrix and ordinary_matrix :

class abstract_matrix : public yorel::multi_methods::selector {
public:
    MM_CLASS(abstract_matrix);

    abstract_matrix(int rows, int cols) : nr(rows), nc(cols) {
        MM_INIT();
    }

    int rows() const { return nr; }
    int cols() const { return nc; }

    virtual double get(int r, int c) const = 0;
    virtual const vector<double>& elements(vector<double>&) const = 0;

private:
    const int nr, nc;
};

class ordinary_matrix : public abstract_matrix {
public:
    MM_CLASS(ordinary_matrix, abstract_matrix);

    ordinary_matrix(int rows, int cols, const vector<double>& init);
    static matrix make(int rows, int cols, const vector<double>& init);

    virtual double get(int r, int c) const {
        return data[r * cols() + c];
    }

    const vector<double>& elements(vector<double>&) const {
        return data;
    }

private:
    vector<double> data;
};

ordinary_matrix::ordinary_matrix(int rows, int cols, const vector<double>& init)
    : abstract_matrix(rows, cols), data(init) {
    MM_INIT();
}

Virtual function get returns an element given its row and column. Fishing elements out one by one, paying the cost of a virtual function call each time, is very inefficient, so this function should be used either when speed does not matter much, or as a last resort.

Virtual function elements returns all the elements in the matrix, flattened in a standard vector. An empty vector is provided for subclasses that do not store the flat representation; they can fill the vector and return a reference to it. ordinary_matrix does not need it; it simply returns a reference to its underlying vector.

Static function ordinary_matrix::make() creates an ordinary matrix from a flat vector of elements and wraps it in a matrix object.

At this point, I can already write a "specialization" for method add in terms of abstract_matrix's interface. This is done via the BEGIN_SPECIALIZATION / END_SPECIALIZATION pair of macros:

BEGIN_SPECIALIZATION(add, matrix,
                     const abstract_matrix& m1, const abstract_matrix& m2) {
    vector<double> result(m1.rows() * m1.cols());
    vector<double> v1, v2;
    const vector<double>& elt1 = m1.elements(v1);
    const vector<double>& elt2 = m2.elements(v2);
    transform(elt1.begin(), elt1.end(), elt2.begin(),
              result.begin(), plus<double>());
    return ordinary_matrix::make(m1.rows(), m1.cols(), result);
} END_SPECIALIZATION;

This "specialization" handles in fact the most general case and relies solely on the abstract_matrix interface to perform its task. Note that the virtual_ specifier is not used in a specialization.

In a moment, I will add the diagonal_matrix subclass. At this point, however, it would be nice to be able to display a matrix, for example by writing it to an ostream. I also anticipate that I will want the exact type of the matrix to be displayed, be it to help with debugging or to demonstrate that adding two diagonal matrices does in fact yield a diagonal matrix - not an ordinary matrix that happens to have all its elements outside the diagonal set to zero.

It looks like a virtual function would be perfect for this task. A plausible implementation would go like this:

class abstract_matrix {
public:
    virtual void write(ostream& os) const = 0;
};

class ordinary_matrix : public abstract_matrix {
public:
    virtual void write(ostream& os) const;
};

inline ostream& operator <<(ostream& os, const matrix& m) {
    m.write(os);
    return os;
}

void abstract_matrix::write(ostream& os) const {
}

void ordinary_matrix::write(ostream& os) const {
    os << "ordinary\n";
    abstract_matrix::write(os);
}

I will, however, use an open method with a single virtual argument to implement the streaming operator. Is it because I have fallen in love with my library? No. There are very good reasons to avoid a virtual function here.

The first reason is that the matrix is not best placed to decide how it should be represented in a stream. Do you remember the discussion about kicking the dog or dogging the kick in the previous article? Well, this is the same argument again. In my little demo program, I want to display the exact type of the matrix, before the elements. This desire is both legitimate and very specific to the demo program. In other contexts, I may want to put all the elements on one line, preceded by the number of rows and columns (for example, to make it possible to reconstruct the matrix from the stream - in which case I will probably want to store the exact type as well). Or I may want to just display the data. Hey, maybe in yet another program I don't want to display matrices at all!

Which brings me to the second reason for avoiding the virtual function approach: it creates a dependency from the matrix component to the iostream library. And it is a hard dependency: any program linking in such a matrix class will automatically pull the iostream library with it - whether it writes matrices onto streams or not. This flies in the face of one of C++'s major design decisions: you should not pay for what you don't use. This may be a real problem if the program runs on a machine with scarce resources.

Finally, please note that the write virtual function does not need privileged access to the matrix's private parts. Yet it gets it, because virtual functions are member functions and are thus granted unchecked access to the class's internals. Soon someone may be accessing its private variables out of sloppiness or absent-mindedness, thus tying his code to details that may change later. Encapsulation is compromised, and that is the price we have to pay in C++ to use polymorphism. It always amazes me that so many people bash the "friend" construct, claiming that it violates encapsulation - I for one believe that it helps tighten it - yet nobody seems to realize that virtual functions are much more crime-inducing.

For all these reasons, I implement the streaming behavior outside of the matrix hierarchy, by means of an open method:

MULTI_METHOD(write, void, ostream& os, const virtual_<abstract_matrix>& m);

ostream& operator <<(ostream& os, const matrix& m) {
    write(os, m.subtype());
    return os;
}

BEGIN_SPECIALIZATION(write, void, ostream& os, const abstract_matrix& m) {
    // write the elements of m to os - omitted for brevity
} END_SPECIALIZATION;

Instead of having matrix depend on the stream library, I now have an optional "write" functionality that depends on both matrix and iostreams - and can be linked in separately. This is a very clean solution to the "seam" problem typically encountered in multi-tiered applications (with matrices as the business tier and iostreams as the presentation tier in this example).

In the virtual function approach, ordinary_matrix::write calls the write function in the base class. Open (multi-) methods have a similar facility: the "next" method. Inside a specialization, next(args) calls the next most specific specialization, and returns its result if it is not void. Or, in other words, it calls the specialization that would have been called if the current specialization had not existed. This works whether the method has one or several virtual arguments.

Thus the specialization for displaying ordinary matrices prefixed with their type looks like this:

BEGIN_SPECIALIZATION(write, void, ostream& os, const ordinary_matrix& m) {
    os << "ordinary\n";
    next(os, m);
} END_SPECIALIZATION;

I can now write a small helper function to demonstrate matrix addition:

void show_sum(matrix m1, matrix m2) {
    cout << m1 << " + " << m2 << " = " << m1 + m2 << endl;
}

int main() {
    yorel::multi_methods::initialize();
    cout << setw(8) << fixed << right << setprecision(0);
    show_sum(
        ordinary_matrix::make(3, 3, { 1, 2, 3, 4, 5, 6, 7, 8, 9 }),
        ordinary_matrix::make(3, 3, { 2, 3, 4, 5, 6, 7, 8, 9, 10 }));
}

Output

ordinary
       1       2       3
       4       5       6
       7       8       9
 + ordinary
       2       3       4
       5       6       7
       8       9      10
 = ordinary
       3       5       7
       9      11      13
      15      17      19

Note the call to initialize(): it sets up all the data structures necessary to dispatch methods. initialize() must be called at the beginning of the program, and whenever classes or methods are added - typically after dynamically loading a library.

This is still unexciting, as I have only one matrix subclass so far.

Adding a New Matrix Subtype

I am going to add a new matrix subclass now, optimized for diagonal matrices. The diagonal_matrix stores only the diagonal of the matrix; we know that all the other elements are zero. In addition to the virtual functions it must override, it also has a public function that returns the diagonal.

class diagonal_matrix : public abstract_matrix {
public:
    MM_CLASS(diagonal_matrix, abstract_matrix);

    diagonal_matrix(int rows, const vector<double>& init);
    static matrix make(int rows, const vector<double>& init);

    virtual double get(int r, int c) const {
        return r == c ? data[r] : 0;
    }

    const vector<double>& elements(vector<double>&) const;
    const vector<double>& diagonal() const { return data; }

private:
    vector<double> data;
};

diagonal_matrix::diagonal_matrix(int rows, const vector<double>& init)
    : abstract_matrix(rows, rows), data(init) {
    MM_INIT();
}

matrix diagonal_matrix::make(int rows, const vector<double>& init) {
    return matrix(make_shared<diagonal_matrix>(rows, init));
}

At this point, the addition operator can already deal with diagonal matrices, using the fallback strategy of getting the elements of both matrices. That is good enough for the addition of an ordinary and a diagonal matrix, but unsatisfying if both operands are diagonal matrices: the operator will return a matrix whose content is diagonal, but which is represented as an ordinary matrix.

Solving this problem is just a matter of adding a specialization to the add method. I also specialize the write method:

BEGIN_SPECIALIZATION(add, matrix,
                     const diagonal_matrix& m1, const diagonal_matrix& m2) {
    vector<double> result(m1.diagonal().size());
    transform(m1.diagonal().begin(), m1.diagonal().end(),
              m2.diagonal().begin(), result.begin(), plus<double>());
    return diagonal_matrix::make(result.size(), result);
} END_SPECIALIZATION;

BEGIN_SPECIALIZATION(write, void, ostream& os, const diagonal_matrix& m) {
    os << "diagonal\n";
    next(os, m);
} END_SPECIALIZATION;

And indeed, when I add two diagonal matrices, I get a new diagonal matrix:

show_sum(
    ordinary_matrix::make(3, 3, { 1, 2, 3, 4, 5, 6, 7, 8, 9 }),
    diagonal_matrix::make(3, { 1, 2, 3 }));
show_sum(
    diagonal_matrix::make(3, { 1, 2, 3 }),
    diagonal_matrix::make(3, { 2, 3, 4 }));

ordinary
       1       2       3
       4       5       6
       7       8       9
 + diagonal
       1       0       0
       0       2       0
       0       0       3
 = ordinary
       2       2       3
       4       7       6
       7       8      12
diagonal
       1       0       0
       0       2       0
       0       0       3
 + diagonal
       2       0       0
       0       3       0
       0       0       4
 = diagonal
       3       0       0
       0       5       0
       0       0       7

Note that I did not need to change anything in either abstract_matrix or ordinary_matrix. Open (multi-) methods are not the only reason: I planned for extensibility all along. However, had I resorted to virtual functions (using the double dispatch pattern) to implement addition, I would have had no choice but to modify abstract_matrix.

What if a user of my matrix library needs a matrix type that I have not provided? No problem, he can add it from the outside, again because no class has any knowledge of its derived classes. For example, here is a tridiagonal matrix:

class tridiagonal_matrix : public abstract_matrix {
public:
    MM_CLASS(tridiagonal_matrix, abstract_matrix);

    tridiagonal_matrix(int rows, const vector<double>& main,
                       const vector<double>& upper, const vector<double>& lower);
    static matrix make(int rows, const vector<double>& main,
                       const vector<double>& upper, const vector<double>& lower);

    virtual double get(int r, int c) const;
    const vector<double>& elements(vector<double>&) const;

    const vector<double>& main() const { return d1; }
    const vector<double>& upper() const { return d2; }
    const vector<double>& lower() const { return d3; }

private:
    vector<double> d1, d2, d3;
};

tridiagonal_matrix::tridiagonal_matrix(int rows, const vector<double>& main,
                                       const vector<double>& upper,
                                       const vector<double>& lower)
    : abstract_matrix(rows, rows), d1(main), d2(upper), d3(lower) {
    MM_INIT();
}

matrix tridiagonal_matrix::make(int rows, const vector<double>& main,
                                const vector<double>& upper,
                                const vector<double>& lower) {
    return matrix(make_shared<tridiagonal_matrix>(rows, main, upper, lower));
}

double tridiagonal_matrix::get(int r, int c) const {
    return c == r     ? main()[r]
         : c == r + 1 ? upper()[r]
         : c == r - 1 ? lower()[c]
         : 0;
}

BEGIN_SPECIALIZATION(add, matrix,
                     const tridiagonal_matrix& m1, const tridiagonal_matrix& m2) {
    int n = m1.main().size();
    vector<double> main(n), upper(n - 1), lower(n - 1);
    transform(m1.main().begin(), m1.main().end(),
              m2.main().begin(), main.begin(), plus<double>());
    transform(m1.upper().begin(), m1.upper().end(),
              m2.upper().begin(), upper.begin(), plus<double>());
    transform(m1.lower().begin(), m1.lower().end(),
              m2.lower().begin(), lower.begin(), plus<double>());
    return tridiagonal_matrix::make(main.size(), main, upper, lower);
} END_SPECIALIZATION;

BEGIN_SPECIALIZATION(write, void, ostream& os, const tridiagonal_matrix& m) {
    os << "tridiagonal\n";
    next(os, m);
} END_SPECIALIZATION;

I probably want to specialize addition for the case where one operand is a tridiagonal matrix and the other a diagonal matrix, in which case the result is a tridiagonal matrix:

BEGIN_SPECIALIZATION(add, matrix,
                     const tridiagonal_matrix& m1, const diagonal_matrix& m2) {
    int n = m1.main().size();
    vector<double> main(n);
    transform(m1.main().begin(), m1.main().end(),
              m2.diagonal().begin(), main.begin(), plus<double>());
    return tridiagonal_matrix::make(main.size(), main, m1.upper(), m1.lower());
} END_SPECIALIZATION;

This takes care of the "tridiagonal + diagonal" case. I also want to specialize for the "diagonal + tridiagonal" case. In this specialization, I could simply call the add method again, swapping the arguments, but this would incur the cost of a method dispatch. To deal with this kind of situation, yomm11 provides a syntax for re-using the code from another specialization:

BEGIN_SPECIALIZATION(add, matrix,
                     const diagonal_matrix& m1, const tridiagonal_matrix& m2) {
    return GET_SPECIALIZATION(add, matrix,
                              const tridiagonal_matrix&,
                              const diagonal_matrix&)(m2, m1);
} END_SPECIALIZATION;

For this to work, the first specialization must be visible, because its code will be inlined inside the second specialization (if the compiler deigns to honor the inline directive, that is).

Let's run a couple of examples:

show_sum(
    tridiagonal_matrix::make(4, { 1, 2, 3, 4 }, { 2, 3, 4 }, { 3, 4, 5 }),
    tridiagonal_matrix::make(4, { 0, 1, 2, 3 }, { 1, 2, 3 }, { 2, 3, 4 }));
show_sum(
    diagonal_matrix::make(4, { 1, 2, 3, 4 }),
    tridiagonal_matrix::make(4, { 1, 2, 3, 4 }, { 2, 3, 4 }, { 3, 4, 5 }));

tridiagonal
       1       2       0       0
       3       2       3       0
       0       4       3       4
       0       0       5       4
 + tridiagonal
       0       1       0       0
       2       1       2       0
       0       3       2       3
       0       0       4       3
 = tridiagonal
       1       3       0       0
       5       3       5       0
       0       7       5       7
       0       0       9       7
diagonal
       1       0       0       0
       0       2       0       0
       0       0       3       0
       0       0       0       4
 + tridiagonal
       1       2       0       0
       3       2       3       0
       0       4       3       4
       0       0       5       4
 = tridiagonal
       2       2       0       0
       3       4       3       0
       0       4       6       4
       0       0       5       8

Ambiguities

I could continue like that and add more matrix subtypes and more specializations. I must be careful, however, to keep the specializations free of ambiguities. Imagine that I add a sparse matrix subtype and this set of specializations:

BEGIN_SPECIALIZATION(add, matrix,
                     const sparse_matrix& m1, const abstract_matrix& m2) {
    return ...;
} END_SPECIALIZATION;

BEGIN_SPECIALIZATION(add, matrix,
                     const abstract_matrix& m1, const diagonal_matrix& m2) {
    return ...;
} END_SPECIALIZATION;

I now try to evaluate:

sparse_matrix::make(3, 3, ...) + diagonal_matrix::make(3, { 1, 2, 3 });

terminate called after throwing an instance of 'yorel::multi_methods::ambiguous'
  what():  multi-method call is ambiguous for these arguments
Aborted (core dumped) ./matrix

Here the add method hesitates between two specializations. The first one is more specific for the first argument (sparse_matrix is a subtype of abstract_matrix). The second one is more specific for the second argument (diagonal_matrix is a subtype of abstract_matrix). In this situation, the dispatcher cannot find a single specialization that is more specific than all the others. It thus throws an exception.

The problem is in fact very similar to ambiguous calls to overloaded functions or to conflicting partial template specializations. So is the solution: providing a specialization that is more specific than both conflicting specializations lifts the ambiguity:

BEGIN_SPECIALIZATION(add, matrix,
                     const sparse_matrix& m1, const diagonal_matrix& m2) {
    return ...;
} END_SPECIALIZATION;

Note that calling next inside this specialization would cause an ambiguity exception.

Compiling and Running the Example

First grab a copy of yomm11 from GitHub, either by downloading the distribution or by cloning the repository. The source code of the matrix example is available from CodeProject, see the link at the top of this article.

You can now build the example with the following command:

g++ -std=c++11 -I ~/yomm11/include matrix.cpp ~/yomm11/src/multi_methods.cpp -o matrix

(replace ~/yomm11 with the directory where you unpacked the library). You can also use clang++.

If you like what you see, then you probably want to build and install the library on your system. It is a straightforward process documented here. You can now build the example with the following command:

g++ -std=c++11 matrix.cpp -lyomm11 -o matrix

Conclusion

The yomm11 library provides an implementation of open multi-methods that is both very efficient and easy to use. Both words - "open" and "multi" - are important. It is very useful, in some cases, to take into account the dynamic types of several arguments while dispatching a call. However, polymorphism without class invasion may be an even more important feature. I think that, in many cases, open methods with a single virtual argument are preferable to virtual functions. Open methods improve modularity. They do not violate encapsulation gratuitously. Finally, the little extra work required to make a class hierarchy support efficient yomm11 method dispatch is rewarded by unbounded extensibility.

Next time, I will begin to explain how yomm11 works, starting with the data structures and the algorithms used to fill them.