I highly recommend reading Justin Fagnani’s “Real” Mixins with JavaScript Classes. To summarize my understanding, Justin likes using “mixins,” but takes issue with the way they are implemented as described in things like Using ES7 Decorators as Mixins.

Justin wants to be able to have a fully open many-to-many relationship between meta-objects and objects.

Justin also wants to have mixins be much more equivalent to classes, especially with respect to being able to override a mixin’s method, and to be able to invoke the mixin’s original definition within an overridden method, just as you can invoke a superclass’s definition of a method from within a class’s method.

Finally, Justin wants to create code that existing engines can optimize easily, and avoid changing the “shape” of prototypes.

One of the things I like the most about Justin’s article is that it shines a light on two longstanding debates in OOP, both going back at least as far as Smalltalk. The first is about deep class hierarchies. My opinion can be expressed in three words: Don’t do that! Just about everyone agrees that flattened hierarchies are superior to deep hierarchies, especially when the deep hierarchies are an accidental complexity created by trying to fake a many-to-many relationship using a tree.

The second debate is more subtle, and it concerns overriding methods. It’s a massive oversimplification to suggest that there are only two sides to that debate, but for the purpose of this discussion, there are two different OOP tribes. One of them is called virtual-by-default, and the other is called final-by-default.

virtual-by-default

In languages like Smalltalk and almost every other “dynamically typed” OO descendant, including JavaScript, you can override any method at any level in the class hierarchy. In languages like JavaScript and Ruby, you can even override a method within a single object.
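For example (the class and method names here are invented for illustration), a per-object override in JavaScript looks like this:

```javascript
class Cat {
  speak () { return "meow"; }
}

const ordinary = new Cat();
const weird = new Cat();

// Override speak for this one object only: the own property
// shadows Cat.prototype.speak for `weird`, and no other instance.
weird.speak = function () { return "woof?"; };

console.log(ordinary.speak()); // "meow"
console.log(weird.speak());    // "woof?"
```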

When a method is invoked on an object, the most specific version of the method is invoked. The other versions remain available in various ways, from denoting them by absolute name (e.g. SomeSuperclassName.prototype.foo.call(this, 'bar', 'baz')) to using a magic keyword, super: super('bar', 'baz') in most languages, or super.baz('bar', 'baz') in JavaScript ES6.
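Both routes can be sketched in plain ES6 (the class names are invented for illustration):

```javascript
class Base {
  greet () { return "hello"; }
}

class Loud extends Base {
  greet () {
    // ES6 super: dispatches to the method this one overrides.
    return super.greet().toUpperCase();
  }
  greetByName () {
    // Same effect, denoting the superclass method by absolute name.
    return Base.prototype.greet.call(this).toUpperCase();
  }
}

const l = new Loud();
console.log(l.greet());       // "HELLO"
console.log(l.greetByName()); // "HELLO"
```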

The canonical name for this is Dynamic Dispatch, because the method invocation is dynamically dispatched to the most appropriate method implementation. Such methods or functions are often called virtual functions, and thus a language where methods are automatically virtual is called “virtual-by-default.”

JavaScript out of the box is very definitely virtual-by-default. The technical opposite of a virtual-by-default language is a static-by-default language. In a static-by-default language, no matter whether the function is overridden or not, the implementation to be used is chosen at compile time based on the declared class of the receiver.

For example, making up our own little JavaScript flavour that has manifest typing:

```
class Foo {
  toString () { return "foo"; }
}

class Bar extends Foo {
  toString () { return "bar"; }
}

Foo f = new Bar();
console.log(f.toString());
```

In a virtual-by-default language, the console logs bar. In a static-by-default language, the console logs foo, because even though the object f is a Bar, it is declared as a Foo, and the compiler translates f.toString() into roughly Foo.prototype.toString.call(f).
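Real JavaScript has no manifest typing, but we can demonstrate both behaviours directly: dynamic dispatch is what you get by default, and static dispatch can be simulated by naming the implementation explicitly:

```javascript
class Foo {
  toString () { return "foo"; }
}

class Bar extends Foo {
  toString () { return "bar"; }
}

// No declared types: which implementation runs depends only on the
// actual object, never on how a variable was declared.
const f = new Bar();
console.log(f.toString()); // "bar": dynamic dispatch picks Bar's method

// Static dispatch, simulated by hand with an absolute name:
console.log(Foo.prototype.toString.call(f)); // "foo"
```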

C++ is a static-by-default language. If you want dynamic dispatching, you use a special virtual keyword to indicate which functions should be dynamically dispatched. If our hypothetical JavaScript flavour was static-by-default and we wanted toString() to be a virtual function, we would write:

```
class Foo {
  virtual toString () { return "foo"; }
}

class Bar extends Foo {
  virtual toString () { return "bar"; }
}

Foo f = new Bar();
console.log(f.toString());
```

After much experience with errors from forgetting to use the virtual keyword, most programming languages abandoned static-by-default and went with virtual-by-default.

final-by-default

If most languages are settled on virtual-by-default, how can there be another tribe? Well, the static-by-default people had two excellent reasons for liking static dispatch. The first was speed, and they loved speed. But as things got faster, that implementation consideration became less and less persuasive.

But there was another argument, a semantic argument. The argument was this. If we write:

```
class Foo {
  toString () { return "foo"; }
}
```

We are defining Foo to be:

- A class
- That has a method, toString
- That returns "foo"

Everyone agrees on the first two points, but OO programmers are split on the third. Some say that a Foo is defined to return "foo". Others say that it returns "foo" by default, but any subclass of Foo can override this, so it could return anything, raise an exception, or erase your hard drive and email 419 scam letters to everyone in your contacts. You can’t tell unless you examine each individual object that happens to be declared a Foo and see how it actually behaves.

When the Java language was released, it was virtual-by-default, but it didn’t ignore this question. Java introduced the final keyword. When a method was declared final , it was illegal to override it, and if you tried, you got a compiler error.

If our imaginary JavaScript dialect worked this way, this code would not compile at all:

```
class Foo {
  final toString () { return "foo"; }
}

class Bar extends Foo {
  toString () { return "bar"; }
}
  //=> Error: Method toString of superclass Foo is final and cannot be overridden.
```

In Java, final was the way you wrote: “This class has a method, and you can be sure that all subclasses implement this method in this way.” In Java, final was optional. So whether by intent or by sheer laziness, most Java methods in the wild are not final.

But many people felt that, as with C++, the designers got it backwards. They felt that by default, all methods should be final. The special treatment should be for virtual methods, not for final methods.

If our dialect worked like that, all methods would be final, but if we wanted to allow a method to be overridden, we would use a special keyword, like default:

```
class Foo {
  toString () { return "foo"; }
}

class Bar extends Foo {
  toString () { return "bar"; }
}
  //=> Error: Method toString of superclass Foo is final and cannot be overridden.
```

```
class Foo {
  default toString () { return "foo"; }
}

class Bar extends Foo {
  toString () { return "bar"; }
}
  //=> No errors, because Foo#toString is a default method.
```

In essence, the “final-by-default” tribe believe that methods can override each other, but that it should be rare, not common. So when a final-by-default programmer writes this code:

```
class Foo {
  toString () { return "foo"; }
}
```

They are defining the behaviour of all instances of Foo. If they need to override toString later, they will come back and declare it to be default toString () { ... }. That makes it easier to reason about the behaviour of the code, because when you look at the code for Foo, you know what a Foo is and does, not what it might do but nobody-really-knows-without-reading-every-possible-subclass.

We can think of final-by-default as the paranoid fringe of the virtual-by-default tribe.

The “virtual-by-default” tribe are not impressed. They ask, “if you can’t override, what makes you think you have polymorphism?” Of course, you can have two different subclasses each implement the same method without one overriding the other. And with “dynamic” languages and duck typing, you can have completely different classes implement the same “interface” without any overriding whatsoever. Or you can do all kinds of monkeying about with private methods but always expose the same public behaviour.

In the end, the “final-by-default” people are just as OO as the “virtual-by-default” people, but they spend a lot more time trying to keep their inheritance hierarchies “clean.”

the academic basis for final-by-default

Overriding methods is often taught as a central plank of OOP. So why would there be a hardy band of dissenting final-by-default people?

The problem final-by-default tries to solve is upholding the Liskov Substitution Principle, or “LSP.” It states that if a Bar is-a Foo, you ought to be able to take any piece of code that works for a Foo, substitute a Bar, and have it just work.

Overriding public methods is the easiest way to break LSP. Not always, of course. If you have a HashMap, you might override the implementation of its [] and []= methods in such a way that it has the exact same external behaviour.
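JavaScript has no HashMap with [] and []= (those names are Ruby-flavoured), but the same idea can be sketched with Map’s get and set: the subclass overrides both methods yet preserves a Map’s external behaviour, merely counting accesses on the side, so LSP holds.

```javascript
// CountingMap overrides get and set, but any code written for a Map
// still works, unchanged, when handed a CountingMap.
class CountingMap extends Map {
  constructor () {
    super();
    this.reads = 0;
    this.writes = 0;
  }
  get (key) {
    this.reads += 1;
    return super.get(key);
  }
  set (key, value) {
    this.writes += 1;
    return super.set(key, value); // Map#set returns the map, as before
  }
}

const m = new CountingMap();
m.set("answer", 42);
console.log(m.get("answer")); // 42, exactly as a plain Map would behave
```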

But in general, if you treat methods as “defaults, open to overriding in any arbitrary way,” you are abandoning LSP. Is that a bad thing? Well, many people feel that it makes object-oriented programs very difficult to reason about. Or in plain English, prone to defects.

Another principle you will hear discussed in this vein is called the Open-Closed Principle: “Software entities (classes, modules, functions, etc.) should be open for extension, but closed for modification.”

In our examples above, overriding toString in Bar modifies the definition of Foo , because it changes the definition of the behaviour of objects that are instances of Foo . Whereas, if we write:

```javascript
class Bar extends Foo {
  toArray () {
    return this.toString().split('');
  }
}
```

Now we are extending Foo for those objects that are both a Foo and a Bar , but not modifying the definition of Foo .
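Made runnable (with a minimal Foo filled in), the extension leaves Foo’s behaviour untouched:

```javascript
class Foo {
  toString () { return "foo"; }
}

// Bar extends Foo with a new method, without touching toString:
// code that works with a Foo still works, unchanged, with a Bar.
class Bar extends Foo {
  toArray () { return this.toString().split(''); }
}

console.log(new Bar().toArray());  // ["f", "o", "o"]
console.log(new Bar().toString()); // still "foo": Foo's definition is unmodified
```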

The “final-by-default” tribe of OO programmers like their programs to conform to LSP and Open/Closed. This makes them nervous of language features that encourage overriding methods.

mixins and final-by-default

If you’re a member of the final-by-default tribe, you don’t want a lot of overriding of methods. You don’t want mixins to blindly copy over an existing prototype’s methods, just as you don’t want a class’s methods to willy-nilly override a superclass’s methods or a mixin’s methods.

If you’re a member of the final-by-default tribe, every time you see the super keyword, you stare at it long and hard, and work out the tradeoff of convenience now versus potential for bugs down the road.

If you’re a member of the final-by-default tribe, your mixin implementation will throw an error if a mixin and a class’s method clash:

```javascript
let HappyObjects = final_by_default_mixin({
  toString () { return "I'm a happy object!"; }
});

@HappyObjects
class Foo {
  toString () { return "foo"; }
}
  //=> Error: HappyObjects and Foo both define toString
```

Members of the final-by-default tribe want HappyObjects to describe all happy objects, and Foo to define all instances of Foo . Blindly copying methods won’t protect against naming clashes like this.
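The final_by_default_mixin function above isn’t from any particular library; a minimal sketch of its clash-checking behaviour, written as a plain function rather than a decorator, might look like this:

```javascript
// A clash-detecting mixin: returns a class decorator that copies the
// mixin's methods onto a class's prototype, but throws if the class
// already defines any of them. (The name and policy are this article's,
// not a library's.)
function final_by_default_mixin (behaviour) {
  return function (clazz) {
    for (const name of Object.getOwnPropertyNames(behaviour)) {
      if (Object.prototype.hasOwnProperty.call(clazz.prototype, name)) {
        throw new Error(`mixin and ${clazz.name} both define ${name}`);
      }
      Object.defineProperty(clazz.prototype, name,
        Object.getOwnPropertyDescriptor(behaviour, name));
    }
    return clazz;
  };
}

const HappyObjects = final_by_default_mixin({
  toString () { return "I'm a happy object!"; }
});

class Plain {}
HappyObjects(Plain); // no clash, so the method is copied over
console.log(new Plain().toString()); // "I'm a happy object!"

class Foo { toString () { return "foo"; } }
try {
  HappyObjects(Foo);
} catch (e) {
  console.log(e.message); // "mixin and Foo both define toString"
}
```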

Of course, setting mixins up as subclass factories won’t do that either. With subclass factories, we would actually write something like:

```javascript
let HappyObjects = subclass_factory_mixin({
  toString () { return 'happy'; }
});

class HappyFoo extends HappyObjects(Object) {
  toString () { return `${super.toString()} foo`; }
}

let f = new HappyFoo();
f.toString()
  //=> happy foo
```

With a subclass factory, you have everything virtual-by-default and overridable-by-default. Which is fine if you aren’t a member of the final-by-default tribe.
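Again, subclass_factory_mixin is not a library function; a minimal sketch under that assumption might be:

```javascript
// Given some behaviour, return a function from superclass to a new
// subclass carrying that behaviour, so that overriding and super both
// work exactly as they do for ordinary classes.
function subclass_factory_mixin (behaviour) {
  return function (superclass) {
    const mixed = class extends superclass {};
    for (const name of Object.getOwnPropertyNames(behaviour)) {
      Object.defineProperty(mixed.prototype, name,
        Object.getOwnPropertyDescriptor(behaviour, name));
    }
    return mixed;
  };
}

const HappyObjects = subclass_factory_mixin({
  toString () { return "happy"; }
});

class HappyFoo extends HappyObjects(Object) {
  toString () { return `${super.toString()} foo`; }
}

console.log(new HappyFoo().toString()); // "happy foo"
```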

there has to be a catch

So, if there are these fancy “Liskov Substitution Principles” and “Open/Closed Principles” arguing for not encouraging overriding methods, what is the catch? Why doesn’t everyone program this way?

Well, convenience. If you can’t override methods (because that modifies the meaning of the superclass or mixin), you need to do something else when you want to extend the behaviour of a superclass or mixin. For example, if you want the mixin for implementation convenience but aren’t trying to imply that a Foo is-a HappyObject , you would use delegation, like this:

```javascript
class HappyObjects {
  toString () { return 'happy'; }
}

class HappyFoo {
  constructor () {
    this.happiness = new HappyObjects();
  }
  toString () {
    return `${this.happiness.toString()} foo`;
  }
}

let f = new HappyFoo();
f.toString()
  //=> happy foo
```

A HappyFoo delegates part of its behaviour to an instance of HappyObjects that it owns. Some people find this kind of thing more trouble than it’s worth, no matter how many times they hear grizzled veterans intoning “Prefer Composition Over Inheritance.”

Another technique that final-by-default tribe members use is to focus on extending superclass methods rather than replacing them outright. Method Advice can help. In the Ruby on Rails framework, for example, you can add behaviour to existing methods that is run before, after, or around methods, without overriding the methods themselves.

In this example, decorators add behaviour to methods that could be inherited from a superclass or mixed in:

```javascript
@before(validatePersonhood, 'setName', 'setAge', 'age')
@before(mustBeLoggedIn, 'fullName')
class User extends Person {
  // ...
}
```

Using method advice adds some semantic complexity in terms of learning what decorators like before or after might do, but encourages writing code where behaviour is extended rather than overridden. On larger and more complicated code bases, this can be a win.
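Decorator syntax aside (it was still a proposal), a minimal sketch of before advice as a plain function shows the idea: the advised method is extended, never replaced outright. The class and advice names below are invented for illustration.

```javascript
// Wrap the named methods so the advice runs first, then the original
// method runs exactly as before.
function before (advice, ...methodNames) {
  return function (clazz) {
    for (const name of methodNames) {
      const original = clazz.prototype[name];
      clazz.prototype[name] = function (...args) {
        advice.apply(this, args);      // run the advice first
        return original.apply(this, args);
      };
    }
    return clazz;
  };
}

class Counter {
  constructor () { this.count = 0; this.log = []; }
  increment () { this.count += 1; return this.count; }
}

before(function () { this.log.push("about to increment"); }, "increment")(Counter);

const c = new Counter();
c.increment();
console.log(c.count); // 1
console.log(c.log);   // ["about to increment"]
```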

People have also investigated other ways of composing metaobjects. One promising direction is traits: A trait is like a mixin, but when it is applied, there is a name resolution policy that determines whether conflicting names should override or act like method advice.

Traits are very much from the “final by default” school, but instead of simply preventing name overriding and leaving it up to the programmer to find another way forward, traits provide mechanisms for composing both metaobjects (like classes and mixins) as well as the methods they define.
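Traits aren’t built into JavaScript, but the name-resolution idea can be sketched (the function name and resolution policy here are invented for illustration). Where a plain mixin would silently copy, composing a trait demands that each conflicting name be resolved explicitly:

```javascript
// Apply a trait to a class. A conflicting name must appear in the
// resolutions map, which tells us what to rename the trait's method to;
// an unresolved conflict is an error rather than a silent override.
function applyTrait (clazz, trait, resolutions = {}) {
  const hasOwn = (obj, name) => Object.prototype.hasOwnProperty.call(obj, name);
  for (const name of Object.getOwnPropertyNames(trait)) {
    const clash = hasOwn(clazz.prototype, name);
    if (clash && !hasOwn(resolutions, name)) {
      throw new Error(`unresolved conflict on ${name}`);
    }
    const targetName = clash ? resolutions[name] : name;
    Object.defineProperty(clazz.prototype, targetName,
      Object.getOwnPropertyDescriptor(trait, name));
  }
  return clazz;
}

const Happiness = {
  toString () { return "happy"; }
};

class Foo { toString () { return "foo"; } }

// Resolve the clash explicitly: the trait's toString is kept as `mood`.
applyTrait(Foo, Happiness, { toString: "mood" });

const f = new Foo();
console.log(f.toString()); // "foo"
console.log(f.mood());     // "happy"
```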

is super() considered hmmm-ful?

In JavaScript ES6, we can write super.baz() within a method baz to denote that we wish to invoke the method it overrides. Overriding any arbitrary method and calling the overridden method when and how you like obviously provides maximum flexibility and convenience. It’s characteristic of the virtual-by-default mindset: Everything can be overridden, in any arbitrary way.

Replacing super.baz() with method advice, for example, requires careful design, but offers an easier way to reason about the code: Looking at a Foo class, you can have confidence that instances of Foo might extend its methods, but you will have a higher degree of confidence about how they will behave.

The “Liskov Substitution” and “Open/Closed” principles are guidelines for writing software that is extensible and maintainable, just as “Prefer Composition over Inheritance” expresses a preference, not an ironclad rule to never inherit when you could compose.

So: Is super() considered harmful? No. Like anything else, it depends upon how you use it. Pragmatically, we shouldn’t reject all uses of super.baz() (or as noted, super() in other languages). But we can always stop for a moment and ask ourselves if it’s the best way to accomplish a particular objective. And we ought to understand the alternatives available to us.
