This article explains what ad-hoc polymorphism is, what kind of problems it solves and how to implement the whole thing using the type class pattern.

Types of polymorphism

We’ll start with parametric polymorphism. Say we have a list of items; this could be a list of integers, doubles, strings, whatever. Now consider a method head() which returns the first item from that list. This method doesn’t care whether the item is of type Int, String, Apple or Orange. Its return type is the one the list is parameterized with, and its implementation is the same for all types: “return the first item”.

Unlike parametric polymorphism, ad-hoc polymorphism is bound to a type. Depending on the type, different implementations of the method are invoked. Method overloading is one example of ad-hoc polymorphism. For example, you can have two versions of a method that appends two items — one that takes two integers and adds them, and one that takes two strings and concatenates them. You know, 2 plus 3 is 5, but “2” plus “3” is “23”.
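As a quick sketch (the method name append is ours, purely for illustration), overloading looks like this:

```scala
// Two overloads of the same method name; the compiler picks
// the implementation at compile time, based on the argument types:
def append(a: Int, b: Int): Int = a + b
def append(a: String, b: String): String = a + b

val n = append(2, 3)     // 5
val s = append("2", "3") // "23"
```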

There’s also a third kind of polymorphism, subtyping polymorphism, in which subclasses provide different implementations of some superclass method. Unlike ad-hoc polymorphism, where the decision about which implementation to invoke is made at compile time, in subtyping polymorphism it is made at run time (in the case of parametric polymorphism there is just one implementation, so no decision needs to be made at all).
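A minimal sketch of that run-time dispatch (the class names are just illustrative):

```scala
// Subtyping polymorphism: the implementation is chosen at run time,
// via dynamic dispatch, based on the object's actual class:
abstract class Animal { def sound: String }
class Duck extends Animal { override def sound = "quack" }
class Dog extends Animal { override def sound = "woof" }

val animal: Animal = new Duck
val heard = animal.sound // "quack", even though the static type is Animal
```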

OK, let’s take a closer look at ad-hoc polymorphism and leave the other two for some other time. Now, I already mentioned overloading as one way to achieve ad-hoc polymorphism; each “version” of the method takes different parameters, and when the method is invoked the correct implementation is chosen based on the types of the provided parameters. But let’s say we have a different scenario — let’s say we want to have just one method (we can call it appendItems()) and we want this method to take two “appendable” items. If invoked with integers, they should be appended using addition. If invoked with strings, they should be appended using concatenation. We could also implement it for various other types, but for our example Int and String will be enough.

How to append ducks

So, we want the appendItems() method to take two instances of something “appendable” and perform the append operation upon them. We also want this operation to have different implementations for different appendables. For integers append means addition and for strings it means concatenation. This is ad-hoc polymorphism at its best.

Note that our method should have just one implementation — no overloading or overriding! So how can it perform different operations for different types? Well, the idea is that appendItems() should not even know how the append operation is implemented; it should just invoke it. Here’s the method:

def appendItems[A](a: A, b: A) = a append b

This is the tricky part — we need to somehow have an append operation available on integers and strings. How? Well, in the eyes of our appendItems() method, they will not be integers and strings. They will be something appendable. It’s basically a case of duck-typing; if it walks like a duck and it quacks like a duck, as far as we’re concerned — it’s a duck. We don’t care what it really is, all we care about is that it’s able to quack. That’s what we’ll do here, just instead of our values being able to quack, we want them to be able to append.

Of course, the method above doesn’t compile since our generic type A has no method append(). So what do we do? I will explain two approaches to solving that problem — implicit conversions and type classes.

Implicit conversions

To assure the compiler that A will really be appendable, we can use an implicit parameter, a common mechanism in Scala, to provide a conversion:

def appendItems[A](a: A, b: A)(implicit ev: A => Appendable[A]) =
  a append b

This method says “I take two items of type A and I also need an implicit conversion from A to Appendable[A] to be available within scope”. We named this implicit conversion “ev” as short for “evidence”, which is one of many common terms for an implicit parameter in scenarios like this (another quite common one is “witness”).

We sort of “promised” the compiler that a conversion from A to Appendable[A] will be available in scope when appendItems() is invoked. The compiler responds: “OK, I will allow you to invoke append() on a value of type A because you promised that there will be an implicit conversion in scope, and I’m counting on that”. If there isn’t one at compile time, the compiler will be mad because we broke our promise, and compilation will fail with an error saying “No implicit view available”. If you’re not familiar with where Scala searches for implicits, what the search scope is and what takes precedence over what, I would highly recommend going through some materials on that (chapter 21 in Programming in Scala does a good job; you can also take a look here).

OK, we have the appendItems() method. Let’s move to the Appendable type.

First of all, Appendable is going to be a trait. It cannot be an ordinary class because append() needs to be abstract (it will be implemented separately for Int and String). It could be an abstract class, but the rule of thumb is to prefer a trait and reach for an abstract class only if there is a valid reason for it (e.g. we need to take some parameters in the constructor, some Java code will inherit from it, efficiency is extremely important etc.).

Secondly, Appendable will be parameterized — it will have a type parameter. Why? Well, because its append() operation (not to be confused with our appendItems() method!) is generic; it is going to be implemented differently for different types, in our case as addition for integers and concatenation for strings. Since the method append() depends on the type, the whole trait depends on the type. Note that if it didn’t depend on the type this would be a case of parametric polymorphism, but since it does, it’s a case of ad-hoc polymorphism.

So, here’s our nice little type-dependent trait:

trait Appendable[A] {
  def append(a: A): A
}

Now that we have defined the trait, let’s write those two different implementations of it, one for Int and another one for String. Here they are:

class AppendableInt(i: Int) extends Appendable[Int] {
  override def append(a: Int) = i + a
}

class AppendableString(s: String) extends Appendable[String] {
  override def append(a: String) = s.concat(a)
}

Method concat is here for showcase purposes; I could have simply (and more commonly) written s + a, which would concatenate them just as well, but I deliberately wanted it to differ from AppendableInt to emphasize that the implementation is specific to each type.

All that’s left to do now are the implicit conversions that we promised:

implicit def toAppendable(i: Int) = new AppendableInt(i)
implicit def toAppendable(s: String) = new AppendableString(s)

That was easy, wasn’t it? Now we can pass pure integers and strings to appendItems() and everything will type check. Here’s the full code that you can copy/paste and try out:

import scala.language.implicitConversions // silences the feature warning for implicit conversions

trait Appendable[A] {
  def append(a: A): A
}

class AppendableInt(i: Int) extends Appendable[Int] {
  override def append(a: Int) = i + a
}

class AppendableString(s: String) extends Appendable[String] {
  override def append(a: String) = s.concat(a)
}

implicit def toAppendable(i: Int) = new AppendableInt(i)
implicit def toAppendable(s: String) = new AppendableString(s)

def appendItems[A](a: A, b: A)(implicit ev: A => Appendable[A]) =
  a append b

val res1 = appendItems(2, 3) // res1 is an Int with value 5
val res2 = appendItems("2", "3") // res2 is a String with value "23"

A quick recap of what we did: we have a method appendItems() that takes two items of type A and appends them using append(). Even though we expect A to be Int or String (which know nothing about any append() method), everything will be fine because we provided an implicit conversion as evidence. We “promised” the compiler that A will be convertible to Appendable. We keep our promise by defining two conversion methods called toAppendable, one from Int to Appendable[Int] and another one from String to Appendable[String]. The compiler is happy to see that we really did provide appendItems() with an implicit conversion, so the whole thing works out perfectly and we are able to append our basic types Int and String with just one method, even though they don’t extend any common trait.

Type classes

Implicit conversions were fun. Type classes are even more fun — they are more flexible and therefore more powerful. But instead of taking my word for it, read on and see for yourself.

Type classes are a concept originating from Haskell. The easiest way to describe them in terms of what we have learned so far is that instead of implicitly converting to the classes AppendableInt and AppendableString, we’ll make instances of those classes themselves implicit. This means that we need to change our appendItems() method a little. It still takes two items to be appended, but instead of requiring an implicit conversion, it now requires an implicit class instance:

def appendItems[A](a: A, b: A)(implicit ev: Appendable[A]) =
  ev.append(a, b)

Unlike in Haskell, type classes are not a built-in structure in Scala and they need to be modeled (similar to monads in my previous blog post). We usually do it by packing together a trait which defines the type class interface and implementations of that interface for various concrete types. Let me provide you with two dozen lines of code which demonstrate that concept in its full glory:

trait Appendable[A] {
  def append(a: A, b: A): A
}

object Appendable {
  implicit val appendableInt = new Appendable[Int] {
    override def append(a: Int, b: Int) = a + b
  }
  implicit val appendableString = new Appendable[String] {
    override def append(a: String, b: String) = a.concat(b)
  }
}

def appendItems[A](a: A, b: A)(implicit ev: Appendable[A]) =
  ev.append(a, b)

val res1 = appendItems(2, 3) // returns 5
val res2 = appendItems("2", "3") // returns "23"

See? We have a trait Appendable[A] and we have two instances of Appendable, one for Int and one for String.

One of the cool things about type classes is that it’s easy to extend libraries without touching existing code. For example, if you want to support some types other than Int and String, you just need to provide the implicit values for those types. You can import them from somewhere or define them yourself. This way you can not only provide implementations for some new types, but also override implementations for existing types, e.g. to append integers using multiplication instead of addition.
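For instance, here is a sketch of adding support for a type the original author never covered — appending booleans via logical AND is an arbitrary choice on our part, just for illustration:

```scala
// The type class and the generic method from the article:
trait Appendable[A] {
  def append(a: A, b: A): A
}

def appendItems[A](a: A, b: A)(implicit ev: Appendable[A]) =
  ev.append(a, b)

// Our own instance for a new type; no existing code had to change:
implicit val appendableBoolean: Appendable[Boolean] = new Appendable[Boolean] {
  override def append(a: Boolean, b: Boolean) = a && b
}

val res = appendItems(true, false) // returns false
```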

So when writing your own type class, it’s a nice convention to provide some initially supported default types in the type class companion object (it’s a convention, not a strict rule). Other users of your type class can then choose to use those default ones by importing the type class object or some other ones, either provided by someone else or themselves.

Here’s an example of overriding an implementation for an existing type, Int:

trait Appendable[A] {
  def append(a: A, b: A): A
}

object Appendable {
  implicit val appendableInt = new Appendable[Int] {
    override def append(a: Int, b: Int) = a + b
  }
  implicit val appendableString = new Appendable[String] {
    override def append(a: String, b: String) = a.concat(b)
  }
}

implicit val appendableInt2 = new Appendable[Int] {
  override def append(a: Int, b: Int) = a * b
}

def appendItems[A](a: A, b: A)(implicit ev: Appendable[A]) =
  ev.append(a, b)

val res1 = appendItems(2, 3) // returns 6

Keep in mind that implicits are here to make your life easier. The whole thing could be done without them — for example, we could omit the “implicit” keyword from appendableInt2 and pass that object explicitly as appendItems(2, 3)(appendableInt2). Using implicits makes things simpler, as long as you use them reasonably (having tons of implicits all around the place would be pretty messy and it would be hard to keep track of which exact implicit is being invoked at which place).
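Here’s that explicit style as a runnable sketch (the instance names are ours):

```scala
trait Appendable[A] {
  def append(a: A, b: A): A
}

// Two plain (non-implicit) instances for the same type:
val appendableAdd = new Appendable[Int] {
  override def append(a: Int, b: Int) = a + b
}
val appendableMul = new Appendable[Int] {
  override def append(a: Int, b: Int) = a * b
}

def appendItems[A](a: A, b: A)(implicit ev: Appendable[A]) =
  ev.append(a, b)

// An implicit parameter can always be passed explicitly:
val sum = appendItems(2, 3)(appendableAdd) // returns 5
val product = appendItems(2, 3)(appendableMul) // returns 6
```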

So, to summarize:

A type class is the concept of having a type-dependent interface and implicit implementations of that interface, with a separate implementation for each supported type. In Scala the best way to model this is to use a parameterized trait and to put implicit implementations of that trait for supported types in the trait’s companion object (instead of implicit values you can also use implicit objects, but why pollute the namespace with extra types if you don’t have to). Type classes make it easy to add implementations for new types or to override implementations for existing types. Since these implementations are implicit, one must take into account the precedence of implicits in Scala when reasoning about which one will actually be used.
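For completeness, here is a sketch of the implicit object variant mentioned above:

```scala
trait Appendable[A] {
  def append(a: A, b: A): A
}

object Appendable {
  // An implicit object instead of an implicit val; same effect, but it
  // introduces an extra type (the object's singleton type):
  implicit object AppendableInt extends Appendable[Int] {
    override def append(a: Int, b: Int) = a + b
  }
}

def appendItems[A](a: A, b: A)(implicit ev: Appendable[A]) =
  ev.append(a, b)

val res = appendItems(2, 3) // returns 5
```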

View and context bounds

Ad-hoc polymorphism has been explained and implemented with two different approaches: implicit conversions and type classes. Now, there are two constructs in Scala, namely view bounds and context bounds, that provide some extra syntax sugar for working with those two approaches.

Let’s start from view bounds. Here’s how we could re-write our appendItems() method in implicit conversions scenario:

// def appendItems[A](a: A, b: A)(implicit ev: A => Appendable[A]) =
//   a append b

def appendItems[A <% Appendable[A]](a: A, b: A) = a append b

See that [A <% Appendable[A]]? That’s a view bound. It is interpreted as “method appendItems() is parameterized with A and it requires that there must be an implicit conversion from A to Appendable[A] available in scope”. This is the same behavior as before, just another syntax.

Now the context bound. Here’s how we could re-write our appendItems() method in type class scenario:

// def appendItems[A](a: A, b: A)(implicit ev: Appendable[A]) =
//   ev.append(a, b)

def appendItems[A : Appendable](a: A, b: A) =
  implicitly[Appendable[A]].append(a, b)

This time we parameterized with [A : Appendable], which is interpreted as “method appendItems() is parameterized with A and it requires that there must be an implicit value of type Appendable[A] available in scope”. Again, the same behavior as in the original type class example, just different syntax. Note that we have one extra bit of new syntax in the method body — implicitly. It is a method from the standard library (Predef) used to reference the instance of Appendable[A] that has been provided (since we no longer have “ev”).

Note that the context bound construct is not strictly tied to type classes. It simply requires, given some type A, that an object or value of type Appendable[A] be implicitly available at the point of invocation. It doesn’t care whether we have some common parameterized trait or not. So, context bounds can come in handy when working with type classes, but you shouldn’t view them as type class syntax sugar; they’re a bit more general and not tied just to the type class pattern.
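For example, a context bound works just as well with the standard library’s Ordering, with no type class of our own in sight:

```scala
// A context bound over the standard library's Ordering type class;
// [A : Ordering] desugars to an extra (implicit ev: Ordering[A]) parameter:
def largest[A : Ordering](a: A, b: A): A =
  implicitly[Ordering[A]].max(a, b)

val i = largest(2, 3) // returns 3
val s = largest("a", "b") // returns "b"
```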

The big question — who wins?

Actually, I already spoiled the answer to that question by mentioning that type classes provide us with more flexibility and power. If you paid enough attention, you may have even realized that yourself by now (hint: a type class can have *multiple* implementations for the same type). But let’s back that up with a real-world showcase.

Let’s say we want to sort a sequence of integers. We could use the method sorted() from the Scala standard library:

def sorted[B >: A](implicit ord: math.Ordering[B]): Seq[A]

Ordering is a type class; you can tell from the docs:

Ordering’s companion object defines many implicit objects to deal with subtypes of AnyVal (e.g. Int, Double), String, and others.

See? Its companion object defines implicit objects to deal with various types, just like our Appendable object had implicit objects AppendableInt and AppendableString.

By now we can tell what sorted() is trying to say with its signature. It says “when I’m invoked on a sequence of items of type A, I will sort that sequence, but in order to know how to compare items of type A there needs to be an implicit Ordering in scope”. By the way, note that the Ordering needs to be parameterized with A or some supertype of A. This makes sense; if an Integer is a Number, and I know how to sort Numbers, then I know how to sort Integers. But that’s beside the point here, just wanted to clear that up.

Now, method sorted() didn’t always look like this. The Ordering trait (we can also call it “type class Ordering”) is the new hipster kid on the block; before it, the streets belonged to the Ordered trait. This was a plain, simple trait; your class extends it, implements its compare() method and it can be sorted. What’s wrong with that? Nothing, until you reach the point where you want to support more than one sorting criterion (e.g. you have strings containing only numbers and you want to be able to choose whether to sort them alphanumerically, so that “11” comes before “2”, or naturally, so that “2” comes before “11”).

Type classes, as we discussed earlier, provide us with this kind of flexibility. So instead of having my class extend the Ordered trait and provide just one hardcoded sorting criterion, with the Ordering trait it doesn’t have to extend anything. Instead, when I’m doing the sorting, I will provide an Ordering that suits my needs — I will import an alphanumerical ordering, a natural one or some third kind. Hell, maybe I’ll write my own and provide that. It even says so in the docs for Ordering (bolding done by me):
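Here’s that numeric-strings scenario as a small sketch (the “natural” ordering is one we define ourselves, not a standard library value):

```scala
val nums = Seq("11", "2", "1")

// The default String ordering is lexicographic, so "11" sorts before "2":
val alphanumeric = nums.sorted // List("1", "11", "2")

// A "natural" ordering we provide ourselves (assumes numeric strings):
val natural: Ordering[String] = Ordering.by[String, Int](_.toInt)
val numeric = nums.sorted(natural) // List("1", "2", "11")
```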

[Ordering] and scala.math.Ordered both provide this same functionality, but in different ways. A type T can be given a *single* way to order itself by extending Ordered. Using Ordering, this same type may be sorted in *many other ways*. Ordered and Ordering both provide implicits allowing them to be used interchangeably.

So, type classes have demonstrated greater flexibility, which also means greater power. As far as view bounds vs context bounds are concerned, things are even clearer — view bounds have been deprecated and have therefore completely lost the fight. Martin Odersky himself said:

Context bounds are essentially the replacement of view bounds. It’s what we should have done from the start, but we did not know better back then.

Poor view bounds. Implicit conversions are still in the game; sometimes you don’t really need type classes and using implicit conversions makes things simpler. View bounds, however, completely lost the fight against context bounds. The only reason why we were even talking about them is because they are still very much present in lots of existing codebases, so if you run into them you will know what’s going on.

Conclusion

As I said, you can still use implicit conversions for simple stuff. They’re fine for specific, non-modular, ad-hoc conversions. For example, if you have a standard MVC stack where controller uses services which operate on the database, you may want to automatically transform each database response into service response, and service response into controller response. No need for type classes there. However, when writing reusable, modular, generalized code that will be used and extended by others — gravitate towards type classes.

Regarding view and context bounds, if you want to follow best practice, you should always prefer context bounds. If you really need what a view bound provides, i.e. a method that requires some implicit conversion, you can model that conversion as a type class instance and use the type class pattern with a context bound. Remember that this way you can provide different conversions, which gives you more flexibility.

As for the context bound syntax, using implicit parameter(s) instead (like we used “ev” in our example) is completely valid. Which style to prefer is a matter of personal taste and existing coding conventions in your team.

Just before we say goodbye, let me state for the record that there are situations when type classes are just one of the possible solutions. For example, type classes are great if you want to have some generic code that works for many different types that don’t share a common base class. But if you have types that do share the same base class (e.g. Apple, Banana and Cherry all extend Fruit), then you can e.g. override Fruit’s common method in each concrete fruit, so that apples, bananas etc. all have different implementations of the same method.

However, I have to be honest and say that I’m not a huge fan of subtyping, so even in that case I would go for a type class. Subtyping doesn’t even exist in pure FP languages like Haskell. And in Scala it confuses the compiler in some situations and confuses the user in others. Just try having two classes Sub and Sup where Sub extends Sup, a method that takes an implicit parameter of type Sup, and implicit values for both Sub and Sup available in scope; what do you think happens? An ambiguous implicit values error? Nope. Sup is used, since Sup is what’s needed? Nope. What happens is that Sub is used, because the compiler sees them as different enough not to raise an ambiguity error, but equal enough to decide that not only is Sub a perfectly valid Sup, it’s even the “better” one since it’s more specific.

Now you see the problem with subtyping. The main rule of OOP inheritance is the “is a” rule, so an Apple is a Fruit, but from the type system’s point of view it’s not. Type Apple is type Apple, type Fruit is type Fruit. This is the reason why extending case classes is deprecated (perhaps even not possible any more, I’d have to check). Because how do you treat Apple(“red”) equals Fruit(“red”)? True or false? OK, I’m digressing too much; perhaps I should write a separate blog post on this topic. But yeah, whenever someone says you can simply use subtyping, feel free to say “I’d rather stick with true FP concepts”.
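That Sub/Sup experiment, as a runnable sketch:

```scala
class Sup
class Sub extends Sup

implicit val sup: Sup = new Sup
implicit val sub: Sub = new Sub

def resolve(implicit ev: Sup): Sup = ev

// No "ambiguous implicit values" error: the compiler treats `sub` as
// more specific than `sup` and silently picks it:
val pickedSub = resolve.isInstanceOf[Sub] // true
```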

That’s it. Feel free to contact me at sinisalouc@gmail.com with any feedback you may have, good or bad, and also to follow me on Twitter.