When I first started learning Scala, one of the most fascinating and scary features for me was implicits. When do you use them? Don’t they hurt readability? How do you discover that an implicit is being used?

As someone who came from Python, where explicit is a good word, implicit sounded like it was just hiding stuff from the caller that the caller should probably be aware of. It seemed like a dangerous feature whose only purpose was to save on keystrokes.

I was wrong. Implicits are a powerful mechanism that enables a wide array of programming patterns! Many languages support one or more of these patterns using dedicated keywords or special cases, and many have tried to unify them. In the Python community, for example, comprehensions are the preferred way to build and consume generators. This style of coding gathers a wide group of features for operating over collections under a single swiss-army knife.

Scala hoped to build on the simplicity of the for comprehension by unifying as many features as it could under a single semantic tool. Scala is often criticized for having features that seem more academic than immediately useful, but this, I would argue, is why Scala is so forward-thinking. Scala invested in the idea that people want features that maximize their functionality by having minimal impedance mismatch with other features in the language. As people become familiar with these features separately, they look for a way to simplify these concepts and understand them across languages. Scala makes the bet that “academic,” far from being a dirty word, is actually a strength that people will come to find too useful to give up.

I would argue that of all of the powerful and orthogonal features of Scala, implicits enable the most unique and – once you learn when and how to use them sparingly – indispensable language features.

In concrete terms, Scala enables these semantics with implicit parameters and implicit conversions. This week, we will go over implicit parameters and how they enable context passing (passing an object implicitly) and type classes (via the concept pattern).

Implicit parameters

In Scala 2.x, you can give any method an implicit parameter list:

```scala
// Ordering[Char] is a "concept" describing how to order a sequence of characters
def sort(explicitArg: String)(implicit implicitArg: Ordering[Char]) = ???
```

There is another syntax that desugars into an implicit parameter, called context bounds:

```scala
// The [A: Ordering] is called a context bound, and it is meant to create an ad-hoc
// bounding condition on the given type A.
// It means that there must be an implicit concept in scope that describes the
// given type A as having the Ordering concept.
def sort[A: Ordering](explicitArg: A) = ???

// desugars into
def sort[A](explicitArg: A)(implicit `ev$0`: Ordering[A]) = ???
```

The rules for finding an implicit can be daunting at first, but they are designed to unify disparate semantic patterns without violating any assumptions about potentially overlapping language features. The primary goal of implicit search is to find the most specific implicit value or method in lexical scope to inject into the parameter at compile-time.

“Lexical scope” is just an academic term for what can be determined by reading the code syntactically, rather than by depending on any values at run-time. Because it is lexical, the search can be checked at compile-time. This is what makes the tool so powerful.
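To make that concrete, here is a minimal sketch (ReversedOrderings and smallest are hypothetical names) showing that the instance the compiler injects depends only on what is visible at the call site; an explicitly imported implicit takes precedence over the default found in the Ordering companion object:

```scala
object ReversedOrderings {
  implicit val reversedInt: Ordering[Int] = Ordering.Int.reverse
}

object ScopeDemo {
  def smallest[A](xs: Seq[A])(implicit ord: Ordering[A]): A = xs.min

  // Resolved from the Ordering companion object: the default Ordering[Int]
  val default: Int = smallest(Seq(3, 1, 2)) // 1

  // The imported implicit is closer in lexical scope, so it wins
  val reversed: Int = {
    import ReversedOrderings.reversedInt
    smallest(Seq(3, 1, 2)) // 3
  }
}
```

Both results are fully determined at compile-time; no run-time lookup is involved.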

In fact, Scala gives you a way to write your own compiler error message for missing implicits using an annotation.

```scala
@implicitNotFound(
  "Please define a way to sort instances of ${A} in scope by " +
  "importing or implementing an implicit Ordering[${A}].")
trait Ordering[T] {
  def compare(x: T, y: T): Int
}
```

The default message is usually good enough, but if there is a particular place you want the caller to import from, you can put it into the compiler error message to make it easy to know what to do next.

Implicit parameters enable a wide range of design patterns. They all have different names, but thanks to the insight of the Scala language designers, they are all implemented with this single feature:

- Concept Pattern (Ad-hoc polymorphism)
- Implicit Contexts (Ad-hoc polymorphism)
- Type Classes (Ad-hoc polymorphism)
- Theorem Proving

All of these are covered elegantly in Haoyi Li’s article on implicit design patterns.

A good place to start explaining the theory behind implicit parameters is to talk about Context Passing and the Concept Pattern. The concept pattern – a more flexible OOP version of Type Classes using generic interfaces and objects – is a fundamental pattern that implicits were designed to provide.

In the case of context passing, the “context” is typically a single concrete type with everything needed packaged into the object (ex. a database session). The concept pattern is Scala’s way of implementing Haskell-like type classes. In Haskell, all type classes must be “coherent”, which means that there is only one definition for the behavior of a given type. Scala does not enforce this restriction, and instead allows the instance to vary based on what is available in implicit scope, so long as there is only one implicit that matches. In this way, the concept pattern can be considered a general case of context passing in which the behavior is parametrized by one or more type arguments and is defined or imported into scope.

In other words, in the concept pattern, the outer type represents a concept about the inner type. Ex: Ordering[Int] tells you how to order integers (within the scope into which the implicit is imported), while DatabaseSession is a concrete concept (aka a context) that is required to execute the function. The main difference between context passing and the concept pattern is their use cases. Typically, when passing a context, the values of the context vary between requests, users, or other contextual information provided to the scope of code in question. With the concept pattern, you are typically dealing with a small number of implementation choices (how many ways are there to sort integers?) and will just import a global value that does not depend on the runtime context. The concept pattern is most often used to implement type classes.
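As a sketch of the distinction, here is a hypothetical DatabaseSession context threaded implicitly through a call (the names are illustrative, not from any real library):

```scala
// Context passing: one concrete type carrying everything the function needs
case class DatabaseSession(currentUser: String)

object ContextDemo {
  // The session travels implicitly instead of being repeated at every call site
  def loadProfile(id: Int)(implicit session: DatabaseSession): String =
    s"profile $id loaded for ${session.currentUser}"

  implicit val session: DatabaseSession = DatabaseSession("alice")

  val result: String = loadProfile(42) // the compiler injects the session
}
```

In a real server, the implicit session would vary per request, while an Ordering[Int] used via the concept pattern would be a single global value.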

Type Classes

“Type class” is often used loosely in Scala, as it is often confused with the concept pattern. The idea of a “type class” is inspired by Haskell’s class keyword. It is a way to specify the ad-hoc behavior of a function without having to overload some static method name for every desired type. In fact, by avoiding method overloading, Scala made it possible to separate the implementation (the type class instance) from the interface (the type class definition).

Since defining global type class instances is so common, Scala implements special rules and syntactic sugar for them. You can define global implicit behavior for a type by putting the implicit implementation of a concept (ex: Ordering.by((u: User) => u.id) ) in the companion object (ex: inside object User ). You can declare that a generic type needs a capability on a method by using the context bound syntax (see the code example above for def sort[A: Ordering] ). This hints at the fact that an important use case for implicits is to implement type classes.
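A minimal sketch of this sugar, using a hypothetical User type: placing the instance in the companion object puts it in implicit scope, so callers need no import at all.

```scala
case class User(id: Int, name: String)

object User {
  // Global type class instance, discovered via User's companion object
  implicit val byId: Ordering[User] = Ordering.by((u: User) => u.id)
}

object CompanionDemo {
  // .sorted finds Ordering[User] automatically through implicit scope
  val sorted: Seq[User] = Seq(User(2, "b"), User(1, "a")).sorted
}
```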

The type class pattern is so powerful that Haskell doesn’t even have any other mechanism for interface inheritance. In fact, the young language Rust recognized that for most low-level programming, interface inheritance is less useful than simple functions and type classes. Rust elevated the type class concept into a first-class citizen, called traits.

The one major difference between Scala and the purer implementations in Rust and Haskell is that Scala does not enforce type class coherence, defined as having only a single, unambiguous definition of the concept for any given type. Rust and Haskell require coherence; Scala does not. By enforcing coherence and removing object-oriented inheritance, Haskell and Rust avoid any ambiguity about what behavior to use at any call site, and therefore you don’t need to remember which implicits are in scope, as the compiler always knows how to find them unambiguously.

This fits nicely into languages like Rust and Haskell because they don’t have OO-style inheritance, but it is a problem for Scala, whose hierarchical inheritance model would force you to choose one mechanism or the other. Scala chose to avoid this problem altogether by dropping coherence while keeping the same syntactic advantage of resolving ad-hoc functionality at the call site, by designating certain locations from which implicits are imported automatically. Additionally, the Scala compiler provides the same level of type-safety as Haskell by raising an ambiguous implicit error when it finds multiple matching instances in the same scope.

(In a future blog post, I’ll discuss how Dotty / Scala 3 plans to fix this issue using a Coherent trait to tell the compiler to ignore ambiguity errors and treat all available implicit instances as coherent).

```scala
trait Show[T] {
  def show(value: T): String
}

object Show {
  def apply[T](implicit show: Show[T]): Show[T] = show

  def fromToString[T]: Show[T] = new Show[T] {
    override def show(value: T): String = value.toString
  }

  implicit object showString extends Show[String] {
    override def show(value: String): String = value
  }

  implicit val showInt: Show[Int] = fromToString[Int]
  implicit val showBoolean: Show[Boolean] = fromToString[Boolean]

  implicit def showSeq[T: Show]: Show[Seq[T]] = new Show[Seq[T]] {
    override def show(values: Seq[T]): String =
      values.map(Show[T].show).mkString("Seq(", ", ", ")")
  }
}
```

This allows the following syntax:

```scala
Show[Int].show(1)
Show[Seq[Seq[Int]]].show(Seq(Seq(1, 2), Seq(3, 4)))
```

The reason we can access a deeply nested Show[Seq[Seq[Int]]] is that Scala wanted to provide as close an experience to the benefits of having type classes as possible. It gives you implicit chaining, which essentially allows implicits to recurse over their arguments by expanding an implicit search into a long chain of method calls.

You can see this feature used in the showSeq definition. By putting a Show context bound on the type parameter T, the compiler requires a definition of Show[T] in scope, and thus it can continue searching by substituting an implicit method call and searching for the shrunken type.

Explicitly, this would look like:

```scala
Show.showInt.show(1)
Show.showSeq(Show.showSeq(Show.showInt)).show(Seq(Seq(1, 2), Seq(3, 4)))
```

The implicit search will continue to expand and recurse until the correct sequence of method calls can be inlined, or it will stop when it detects that continuing to search would result in an infinite loop. The detection mechanism isn’t really important here; it is what makes combining multiple implicit chains possible. However, because Scala lacks coherence, it can throw some pretty cryptic “diverging implicit expansion” errors when it detects an infinite substitution loop.

An implicit search will continue to substitute implicits recursively until the concrete type is returned by the fully expanded chain of method calls.

These features combine to let you benefit from the composition of type-class-style inheritance without breaking object-oriented inheritance. But without coherence, Scala must also raise ambiguous and diverging implicit resolution errors.

Theorem Proving

Essentially, this comes from the Curry-Howard isomorphism, which states that types are theorems and programs that compile are proofs of the validity of those theorems.

When you require an implicit parameter, you can make the compiler do quite a lot of work using implicit chaining over arbitrarily complex types. Once the compiler finds the right implicit for the parameter, you can discard the value and safely rely on the proven theorem at runtime without any runtime checks. This is a powerful way to add type-safety without paying a runtime price for reflection.

```scala
case class MySeq[+A](underlying: Seq[A]) {
  // ...
  def flatten[B](implicit canViewAsSeq: A <:< Seq[B]): MySeq[B] = {
    MySeq(underlying.flatMap(canViewAsSeq(_)))
  }
  // ...
}
```
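For example (repeating the class definition so the snippet stands alone), flatten compiles only when the element type is itself provably a sequence:

```scala
case class MySeq[+A](underlying: Seq[A]) {
  def flatten[B](implicit canViewAsSeq: A <:< Seq[B]): MySeq[B] =
    MySeq(underlying.flatMap(canViewAsSeq(_)))
}

object FlattenDemo {
  // Compiles: the compiler proves Seq[Int] <:< Seq[Int]
  val ok: MySeq[Int] = MySeq(Seq(Seq(1, 2), Seq(3))).flatten

  // Does not compile: "Cannot prove that Int <:< Seq[B]."
  // val bad = MySeq(Seq(1, 2, 3)).flatten
}
```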

In fact, you can see where the <:< syntax comes from in Predef.scala . It isn’t a special feature of the language; it is library code.

```scala
@implicitNotFound(msg = "Cannot prove that ${From} <:< ${To}.")
sealed abstract class <:<[-From, +To] extends (From => To) with Serializable

private[this] final val singleton_<:< = new <:<[Any, Any] {
  def apply(x: Any): Any = x
}

implicit def $conforms[A]: A <:< A = singleton_<:<.asInstanceOf[A <:< A]
```

Explaining the reasoning behind this proof is not the intention of this blog post, but it is quite fascinating that you can prove quite a few theorems by leveraging the compiler to do the heavy lifting.

The compiler is just using the rules of implicit search to prove that one type is a subclass of another, by providing an implicit function from one to the other (using .asInstanceOf ), but only if it can find some case where -From and +To unify to a single common type A . According to the rules of variance, there must exist a single type A that is a superclass of From and a subclass of To in order to provide an implicit From <:< To . If the compiler can find such a type A , then it has proven that From must, by definition, be a type that is equal to or more specific than To .

Theorem proving is used extensively by the Shapeless library to provide the capability to do some very interesting nested shape shifting. For example, this feature relies on the compiler being able to recurse over recursively defined natural number Nat types to resolve to a single instance for any type of collection. The proof uses the Peano axioms to prove that two different Sized collections have the same number of elements.
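The same style of proof can be sketched without Shapeless. Below is a minimal compile-time Peano encoding (the names Nat, Zero, Succ, and Sum are illustrative, not Shapeless’s actual definitions); the implicit chain is the proof that 1 + 2 = 3:

```scala
sealed trait Nat
sealed trait Zero extends Nat
sealed trait Succ[N <: Nat] extends Nat

// Evidence that A + B = C
sealed trait Sum[A <: Nat, B <: Nat, C <: Nat]

object Sum {
  // Peano axiom: 0 + B = B
  implicit def base[B <: Nat]: Sum[Zero, B, B] = new Sum[Zero, B, B] {}

  // Peano axiom: if A + B = C then S(A) + B = S(C)
  implicit def step[A <: Nat, B <: Nat, C <: Nat](
      implicit ev: Sum[A, B, C]): Sum[Succ[A], B, Succ[C]] =
    new Sum[Succ[A], B, Succ[C]] {}
}

object NatDemo {
  type One = Succ[Zero]
  type Two = Succ[One]
  type Three = Succ[Two]

  // Compiles only because the compiler can derive the proof by chaining
  val proof: Sum[One, Two, Three] = implicitly[Sum[One, Two, Three]]
}
```

The value itself is discarded; the point is that the program compiled.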

This is obviously an advanced feature that probably only library writers need to make use of, but it shows how a single feature in Scala compounds with other features to provide an immensely powerful set of semantics.

Conclusion

Scala’s implicit parameters are a powerful language tool that allows functions to have context without requiring an Object-Oriented style class to be constructed ahead of time to hold the context. This allows static functions to behave differently in different contexts or on different types. This style of ad-hoc polymorphism is an exciting way to extend a function without having to match on all the possible types within the function. Instead you can require an implicit concept that describes the operation you need to perform on that type.

The concept pattern is a very common pattern in Scala – it is the de-facto way to do JSON serialization, for example. In many cases when you are sharing code, context-bounded methods provide a mechanism to share generic library methods that can operate on additional types without needing to extend all the methods of an interface or share an inheritance hierarchy in order to override behavior. This is a bread-and-butter pattern for ad-hoc method overloading. You should use it liberally in library code. It is often a flexible way to define behavior on other types without having to extend those types, which allows you to layer on functionality from separate libraries onto a single type.
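A minimal sketch of that de-facto JSON style (JsonWriter is a hypothetical trait, not the API of any particular JSON library):

```scala
trait JsonWriter[T] { def write(value: T): String }

object JsonWriter {
  implicit val intWriter: JsonWriter[Int] = (value: Int) => value.toString
  implicit val stringWriter: JsonWriter[String] = (value: String) => "\"" + value + "\""

  // Support for new shapes is layered on by adding instances,
  // not by editing a shared interface
  implicit def seqWriter[T](implicit inner: JsonWriter[T]): JsonWriter[Seq[T]] =
    (values: Seq[T]) => values.map(inner.write).mkString("[", ",", "]")
}

object JsonDemo {
  def toJson[T: JsonWriter](value: T): String = implicitly[JsonWriter[T]].write(value)
}
```

Libraries can then add instances for their own types in companion objects, and everything composes at the call site.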

Martin Odersky discusses the strengths and weaknesses of inheritance and type-classes in his talk “Scala with Style.” It boils down to what is called the expression problem. The expression problem is a new name for an old problem. The goal is to define a datatype by cases, where one can add new cases to the datatype and new functions over the datatype, without recompiling existing code, and while retaining static type safety (ex. no casts).

If you are expecting to extend the behavior of a fixed API over an ever-increasing number of subclasses, then you should consider inheritance (ex. a widget library in which much of the shared behavior is unlikely to depend on the subclass, but many subclasses of widgets will be created to define slight variations). If you are expecting to extend an ever-increasing API over a small or fixed number of classes that can be passed into these functions, then you should use the concept pattern.