In previous posts, we covered the most popular implicit use cases. What is left to complete the picture is the mechanics of implicits themselves.

Implicit resolution

To have a complete understanding of implicits, we should understand how the compiler decides what goes into the scope and which implicit should be used.

The high-level description is the following:

- an implicit is a mapping from a type T to a value t: T,
- in order to find that value, Scala gathers all definitions in scope that might fit, and then takes the most specific one,
- if it cannot point to a single definition that is the most specific, resolution fails as the implicit is ambiguous,
- if the best definition builds the value incrementally from other implicits and those are ambiguous, resolution fails with a divergent implicit expansion,
- of course, if there is not a single implicit that can match the definition, it is simply not found. This includes situations where there is a definition for your type T, but one of the implicits required to build the value is missing.
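To make this concrete, here is a minimal sketch (all names below are made up for illustration): a plain implicit value plus an implicit def that builds its value incrementally from another implicit.

```scala
trait Codec[T] { def name: String }

// a direct mapping from the type Int to a value
implicit val intCodec: Codec[Int] =
  new Codec[Int] { def name = "int" }

// a definition that builds the value incrementally from another implicit
implicit def listCodec[T](implicit c: Codec[T]): Codec[List[T]] =
  new Codec[List[T]] { def name = s"list(${c.name})" }

implicitly[Codec[List[Int]]].name // "list(int)" - built from listCodec and intCodec
// implicitly[Codec[String]] would not compile: no matching definition in scope
```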

That begs 2 questions: what goes into the scope, and what does it mean for a definition to be more specific?

Implicit scope

Scala will put the following definitions into the implicit scope:

- implicit definitions defined in or passed into the current scope,
- implicit definitions defined in superclasses that the current trait/class/object extends,
- implicit definitions imported into the current scope - basically anything that you can access directly (without any prefix, as in prefix.something),
- implicit definitions defined in companion objects of types related to the type T you are resolving for:
  - if T is a compound type T1 with … with Tn, then all implicits from the companions of T1, …, Tn,
  - if T is a parametrized type S[T1, ..., Tn], then implicits from the companions of S, T1, …, Tn,
  - if T is a singleton type t.type, then all implicits available for t,
  - if T is a type projection S#U, then all implicits available for S as well as for T itself,
  - if T is a type alias, then all implicits for its expansion,
  - if T is an abstract type, then all implicits for its upper bound,
  - if T is resolved for an implicit conversion with arguments T1, …, Tn to type U, then all implicits from the companions of T1, …, Tn and U,
  - implicits for parts of quantified (existential or universal) and annotated types (e.g. T forSome { types }) are in the implicit scope of T,
  - and, of course, T's own companion.

As we can see, the compiler has a lot of implicits to consider for any type even remotely more complex than a non-parametric type that extends nothing. As a rule of thumb, we can assume that if U is somehow used in T's definition, its companion object will be scanned.
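For example (a hypothetical type class, but the mechanism is real): an instance placed in the companion object is found without any import, because the companion is part of T's implicit scope - and for Jsonable[User], the companions of both Jsonable and User are scanned.

```scala
trait Jsonable[T] { def toJson(t: T): String }

final case class User(name: String)
object User {
  // lives in the companion, so it is always in the implicit scope of User
  implicit val userJsonable: Jsonable[User] =
    new Jsonable[User] { def toJson(u: User) = s"""{"name":"${u.name}"}""" }
}

def asJson[T](t: T)(implicit j: Jsonable[T]): String = j.toJson(t)

asJson(User("Ann")) // works with no imports: {"name":"Ann"}
```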

Without these properties, type class derivation would be much more troublesome - though compilation would probably also be a lot faster.

Best definition

All right: we have a set of potentially tons of implicit definitions, and we have to decide on just one of them. How does the compiler make that decision? The same way it resolves overloading - by choosing the most specific alternative:

- for each pair of definitions A and B, calculate the relative weight: if one is as specific as the other, it scores 1, otherwise 0; if one is a class/object derived from the other class/object, it scores 1, otherwise 0; then sum the values,
- if the relative weight of (A, B) is greater than that of (B, A), then A is more specific than B,
- if the relative weight of (A, B) is smaller than that of (B, A), then B is more specific than A.

Intuitively, one definition is as specific as another if you could replace a call to one with a call to the other without type issues (for a more formal definition, look at the specification).

For a class C to be derived from D, C must be a subtype of D, or C/D must be companion objects of classes such that one extends the other (again, an informal definition - take a look at the specification).

Considering that we use the terms as specific as and more specific than, we are actually working with a subset of a partial order and then looking for the minimum of that subset (or maximum, depending on how we define it). It makes sense, as the type hierarchy itself is also a partial order.
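A small illustration of the ranking (hypothetical classes): both definitions below match implicitly[Animal], but the one producing a subtype wins as the more specific alternative, so there is no ambiguity.

```scala
class Animal { override def toString = "animal" }
class Dog extends Animal { override def toString = "dog" }

implicit val anAnimal: Animal = new Animal
implicit val aDog: Dog = new Dog

// both values fit, but Dog <: Animal makes aDog the more specific definition
implicitly[Animal].toString // "dog"
```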

Troublesome cases

Most of the time such rules result in pretty understandable behavior. There are some exceptions, though.

What if you have 2 definitions that are different, BUT could be used to produce the same type? If the ranking decides that neither is more specific than the other, you'll get an error (ambiguous implicit or diverging implicit expansion).

```scala
class A
class B[T]

object A {
  implicit val ai: B[A] = new B[A] { override def toString = "A" }
}
object B {
  implicit val bi: B[A] = new B[A] { override def toString = "B" }
}

implicitly[B[A]] // error: ambiguous implicit values
```

One way of handling such a situation is declaring/importing the implicit manually - if it was taken from a companion, putting it directly into the scope makes the relative-weight ranking consider the local implicit more specific than the one from the companion.

```scala
import A._
implicitly[B[A]].toString // "A"
```

That would still be an issue, though, if you were working on automatic type class derivation and wanted both rules to appear in the same companion object, handling different cases.

```scala
class A
object A {
  implicit val a = new A { override def toString = "A" }
}
class B
object B {
  implicit val b = new B { override def toString = "B" }
}
class C
object C {
  implicit def a2c(implicit a: A) = new C { override def toString = a.toString }
  implicit def b2c(implicit b: B) = new C { override def toString = b.toString }
}

implicitly[C] // error: ambiguous implicit values
```

In such a case we can affect the relative-weight algorithm by defining them in different classes/traits:

```scala
class C
object C extends LowPriorityImplicits {
  implicit def a2c(implicit a: A) = new C { override def toString = a.toString }
}
trait LowPriorityImplicits { self: C.type =>
  implicit def b2c(implicit b: B) = new C { override def toString = b.toString }
}

implicitly[C].toString // "A"
```

As a matter of fact, this is quite a popular pattern. You can find LowPriorityImplicits in scala.Predef and in virtually all of the more complex shapeless-based projects (e.g. the majority of Typelevel projects).

In some cases neither of these is applicable - often when you have some sort of cyclic dependency between implicit definitions. As the Scala compiler has trouble with such definitions at the moment, shapeless provides the Lazy macro - it defers the resolution of some dependency and thus breaks the cycle.

In future versions of Scala Lazy should no longer be needed, as it will be replaced with by-name implicits.
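The idea behind by-name implicits can be sketched as follows (a toy example in the spirit of SIP-31, assuming Scala 2.13 or newer): the => marks the implicit parameter as by-name, which lets the compiler tie the recursive knot instead of reporting a divergent expansion.

```scala
trait Fix { def next: Fix }

object Fix {
  // without the `=>` this search would diverge: resolving Fix requires a Fix
  implicit def fix(implicit rec: => Fix): Fix =
    new Fix { def next = rec }
}

val f = implicitly[Fix]
f.next // resolves: `rec` lazily refers back to the instance being constructed
```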

Debugging hints

The last topic about implicits is how to debug them. Let’s face it: it is not easy.

Well, there is a flag -Xlog-implicits which prints some internal messages… but it generates tons of logs, as we always use tons of implicits - even if we don't realize it. An example is the CanBuildFrom type class, which is used to describe whether the transformation we are currently performing is allowed (e.g. turning a collection of pairs into a Map). So if you are (flat)mapping a collection, implicits are there.
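For reference, a sketch of how the flag can be enabled (assuming an sbt build; the option can also be passed to scalac directly):

```scala
// build.sbt - enable implicit-resolution logging for the whole project
scalacOptions += "-Xlog-implicits"
```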

Some improvement over it might be the splain compiler plugin. Its author aims to make the compiler generate better error messages about implicits, which is why the plugin pretty-prints the implicit derivation tree.

If we are trying to debug e.g. type class derivation, my first idea is to run a REPL. Then import things, so that your scope is as close as possible to the one you are trying to fix (you might need to comment something out in order to make the project compile and run the console). Then, you have 2 tools at your disposal:

- implicitly[T] provided by scala.Predef,
- the[T] from the shapeless library - it works similarly to implicitly but preserves more type information (e.g. type refinements).

```scala
sealed trait Natural
sealed trait Zero extends Natural
object Zero extends Zero
class Succesor[N <: Natural] extends Natural

trait Add[N <: Natural, M <: Natural] { type Result <: Natural }
object Add {
  implicit def zeroCase[M <: Natural]: Add[Zero, M] { type Result = M } = null
  implicit def succCase[N <: Natural, M <: Natural](
      implicit add: Add[N, M]
  ): Add[Succesor[N], M] { type Result = Succesor[add.Result] } = null
}

implicitly[Add[Succesor[Zero], Zero]]
// Add[Succesor[Zero], Zero] = null

import shapeless._
shapeless.the[Add[Succesor[Zero], Zero]]
// Add[Succesor[Zero], Zero]{type Result = ammonite.$sess.cmd6.Succesor[ammonite.$sess.cmd6.Zero]} = null
```

Another tool you should have under your belt is scala.reflect.runtime.universe.reify :

```scala
show(reify(implicitly[Add[Succesor[Zero], Succesor[Zero]]]).tree)
// String = "Predef.implicitly[cmd7.Add[cmd6.Succesor[cmd6.Zero], cmd6.Succesor[cmd6.Zero]]](cmd7.this.Add.succCase(cmd7.this.Add.zeroCase))"
```

These examples should give you a general idea of how these utilities work.

If I wanted to debug a shapeless-generated type class, I would try to:

- generate Generic.Aux (maybe using some utility function to avoid passing the other type parameter myself),
- check if the derivation of the intermediate HList or Coproduct works,
- check if I am able to convert the generic representation back to the specific type.

If you are terrified that you would have to write a lot of intermediate implicits by hand, remember: you are testing derivation and resolution in a REPL, not writing production code. Using null.asInstanceOf[X] and ??? is perfectly fine for populating the scope with mock data.
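For instance (a hypothetical type class), a stubbed instance is enough to exercise resolution in the REPL:

```scala
trait Encoder[T] { def encode(t: T): String }

// the value itself is never used - we only care whether resolution succeeds
implicit val stringEncoder: Encoder[String] = null.asInstanceOf[Encoder[String]]

implicitly[Encoder[String]] // compiles: resolution works, even though the value is a stub
```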

In some cases (though not many), your IDE can help you. IntelliJ has an option for showing implicits: by putting the cursor in the place you are interested in and selecting View > Implicit Parameters (or Ctrl + Shift + P on non-Mac computers), you will see where the supplied implicit came from. As far as I remember it is far from perfect, but it's better than nothing.

One last thing you can do is customize error messages using @implicitNotFound and (since 2.12) @implicitAmbiguous . If you know beforehand that your trait/class requires some import, a local definition, or that the enclosing class must extend some trait, you can leave that information there. This way, in the future, when you forget about these assumptions, the error message can get you back on track much faster than if you had to reverse-engineer your past thinking. The same goes for colliding implicits - if you know which imports might clash, you can leave yourself a reminder in the annotation.
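A sketch of such an annotation (hypothetical type class and message):

```scala
import scala.annotation.implicitNotFound

// ${A} in the message is substituted with the actual type at the failing call site
@implicitNotFound("No Mapper[${A}] in scope - did you forget an import?")
trait Mapper[A]

implicit val intMapper: Mapper[Int] = new Mapper[Int] {}

implicitly[Mapper[Int]] // resolves fine
// implicitly[Mapper[String]] would fail to compile with the custom message above
```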

Summary

Implicits are a really useful mechanism without which Scala would not be the language it is today. They make it possible to automate a lot of code generation (type class derivation) and - if used properly - they can help express assumptions about your code (type bounds, implicit evidence).

However, this mechanism comes with a cost: implicits are hard to debug, might require in-depth knowledge about how resolution works, and generate a lot of work for the compiler. As such, they might become dangerous in inexperienced programmers' hands.

In the end, each team/project needs to weigh the pros and cons themselves, based on their own values, preferences, and limitations, to answer questions like how much work they want to offload to the compiler and how much extra compile time is bearable.