Martin Odersky gave an excellent talk at Scala Days 2017 about implicits in Scala, when to use them, and how they will work in Dotty (soon to be Scala 3). I would like to examine how they work in Scala 2.x, along with some of the edge cases I have run into and the best practices you can use to avoid them in preparation for Scala 3, where many of these issues will be resolved.

Implicits are a rough but essential cornerstone of Scala. In summary, they enable library developers to create compact domain-specific languages and enable many of the features of functional programming, such as type classes and theorem proving, as well as solve some lingering problems in object-oriented programming, such as late trait inheritance and extension methods (which allow concrete classes to package additional implementation details outside of a core library).

The problem is that there are many challenges to safely using implicits, and many edge cases have been discovered around the implementation of implicits in Scala 2.x that developers must be aware of and avoid. In parts 1 and 2 of this series, I discussed the proper usage and dangerous edge cases of implicit parameters and implicit conversions. In addition to the odd behavior that can result from poorly defined implicit conversions, there are some syntactic design flaws that can cause friction.

Implicit Window Pain

For an example of a syntactic issue: since implicit parameter lists are invoked the same way as a normal parameter list, you can accidentally bump into an implicit parameter list that you didn’t know was there:

trait Add[T] { def add(a: T, b: T): T }

object Add {
  implicit object AddInt extends Add[Int] {
    override def add(a: Int, b: Int): Int = a + b
  }
}

def adder[T: Add](value: T): T => T = implicitly[Add[T]].add(value, _)

val addOne = adder(1)
println(addOne(2)) // prints 3

// if adder(1) returns a function that takes an Int, then surely this should work...
println(adder(1)(2)) // Nope... it fails to compile because the second parameter list expects an Add[Int], not an Int

Since the implicit parameter list must always be the single, final parameter list of a method, it is difficult to get the syntax you might want without jumping through some hoops, like using your own function-like container that adds the implicits at the end.

class Adder[T] {
  def apply(value: T)(other: T)(implicit adder: Add[T]): T = adder.add(value, other)
}

def adder[T]: Adder[T] = new Adder[T]

val addOne: Int => Int = adder(1)(_)
println(addOne(2)) // prints 3
println(adder(1)(2)) // prints 3

Implicit Confusion

Another common and painful mistake is to use implicit conversions to blend two types that share common method names. In fact, it’s almost impossible to avoid this issue, because all objects share the equals, hashCode, and toString methods.
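To make this concrete, here is a small Scala 2 sketch (the Meters type and conversion are hypothetical) showing why the universal methods are the problem: since every object already has an equals, the implicit conversion never fires for it, with surprising results:

```scala
import scala.language.implicitConversions

// hypothetical types: blend Double into Meters via an implicit conversion
case class Meters(value: Double)
implicit def doubleToMeters(d: Double): Meters = Meters(d)

val m: Meters = 1.5 // the conversion fires here, because a Meters is expected

// but equals exists on every object, so no conversion is ever applied:
val looksEqual = Meters(1.5).equals(1.5) // compares a Meters to a raw Double, which is false
```

The conversion only helps when the compiler is looking for a Meters; equals accepts Any, so there is never a reason for it to kick in.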

In the Scala standard library, in the scala.collection package you can find JavaConversions and JavaConverters. These objects provide conversions between Scala Iterables and Java Iterables, along with many other collections. The problem is that having implicit conversions that can go in both directions is dangerous: it makes the meaning of a piece of code depend on the implicit context of how you got to that line of code.

Scala’s collections are often immutable and have a deep definition of equality. Java’s collections are often mutable and have a shallow definition of equality. These are very different sets of behaviors, and whether you are dealing with a Scala collection or a Java collection depends on which method you call. If you call .equals, you might be talking to the Java collection, but if you call .map, you’ll be talking to a Scala collection. This can create an immense amount of miscommunication.

As mentioned in my previous post, the best workaround for this is to add explicit extension methods that will perform the work. You can pass any necessary implicits to that method as well.
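The standard library’s JavaConverters object is a good model for this approach: it replaces the bidirectional implicit conversions with explicit asScala/asJava extension methods, so the conversion is always visible at the call site. A small sketch:

```scala
import scala.collection.JavaConverters._

val scalaList = List(1, 2, 3)

// explicit decoration: you always know which collection API you are talking to
val javaList: java.util.List[Int] = scalaList.asJava
val backToScala = javaList.asScala.toList
```

Because the wrapper keeps Java equality semantics, scalaList == javaList is false even though the elements match; only the explicit round trip restores Scala’s deep equality.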

Dotty and the Future of Scala

Researchers at EPFL have been working on a new compiler for a Scala-like language called “Dotty”, which is planned to replace the scalac compiler and become the standard compiler for Scala 3. This transition could take several years, as many projects will need to be cross-compiled against both major versions of Scala.

The primary goal of Dotty is to simplify the type system by strengthening the compiler’s ability to resolve dependent object types correctly. Scala already has inner classes, objects, and type aliases; however, they are not quite capable of unifying type parameters with abstract type members, supporting union types, and so on.

It turns out that the lack of dependent object type resolution was also a big problem for figuring out how to support implicits in a way that is context-aware. This is the source of much of the idiosyncrasy of Scala’s implicit language feature.

One thing that will be receiving significant attention in Dotty, and thus in future versions of Scala, will be to refine the power of implicits to solve more programming problems using stateless and functional mechanisms while removing the dangerous and sharp glass windows left around by implicit conversions.

If this interests you, I highly recommend checking out Martin Odersky’s keynote at Scala Days 2017. It was an eye opener for how we might write functional programs with a lot less boilerplate in the future.

To give you a taste, in case you don’t have time to watch the keynote, I’ve compiled a couple of the examples from the presentation in Dotty. I will demonstrate some of the upcoming features and the (I think) profound impact they will have on the way we write Scala code.

Ambiguity and Coherence

One rule from Haskell’s typeclasses that Scala did not implement is coherence. There are good reasons for this, as coherence is strict and prevents some useful programming patterns with implicits, such as defining multiple ways to sort integers with Ordering[Int]. However, without the ability to opt in to declaring that a type is coherent, you cannot leverage the compiler to resolve simple and obvious ambiguities.
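For instance, both of these Ordering[Int] instances come from the standard library, and Scala happily lets you pick between them at each call site — flexibility that a strict coherence rule would forbid:

```scala
// Two lawful but different Ordering[Int] instances; Scala lets both exist,
// which is useful, but it means the compiler can never assume there is
// exactly one "true" instance for a given type.
val ascending: Ordering[Int] = Ordering.Int
val descending: Ordering[Int] = Ordering.Int.reverse

val xs = List(3, 1, 2)
val upSorted = xs.sorted(ascending)     // pass the instance explicitly
val downSorted = xs.sorted(descending)  // or pick the other one
```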

For example, let’s say you want to add some compile-time constraints requiring implicit capabilities in your code.

trait CanDrive
trait CanDriveCar extends CanDrive
trait CanDriveTruck extends CanDrive

def driveToStore(implicit cd: CanDrive) = ???
def driveTruckForWork(implicit cd: CanDriveTruck) = ???
def driveHome(implicit cd: CanDriveCar) = ???

def driveToWorkAndBack(implicit cdc: CanDriveCar, cdt: CanDriveTruck) = {
  driveToStore // compile error: the compiler doesn't know whether to pass CanDriveCar or CanDriveTruck
  driveTruckForWork
  driveHome
}

What you’ll notice is that the compiler is confused by something that should be pretty obvious to everyone who reads the code. It doesn’t matter which capability you pick (in this case, whether you drive your car or your truck to the store) because both represent a capability that driveToStore is able to accept and handle appropriately. The compiler could plug either instance into the driveToStore method; so long as these types follow the laws of coherence, they will behave the same regardless of which instance is chosen by the compiler.

Dotty now provides the ability to opt in to coherence. Just extend scala.typeclass.Coherent to write coherent type classes that will work just as you would expect if you come from Haskell.

Implicit Functions

I want to turn now to probably the juiciest new language feature: implicit functions.

Just as you have Function[A, B], often written as A => B, you will now have ImplicitFunction[A, B], which can be written as implicit A => B. In other words, implicit is now a part of the type declaration of a function.

case class Request(headers: Map[String, String])
case class User(name: String)
case class Context(user: User, request: Request)

object Context {
  def user(implicit ctx: Context): User = ctx.user
  def request(implicit ctx: Context): Request = ctx.request
}

object DottyImplicits {
  // notice: the type of f contains the word "implicit"
  def withImplicitContext(f: implicit Context => Unit): Unit = {
    // framework code would provide this context
    implicit val ctx = Context(User("Jeff"), Request(Map()))
    f
  }

  def main(args: Array[String]): Unit = {
    // notice: the following line doesn't require { implicit ctx => ... }
    withImplicitContext {
      println(Context.user.name) // compiles
    }
    Context.user.name // doesn't compile
  }
}

This is a subtle change, but it has profound effects on the way you can pass implicit context between functions.

In the past, you would have to specify implicit context => at the beginning of every block where you wanted an implicit context, and if you had multiple implicits, you would have to import them explicitly. This made context passing via implicits a bit fragile, as you might have to make thousands of source code changes in order to add additional context. Now, this can all be resolved implicitly and is available for the developer to inspect via the type signature.
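For comparison, here is a small Scala 2 sketch (the names are my own) of the boilerplate just described, where every block has to re-bind the context with implicit ctx => before anything can use it implicitly:

```scala
case class User(name: String)
case class Context(user: User)

// in Scala 2, a framework can only hand you the context as an explicit function argument
def withContext[A](f: Context => A): A = f(Context(User("Jeff")))

val greeting = withContext { implicit ctx =>
  // ctx is only in implicit scope because we wrote `implicit ctx =>` ourselves
  s"Hello, ${implicitly[Context].user.name}"
}
```

Every such block repeats the `implicit ctx =>` pattern; an implicit function type moves that binding into the type itself.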

This makes implicits more composable, as they can now be abstracted into type aliases in just the same way as normal functions and provide the feeling of automatically imported implicit context. This makes them an ideal candidate for creating domain-specific languages in which the context of the domain can be passed along with the return values of a function. It allows you to have a narrow scope in which a context is implicitly available without having to import a ton of implicits ahead of time (like importing the.entire.universe._ at the top of a file).

Additionally, implicit functions are actually just syntactic sugar for inlined method invocations, so they are heavily optimized for performance. You can’t abstract over implicit functions in the same way that you can with functions, because they are not objects at runtime, but you can easily convert an implicit function into a function object. In fact, Scala will do this automatically if you attempt to pass an implicit function as an explicit function of the same type.

object Example {
  case class A(msg: String)

  val f: implicit A => Unit = { implicit a => println(a.msg) }

  val g: A => Unit = { implicit a => f }

  // does not compile:
  // val c1: A => Unit = f andThen { _ => println("goodbye world") }

  def fnOp[A, B, C](fn1: A => B, fn2: B => C): A => C = fn1 andThen fn2

  // implicitly converts f to a function
  val c2 = fnOp(f, _ => println("goodbye world"))
}

// must pass the argument explicitly here
Example.c2(Example.A("hello world"))
// prints:
//   hello world
//   goodbye world

implicit val a: Example.A = Example.A("hello implicit world")
Example.f
// prints:
//   hello implicit world

Although this may seem like just syntactic sugar, it makes a whole style of programming with context-aware blocks of code much easier, without the need for thread-locals and without the hassle of putting implicit parameter lists everywhere. Additionally, you can have implicit conversions between implicit contexts, so your library can pass along and adapt a local context without burdening the developer and without relying on global scope.

I don’t want to repeat too much of the talk that Odersky gave at Scala Days 2017, but as a teaser, here is some syntax that Dotty enables by including the notion of implicit functions:

case class Table(rows: Seq[Row])
case class Row(cells: Seq[Cell])
case class Cell(content: String)

class TableCtx(var rows: Seq[Row] = Seq.empty) {
  def add(row: Row): this.type = {
    this.rows :+= row
    this
  }
}

class RowCtx(var cells: Seq[Cell] = Seq.empty) {
  def add(cell: Cell): this.type = {
    this.cells :+= cell
    this
  }
}

object Tables {
  def table(mutate: implicit TableCtx => Unit): Table = {
    val table = new TableCtx
    mutate.explicitly(table)
    Table(table.rows)
  }

  def row(mutate: implicit RowCtx => Unit)(implicit table: TableCtx): Unit = {
    val row = new RowCtx
    mutate.explicitly(row)
    table.add(Row(row.cells))
  }

  def cell(value: String)(implicit row: RowCtx): Unit = {
    row.add(Cell(value))
  }

  def main(args: Array[String]): Unit = {
    val example = table {
      row {
        cell("1")
        cell("2")
      }
      row {
        cell("2")
        cell("4")
      }
    }
    println(example)
  }
}

The result of running this program is an immutable table built by a mutable DSL.

Table(List(Row(List(Cell(1), Cell(2))), Row(List(Cell(2), Cell(4)))))

As you can imagine, there are many use cases for passing context like this. You can pass implicit database sessions, request sessions, compiler-checked permission and effect capabilities (CanWriteToDisk, CanChargeCard, CanSendEmail), as well as configs. These contexts can be built to contain the dependencies for performing certain functions that you don’t want to hard-code into global scope or explicitly define at the start of each function.

You’ll notice a couple things about implicit functions.

You don’t need to name the implicit arguments at the start of the function. Instead of:

val fn: X => Unit = { implicit x => doSomethingWithImplicitArgumentX() }

You can just write:

val fn: implicit X => Unit = { doSomethingWithImplicitArgumentX() }

Passing explicit arguments to implicit functions may require calling .explicitly

NOTE: Although this was mentioned in the Scala Days 2017 keynote, it doesn’t appear to be implemented in Dotty yet. The feature is still under discussion, as it would require a lot of breaking changes from Scala 2.

Since passing implicit arguments explicitly can conflict with returning functions (aka. currying), we have to be explicit about calling them to avoid ambiguities.

The following code still works in Dotty, but it wouldn’t if the change were adopted to require .explicitly:

def row(mutate: implicit RowCtx => Unit)(implicit table: TableCtx): Unit = {
  val row = new RowCtx
  mutate(row)
  table.add(Row(row.cells))
}

Instead, we would have to pass our explicit arguments like so:

def row(mutate: implicit RowCtx => Unit)(implicit table: TableCtx): Unit = {
  val row = new RowCtx
  mutate.explicitly(row)
  table.add(Row(row.cells))
}

The other way to pass arguments is to just make the argument implicit before calling the implicit function.

def row(mutate: implicit RowCtx => Unit)(implicit table: TableCtx): Unit = {
  implicit val row = new RowCtx
  mutate
  table.add(Row(row.cells))
}

This seems to be the preferred method in Dotty right now, but I hope the .explicitly method will make its way into the language. The syntactic corner cases brought about by implicits put up a barrier when deciding whether to use implicits or not. While passing implicit args the same way you would explicit args has its appeal, the end result is usually confusion about when to use one or the other. If implicits are to be more widely used throughout Scala, I think there need to be fewer ambiguities at the call site.

Implications

The biggest takeaway from this addition of implicit functions is that the implicit keyword has now made its way into the type system. This means you can abstract over it in ways similar to how functional programming libraries abstract over functions. There are some caveats: namely, you cannot treat an implicit function the same as you would a function object (i.e. calling .compose or .andThen to get a new implicit function). This is so that the Scala compiler can optimize implicit functions into methods with implicit arguments. However, you can convert implicit functions into normal functions by annotating them as functions or passing them to a function that expects an explicit function argument.

After some tinkering with composing implicit functions, I started to realize that since you don’t have to care about the order of implicit arguments, you can just return a block of code with the correct implicit function type on the left-hand side and use all the arguments as if they were available in implicit scope. If you depend on the result of another implicit function, you can easily define the result of that computation as implicit or use a loan pattern to nest implicit function bodies with non-conflicting or coherent implicit type objects in scope.

class Session
class DBConnection

object DBConnection {
  def fromSession(implicit session: Session): DBConnection = new DBConnection
}

case class User(id: String)
case class Data(value: String)

def callDatabase(userId: String): implicit Session => User = User(userId)
def callWebServer(user: User): implicit DBConnection => Data = Data(s"About user ${user.id}")

// Note: You could of course utilize the session and connection as implicits to a database library
def getUserFromDB(id: String): implicit (Session, DBConnection) => Data =
  callWebServer(callDatabase(id))

val data1: Data = getUserFromDB("1") // compiles only if an implicit session and connection are in scope

Functional Programming with Kleisli versus Implicits

Functional programming in Scala has always had to work around the object-oriented features of Scala and its full Java interoperability. Many of the mechanisms for doing this are built on top of higher-kinded types. For example, the functionality associated with functors can be encoded by defining a trait Functor[F[_]] in which F[_] is some generic type constructor. You can implement the trait Functor with something like Option, which takes a type T and produces a type Option[T].

import scala.language.higherKinds

trait Functor[F[_]] {
  def map[A, B](a: F[A], fn: A => B): F[B]
}

object Functor {
  implicit object FunctorOption extends Functor[Option] {
    override def map[A, B](a: Option[A], fn: A => B): Option[B] = a.map(fn)
  }
  implicit object FunctorSeq extends Functor[Seq] {
    override def map[A, B](a: Seq[A], fn: A => B): Seq[B] = a.map(fn)
  }
}

You can now abstract over the capability to call .map on any type, even if the types share no superclass and come from separate artifacts:

// applies the implicit conversion from Numeric[A] to Numeric.Ops, which defines the + method
import Numeric.Implicits._

def addOne[F[_]: Functor, A: Numeric](f: F[A]): F[A] =
  implicitly[Functor[F]].map(f, (a: A) => a + implicitly[Numeric[A]].one)

addOne(Option(1))          // Some(2)
addOne(None: Option[Int])  // None
addOne(Seq(1, 2))          // Seq(2, 3)
addOne(Seq.empty[Int])     // Seq()

This powerful abstraction mechanism is the basis for much of Cats and Scalaz. Higher-kinded types and implicit derivation of typeclasses can solve almost any issue, but at times the solutions introduce some unintended complexity.

For example, the ReaderT monad (aka Kleisli) is often used for injecting dependencies at the points where they are needed to compute some result, or for representing that calling a certain function requires performing some effect or state change that you want to capture and propagate to the surface, so you can customize when and how it is performed. This blog post demonstrates some of the options for dependency injection in Scala (pre-Dotty) and explains why you should adopt the Reader monad pattern.

The problem with using the ReaderT monad is that monads are best at representing sequential transformations where each step depends on the result of the previous. When it comes to injecting dependencies or representing the capability to perform certain effects, the order in which these things are provided is often irrelevant. When making changes to your code, especially when combining effects and dependency requirements using product types, you have to maintain the correct order of these types for your code to compile. Failure to do so can cause compiler errors, and if you aren’t careful, it can affect binary compatibility.

For example, let’s take another tour of dependency injection in Scala 3 using some familiar types from above, with the addition of the Reader monad, also known as a Kleisli where the M is fixed to the identity type constructor Id:

final case class Kleisli[M[_], A, B](run: A => M[B])

type Id[A] = A
type ReaderT[M[_], A, B] = Kleisli[M, A, B]
type Reader[A, B] = ReaderT[Id, A, B]

// therefore, for some types A and B:
def r: Reader[A, B] = ???
r.run // A => B

The fact that a Reader can be abstracted into a higher-kinded ReaderT or Kleisli is important for when you want to combine multiple Readers, but I’m leaving that out for now to demonstrate the simple case.
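To make the flattened definitions above concrete, here is a minimal hand-rolled Reader for the simple case (no scalaz/cats dependency; the names are my own) showing that run is just the delayed A => B:

```scala
// A minimal Reader: `run` injects the dependency only when the whole
// computation is finally executed, so composition happens before injection.
final case class Reader[A, B](run: A => B) {
  def map[C](f: B => C): Reader[A, C] = Reader(a => f(run(a)))
  def flatMap[C](f: B => Reader[A, C]): Reader[A, C] = Reader(a => f(run(a)).run(a))
}

case class Config(name: String)

val getName: Reader[Config, String] = Reader(_.name)
val greet: Reader[Config, String] = getName.flatMap(n => Reader(_ => s"Hello, $n"))
```

Nothing executes until greet.run(Config(...)) is called, which is exactly the "returns a function to be called later" property discussed below.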

// classic constructor injection
// - doesn't allow customization between method calls
// - executes operation immediately upon method invocation
class UserRepo(db: DB) {
  def add(user: User)(implicit s: Session): Future[Unit] = {
    implicit val c = db.getConnection()
    // use connection
  }
}

We can get a lot done with the classic pattern. It handles creating connections and you can provide a stubbed implementation of the DB for testing purposes. The problem is that it assumes you want a new connection each time, and if you want to stub out the connection you have to stub out the DB first. You can avoid this by adding the connection as an implicit as well.

// implicit parameters
// - all dependencies injected at call site
// - executes operation immediately upon method invocation
object UserRepo {
  def add(user: User)(implicit c: DBConnection, s: Session): Future[Unit] = {
    // use connection + session explicitly or implicitly
  }
}

The problem here (as acknowledged by the blog post linked above) is that these implicit parameter lists are cumbersome to write, and there is no way to use type aliases or any other form of abstraction to cut down on the boilerplate (in Scala 2.x, that is). Additionally, you have no control over when this method starts the operation.

Let’s take a look at the Reader monad option.

// reader monad injection
// - all dependencies injected at call site
// - returns a function to be called later
// - order of dependencies must be explicitly handled correctly by caller
object UserRepo {
  import scalaz.Reader

  // should it be (DBConnection, Session) or (Session, DBConnection)?
  // this matters for the caller because they are probably flatMapping over a monad
  // and if the two don't line up, it won't compile.
  def add(user: User): Reader[(DBConnection, Session), Future[Unit]] = {
    Reader { case (conn, session) =>
      // use connection + session explicitly
      implicit def c = conn
      implicit def s = session
      // or implicitly
      ???
    }
  }

  // if you have many methods with similar signatures,
  // you can simplify things with a simple type alias
  // NOTICE: This must have the same type as UserRepo.add
  type DBOp[R] = Reader[(DBConnection, Session), Future[R]]

  def findById(id: Int): DBOp[Option[User]] = {
    Reader { case (conn, session) =>
      // use connection + session explicitly
      implicit def c = conn
      implicit def s = session
      // or implicitly
      ???
    }
  }
}

val act = Reader { args: (DBConnection, Session) =>
  for {
    _ <- UserRepo.add(User(1)).run(args)
    u <- UserRepo.findById(1).run(args)
  } yield {
    println(s"Found $u")
  }
}

act.run((db.getConnection(), session))
act.run((session, db.getConnection())) // doesn't compile

This pattern works well when the number of dependencies is low and you don’t need to combine dependencies from too many different places. It only requires paying the cost of writing everything inside of for-yield comprehensions or flatMaps . The biggest pain point is that the order of dependencies must be meticulously managed and it is not easy to ignore these details from the caller’s point of view.

Let’s look at what Scala 3 gives us:

// implicit functions
// - all dependencies injected at call site
// - executes immediately OR returns a function to be called later (similar to call-by-name)
//   depending on the return type expected
// - order of dependencies doesn't need to be handled by caller
object UserRepo {
  // Doesn't matter if we return (DBConnection, Session) or (Session, DBConnection);
  // the compiler will handle crisscross arguments by type
  def add(user: User): implicit (DBConnection, Session) => Future[Unit] = {
    // use connection + session implicitly
    implicit (c, s) =>
      // or explicitly (uncommon)
      ???
  }

  // if you have many methods with similar signatures,
  // you can simplify things with a simple type alias
  // NOTICE: The argument order here is different than for UserRepo.add and that's okay
  type DBOp[R] = implicit (Session, DBConnection) => Future[R]

  def findById(id: Int): DBOp[Option[User]] = {
    // use connection + session implicitly
    implicit (s, c) =>
      // or explicitly (uncommon)
      ???
  }
}

// compiles because the return type is explicitly specified as an implicit function
def exec: implicit (DBConnection, Session, ExecutionContext) => Future[Unit] =
  for {
    _ <- UserRepo.add(User(1))
    // order of the above arguments (Session, DBConnection) doesn't matter
    u <- UserRepo.findById(1)
    // although the order of arguments is different than UserRepo.add, it doesn't complain
  } yield {
    println(s"Found $u")
  }

// triggers implicit search for session and db connection, because no explicit type is
// specified for the result and the compiler assumes that you want the result of
// executing exec with the implicit arguments
def result = exec // does not compile

// callLaterInClosure is a closure that now must be explicitly invoked
val callLaterInClosure: () => Future[Unit] = () => {
  import ExecutionContext.Implicits.global
  implicit val s: Session = new Session
  implicit val c: DBConnection = new DBConnection
  exec // triggers implicit search for session and db connection to pass to exec
}

callLaterInClosure() // requires no implicits to call

Conclusion

With ImplicitFunction, we can see that a simpler, more stable, faster, and more flexible solution to this problem will soon be available to Scala developers. The order of the implicits in any of these methods – from the point at which they are injected to the point at which they are used – can be ignored by the caller. Downstream changes to the order need not have any effect on the code, and thus on the binary compatibility, of upstream callers. And last, but not least, the team at EPFL ran performance tests and found a 7x increase in performance when switching from ReaderT monads to implicit functions.

What are some of the other patterns that these new language features will unlock? Can implicit functions be used to simplify other higher-kinded abstractions?

Scala is often seen as a stepping stone from object-oriented programming to functional programming, with Haskell seen as the destination. If that were the case, why even use Scala? You could forgo the OO stuff, leap past all the inconsistencies and unsafe baggage carried over from Java, and go straight to Haskell.

I think Scala is showing new ways to combine these paradigms and, I would argue, reimagining the paradigms themselves. Scala is merging the world of stateless functions attached to modules with dependency-capturing classes and stateful objects, using implicits to allow library developers to smooth out the syntax. With Scala 3, we could start to see new patterns that utilize these syntactic features to write code that combines the binary compatibility and information hiding of OOP with the stateless and higher-order functional patterns of FP, in ways that are more performant and less complex than their Haskell-looking counterparts.

Scala 3 is shaping up to be a much simpler language than Scala 2 by removing the warts and allowing people to do more of what they ultimately want, without having to jump through hoops to do it. I hope this inspires you to think about how you can use this new language feature and others to write more functional code without paying too heavy a price on the learning curve for higher-order abstraction.