Update: Added Squeryl, thanks to @JoeZulli for the heads-up

Update 2: Added link to latest Squeryl release for Scala 2.10 and 2.9.2

I recently started working with relational databases again from within Scala, after a while in JSON-land (mostly MongoDB and elasticsearch). During that time, I’d been happily working with lightweight libraries / DSLs that take care of case class serialization, such as the impeccable Salat and elastic4s.

So I started looking for a tool that would let me achieve the following:

library optimized for Scala (easy to add to an SBT workflow, ideally with a wrapper for the Play framework)

easy mapping to and from case classes

control over the resulting SQL

ability to take care of CRUD operations without too much dialogue

ability to build finders quickly (note: I’m talking about down-to-earth, readable finders, such as findById and findByEmail, not about monstrosities such as findByAgeBetweenEighteenAndTwentySevenAndWithColorBlue – I don’t really know where this comes from, but I’ve seen it, and it’s a terrible, terrible idea)

last but not least: a certain level of maturity, and the perspective that it will be maintained in the future (unlike e.g. the once promising Circumflex ORM, which appears to be no longer maintained)
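To make the finder point concrete, here is the kind of API I have in mind, sketched against an in-memory stand-in for a table (all names here are made up for illustration, this is not any particular library's API):

```scala
case class User(id: Long, email: String, name: String)

// An in-memory stand-in for a table, only to illustrate the shape of the API:
// a couple of small, readable, single-purpose finders.
class UserRepository(users: Seq[User]) {
  def findById(id: Long): Option[User] = users.find(_.id == id)
  def findByEmail(email: String): Option[User] = users.find(_.email == email)
}

val repo = new UserRepository(Seq(
  User(1, "john@lennon.org", "John"),
  User(2, "paul@example.org", "Paul")))

val john = repo.findByEmail("john@lennon.org")
```

Against a real database, each finder would of course run a query instead of scanning a sequence, but the surface stays just as small.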

The trouble is that, just as with JSON libraries or HTTP client libraries, it seems to be quite fashionable in Scala-land to roll your own library for relational database access, each implementation coming with its own set of advantages and disadvantages.

I’ve spent some time looking around, and here are my findings (disclaimer: this is a quick tour – if you want an in-depth analysis, I suggest you go and try the libraries out for yourself). Also, sorry if I forgot a library – please let me know if I did.

In alphabetical order:

Activate

The Activate framework acts as a wrapper around a number of persistence technologies, including JDBC and MongoDB. What I found particularly interesting about its design is the use of software transactional memory (STM), which helps overcome tricky concurrency problems, such as concurrent bank transactions.
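The gist of the STM approach can be sketched in plain Scala with an optimistic retry loop (a toy illustration of the idea, not Activate's implementation):

```scala
import java.util.concurrent.atomic.AtomicReference
import scala.annotation.tailrec

// Both balances live in a single immutable value, so a transfer is all-or-nothing.
case class Accounts(alice: BigDecimal, bob: BigDecimal)

val accounts = new AtomicReference(Accounts(alice = 100, bob = 0))

// If another thread changed the state in between, retry the whole transfer.
// This optimistic read-compute-commit loop is the core idea behind STM.
@tailrec
def transfer(amount: BigDecimal): Unit = {
  val current = accounts.get()
  val updated = Accounts(current.alice - amount, current.bob + amount)
  if (!accounts.compareAndSet(current, updated)) transfer(amount)
}

transfer(30)
```

A real STM tracks reads and writes of many references per transaction, but the commit-or-retry discipline is the same.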

It rolls its own query language (but is compatible with Slick queries), has a Play integration and solid documentation from which it is easy to get started.

Inserting:

```scala
transactional {
  val person = new Person("John", "Lennon", "john@lennon.org")
}
```

Mapping:

```scala
class Person(
  var name: String,
  var surname: String,
  var email: String) extends Entity
```

Querying:

```scala
select[Person] where(_.name :== "John", _.email :== "john@lennon.org")
```

(it’s the extends Entity part which does most of the magic, as you may have guessed, and as explained here).

The development is very active.

Anorm

Anorm rocked the boat a while back by declaring that ORMs are a bad idea (it’s been done before) and proposing an alternative that goes back to the roots – really quite close to them, in fact – by letting you write your own SQL (and not only a little, but a lot). One point of the argumentation that I think definitely makes sense: going through the burden of abstracting both mappings and the query language to be “database independent” may sound sexy in theory, but in practice I’ve never once switched the underlying DBMS once a project started. On the other hand, I’ve very often had to make use of specific features of a DBMS, and then any kind of abstraction layer was very much in the way of what I was trying to do.

At the moment, Anorm still ships as part of Play, but this will change: Anorm should get its own repository, and the default for the upcoming Play release will be Slick. I haven’t tried it, but I think it is already possible to use Anorm as a separate dependency – at least, the sources are separate.

So what does it look like? As promised, it involves some SQL:

Inserting:

```scala
val id: Option[Long] =
  SQL("insert into City(name, country) values ({name}, {country})").
    on('name -> "Cambridge", 'country -> "New Zealand").
    executeInsert()
```

Mapping:

```scala
val mapping = {
  get[String]("name") ~
  get[String]("country") map {
    case name ~ country => Country(name, country)
  }
}
```

Querying:

```scala
def allCountries(): List[Country] =
  SQL("select * from country").as(mapping *)
```

As you see, this is rather verbose (ok, maybe it’s not too visible here, but take a table with 20 columns, and you’ll see what I’m talking about). The good news is that there’s already a solution to some of the verbosity which makes use of string interpolation.
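To see how interpolation cuts down the boilerplate, here is a toy interpolator (not Anorm's actual implementation, only the mechanics): the interpolated values are captured as parameters rather than spliced into the query string.

```scala
// A hypothetical query value, only to show what an sql"..." interpolator produces.
case class Query(statement: String, parameters: Seq[Any])

implicit class SqlInterpolation(val sc: StringContext) {
  def sql(args: Any*): Query =
    // Join the literal parts with JDBC-style placeholders and keep the
    // interpolated values separate, safe from SQL injection.
    Query(sc.parts.mkString("?"), args)
}

val name = "Cambridge"
val country = "New Zealand"
val q = sql"insert into City(name, country) values ($name, $country)"
// q.statement: "insert into City(name, country) values (?, ?)"
// q.parameters: Seq("Cambridge", "New Zealand")
```

Compared to the `on('name -> ...)` style, each value is named exactly once, at the place where it is used.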

Also, the tool itself can easily be extended to support custom types and the like.

ScalikeJDBC

A library I didn’t know about previously is ScalikeJDBC, which, very much in the fashion of Anorm, stays close to SQL. An interesting aspect is that it adds support for non-blocking database access in a JDBC-like fashion (for the time being, only PostgreSQL and MySQL are supported). It has a plugin for Play, code generation, a type-safe query DSL, but also plain SQL with string interpolation.

Let’s have a look:

Inserting:

```scala
Seq("Alice", "Bob", "Chris") foreach { name =>
  sql"insert into members (name, created_at) values (${name}, current_timestamp)".update.apply()
}
```

Mapping:

```scala
import org.joda.time._

case class Member(id: Long, name: Option[String], createdAt: DateTime)

object Member extends SQLSyntaxSupport[Member] {
  override val tableName = "members"
  def apply(rs: WrappedResultSet) = new Member(
    rs.long("id"),
    rs.stringOpt("name"),
    rs.dateTime("created_at"))
}
```

Querying:

```scala
val members: List[Member] =
  sql"select * from members".map(rs => Member(rs)).list.apply()
```

Once you get over the use of apply(), the way of working is very straightforward.
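The trailing apply() reflects a deliberate separation: the statement is first built as a plain value and only executed on demand. A toy sketch of that idea (made-up class names, not ScalikeJDBC's actual internals):

```scala
// Building the statement yields a value that describes the work...
class SqlUpdate(statement: String) {
  def update: SqlExecution = new SqlExecution(statement)
}

// ...and apply() is the single point where execution actually happens.
class SqlExecution(statement: String) {
  // The real library would run the statement against a DB session here
  // and return the affected row count; this stub just pretends it did.
  def apply(): Int = 1
}

val rows = new SqlUpdate("insert into members (name) values ('Alice')").update.apply()
```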

The project is under active development and has solid documentation.

Slick

Backed by Typesafe, Slick embraces functional programming as a means to access relational data. Very much like LINQ it integrates directly with the language, letting you write idiomatic (or quasi-idiomatic) Scala to get your data.

At the time of writing this post, Slick 2.0 has just been released, bringing in new features such as a code generator, an improved means of mapping to the * projection (which lets you map to case class hierarchies), a way to work around the 22-field limit of Scala’s case classes by means of HLists, and more.
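The hierarchy mapping matters for wide tables: instead of one flat class with 25 fields, columns can be grouped into nested case classes. A plain-Scala sketch of the idea (the names are made up; in Slick this is wired up through the * projection):

```scala
// Group the columns of a wide "users" table into nested case classes,
// so that no single case class comes near the 22-field limit.
case class Address(street: String, city: String, country: String)
case class Contact(email: String, phone: String)
case class User(id: Long, name: String, address: Address, contact: Contact)

// A flat row, as it would come back from the database...
val row = (1L, "John", "Abbey Road", "London", "UK", "john@lennon.org", "555-1234")

// ...regrouped into the hierarchy.
val user = row match {
  case (id, name, street, city, country, email, phone) =>
    User(id, name, Address(street, city, country), Contact(email, phone))
}
```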

Inserting:

```scala
DB.withSession { implicit s =>
  val p = Person(name = "John", surname = "Doe")
  Persons.persons.insert(p)
}
```

Mapping:

```scala
import scala.slick.driver.MySQLDriver.simple._

case class Person(id: Option[Long] = None, name: String, surname: String)

class Persons(tag: Tag) extends Table[Person](tag, "person") {
  def id = column[Long]("id", O.PrimaryKey, O.AutoInc)
  def name = column[String]("name", O.NotNull)
  def surname = column[String]("surname", O.NotNull)
  def * = (id.?, name, surname) <> (Person.tupled, Person.unapply)
}

object Persons {
  lazy val persons = TableQuery[Persons]
}
```

Querying:

```scala
def findByName(name: String)(implicit s: Session): List[Person] =
  Persons.persons.where(_.name === name).list()
```

As you can see, the mapping does take some effort. You’re really specifying a representation of a table, and can add a number of options on each column (O.PrimaryKey, O.AutoInc, O.NotNull in the example above). Once you’re through this burden, though, you’re ready to go and use the type-safe query DSL.
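The payoff of the effort is that conditions such as `_.name === name` are checked at compile time. The trick behind an operator like `===` can be sketched with a simplified typed column (an illustration of the general technique, not Slick's actual implementation):

```scala
// A simplified typed column: === only accepts values of the column's type,
// and produces a SQL fragment instead of a Boolean.
case class Column[A](name: String) {
  def ===(value: A): String = s"$name = '$value'"
}

val nameCol = Column[String]("name")
val condition = nameCol === "John"  // compiles, yields the fragment "name = 'John'"
// nameCol === 42                   // would not compile: type mismatch
```

Because the comparison happens against a typed column representation, a mistyped column name or a value of the wrong type is rejected by the compiler rather than by the database at runtime.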

Needless to say, Slick is under active development. There’s a Play module for it which also brings integration with Play’s evolution mechanism and generates DDL code.

Squeryl

One library that completely slipped my radar is Squeryl. I had looked into Squeryl some time ago, but for some reason it didn’t show up in any of the search results during my recent survey. From what I understand, Squeryl was one of the first libraries to promote the type-safe query DSL paradigm – at least, the argumentation on the website seems to point in that direction.

Let’s have a closer look:

Inserting:

```scala
val herbyHancock = artists.insert(new Artist("Herby Hancock"))
val ponchoSanchez = artists.insert(new Artist("Poncho Sanchez"))
val theMeters = artists.insert(new Artist("The Meters"))
```

Mapping:

```scala
class Author(
  val id: Long,
  val firstName: String,
  val lastName: String,
  val email: Option[String]) {
  def this() = this(0, "", "", Some(""))
}

object Library extends Schema {
  val authors = table[Author]("AUTHORS")
}
```

Querying:

```scala
class Artist(val id: Long, val name: String) {
  def songs =
    from(MusicDb.songs)(s =>
      where(s.artistId === id)
      select(s))
}
```

The documentation looks very complete, and there’s an extensive test suite in which to find more examples. On the development side of things, my impression is that the project is more in maintenance mode now, as development seems to have quieted down over the past year. The latest stable release is 0.9.5-6 and is available for both Scala 2.10 and 2.9.2. Also, the discussion group is quite active.

Great, now which one to pick?

Now that we’ve looked around a bit, the obvious question seems to be: which one to use for your project – which one did you pick?

Well, as I found shortcomings in all of the tools outlined above, I decided to write my own persistence library, RelaScala, which leverages macro annotations and type lambdas for a better development experience. Nah, I’m just messing with you.

I’ve worked with Anorm and just can’t get over the verbosity it brings, so I wouldn’t want to work with it anymore. If you’re a little less worried about the projects no longer being maintained one day, I think the approaches taken by Activate or ScalikeJDBC are pretty interesting (although Activate is perhaps a bit too much on the abstract side of things for my taste).

For the projects I’m currently working on, I’ve ended up using Slick because of the long-term view (actually, I’ve just recently migrated a project over from Anorm to Slick 2). It took me some time to decide to recommend Slick, because I had worked with it before, and it can also get rather verbose in its own way. Also, for my level of knowledge of Scala, I find the source code somewhat hard to grasp. That being said, the ability to map to hierarchies of case classes makes things a lot easier, and there’s always the option to write plain SQL and thus get back in control.

At this point, I should mention Renato’s work on a generic DAO for Slick which I plan on using in the future.

Bonus: jOOQ

One library that I haven’t mentioned above is an upcoming star in the Java space: jOOQ. Unlike any of the libraries presented here, or any I’ve seen around, jOOQ both provides a type-safe DSL for writing queries and does not abstract over SQL. In other words, it provides database-specific DSLs for writing queries. An interesting side-effect of using jOOQ in Scala is that the dots can be omitted, leading to the following code:

```scala
select (
  T_BOOK.ID * T_BOOK.AUTHOR_ID,
  T_BOOK.ID + T_BOOK.AUTHOR_ID * 3 + 4,
  T_BOOK.TITLE || " abc" || " xy")
from T_BOOK
leftOuterJoin (
  select (x.ID, x.YEAR_OF_BIRTH)
  from x
  limit 1
  asTable x.getName()
)
on T_BOOK.AUTHOR_ID === x.ID
where (T_BOOK.ID <> 2)
or (T_BOOK.TITLE in ("O Alquimista", "Brida"))
fetch
```

(yes, this is actual code, even if it looks very close to plain SQL – more about it here).
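The mechanism behind the dot-free style is Scala’s infix notation: any single-argument method can be written without dots or parentheses. A minimal, made-up builder (all names hypothetical, nothing jOOQ-specific) shows the idea:

```scala
// A made-up mini query builder: every method takes one argument and returns
// a new builder, so calls can be chained infix-style, without dots.
case class QueryBuilder(clauses: Vector[String] = Vector.empty) {
  def select(cols: String): QueryBuilder = copy(clauses :+ s"select $cols")
  def from(table: String): QueryBuilder = copy(clauses :+ s"from $table")
  def where(cond: String): QueryBuilder = copy(clauses :+ s"where $cond")
  def render: String = clauses.mkString(" ")
}

// This line is just QueryBuilder().select(...).from(...).where(...) in disguise.
val q = QueryBuilder() select "id, title" from "book" where "published_in = 2011"
```

jOOQ’s Scala DSL benefits from exactly this: its fluent Java API happens to chain single-argument methods, which Scala then lets you write in a SQL-like, dot-free way.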

The reason I didn’t include jOOQ in the comparison above is that I was looking for a means of mapping to case classes out-of-the-box, in a way that is type-safe with regards to the mapping itself. Let’s take a quick tour nonetheless:

Inserting:

```java
create.
  insertInto(AUTHOR, AUTHOR.ID, AUTHOR.FIRST_NAME, AUTHOR.LAST_NAME).
  values(100, "Hermann", "Hesse");
```

Mapping:

Let’s take a look at the JPA-annotation-based mapping:

```java
// A JPA-annotated POJO class
public class MyBook {

  @Column(name = "ID")
  public int myId;

  @Column(name = "TITLE")
  public String myTitle;
}

// The various "into()" methods allow for fetching records into your custom POJOs:
MyBook myBook = create.select().from(BOOK).fetchAny().into(MyBook.class);
List<MyBook> myBooks = create.select().from(BOOK).fetch().into(MyBook.class);
List<MyBook> myBooks = create.select().from(BOOK).fetchInto(MyBook.class);
```

(more about mapping here)

Querying:

```java
create.selectFrom(BOOK)
      .where(BOOK.PUBLISHED_IN.eq(2011))
      .orderBy(BOOK.TITLE)
```

jOOQ has astoundingly complete documentation – I’ve rarely seen a project with documentation this good and this many examples. It also lets you roll your own mapper, which makes me think it is only a matter of time before someone implements a Scala macro for mapping to case classes.

In my opinion, jOOQ is a refreshingly realistic alternative for RDBMS persistence in the Java and Scala space, especially if you’re not taking a quiet stroll in the flowery fields of a new project but are deep in the trenches with an existing database schema, dealing with queries of 500+ lines of SQL (I try hard to avoid working on such projects, but they are very real).

Conclusion

I hope this tour was useful. If you have any questions, just leave a comment – I’ll try my best to reply.