This is the third part of my Playing with Play in Java series. Today I will look at database connectivity with the framework. The service I have created so far uses a standard java.util.HashMap instance as a data repository. Such a solution is far from usable in practice, so I am going to check how this can be done with a SQL database.

Connect

It is quite easy to connect to a database with Play. Lightbend provides a dedicated JDBC plugin that can be easily added to the project:

libraryDependencies += javaJdbc

This instruction adds three JARs to the project’s classpath: play-jdbc-api, play-java-jdbc and play-jdbc. The first defines the API implemented by the latter two, in Java and Scala respectively. What confuses me is the presence of the Scala implementation on the classpath. Probably this is because of some internal dependencies between these JARs.

With the JDBC plugin on the classpath, I can now specify the details of a database connection in the application.conf file. A single connection is defined in terms of a datasource. A datasource has a name and a couple of standard properties like the following ones. I think the names are self-explanatory so I will not describe them. The {datasource-name} is just a placeholder for the name of a datasource.

{datasource-name}.driver = org.postgresql.Driver
{datasource-name}.url = "jdbc:postgresql://localhost/todos"
{datasource-name}.username = todos
{datasource-name}.password = todos
{datasource-name}.logSql = false

The JDBC driver for the H2 database is on the classpath already, added by default by Play. When using some other database, you have to add a proper dependency to the project. If you do not, you will get a java.lang.ClassNotFoundException on application startup. I added the PostgreSQL driver to the project since I want to connect to this database.
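Adding the driver is a single line in build.sbt; the version below is only an example and should be replaced with the one matching your PostgreSQL installation:

```scala
libraryDependencies += "org.postgresql" % "postgresql" % "42.2.5"
```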

HikariCP is the connection pool implementation used by Play. It is provided by play-hikaricp. Of course, you can use some other library in your project. The reference.conf file of the play-jdbc plugin contains all possible settings of a connection pool.
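Pool settings are configured per datasource. As an illustration only (verify the exact keys against reference.conf), tuning the pool for the todos datasource could look like this:

```
db.todos.hikaricp.maximumPoolSize = 10
db.todos.hikaricp.connectionTimeout = 30 seconds
```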

Schema

When it comes to using a SQL database in a project, a crucial thing is the management of the database schema. With this, we can easily track changes made to the structure of a database, like adding or deleting tables, modifying them and so on. Play Framework offers a solution in this matter as well. We can use one of two plugins providing schema management: Evolutions, which ships with the framework, or a third-party migration tool such as Flyway.

I am going to focus on the first one since it is provided with the framework already.

To enable the Evolutions plugin in a project, we have to add the following predefined Play module to the dependencies in build.sbt:

libraryDependencies += evolutions

This results in the play-jdbc-evolutions JAR on the project’s classpath. When Play finds scripts in the configuration folder, it automatically applies them to the database. To turn this behaviour off (when running tests, for example), set the play.evolutions.enabled property to false in the configuration.
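In configuration, disabling the plugin is a one-liner:

```
play.evolutions.enabled = false
```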

So, the scripts. The standard place to store Evolutions scripts is the conf/evolutions/{datasource-name} folder. The naming convention assumes the first file is named 1.sql and every next one has the number part incremented by one (2.sql, 3.sql, …).
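For the todos datasource used in this post, the resulting layout would be:

```
conf/evolutions/todos/1.sql
conf/evolutions/todos/2.sql
conf/evolutions/todos/3.sql
```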

The Evolutions scripts contain two sections: Ups for updates to be made and Downs specifying rollback behaviour if something goes wrong. Here is the content of the file with the DDL of the todos table:

# --- Creation of todos table

# --- !Ups

create table "todos" (
    "id" TEXT NOT NULL UNIQUE PRIMARY KEY,
    "name" TEXT NOT NULL,
    "status" TEXT NOT NULL,
    "dueDate" TIMESTAMP NOT NULL
);

# --- !Downs

drop table "todos";

The last thing to do before starting the application with Evolutions is to tell the framework which schema all the changes apply to:

play.evolutions {
    db.todos.schema = public
}

When started for the first time, the plugin automatically detects whether the play_evolutions table exists in the database and, if not, creates it. The table contains information about the scripts already applied to the target database.

Moreover, Evolutions compares the content of the scripts folder with the entries in the play_evolutions table. Because of this, it can tell that some changes still have to be applied to the database. In dev mode, such information is provided after calling an action or endpoint and is presented in two ways: in the application logs and in the browser, if the endpoint was called from there. You can find examples below:

[error] application -
! @71llhmccb - Internal server error, for (GET) [/todos] ->

play.api.db.evolutions.InvalidDatabaseRevision: Database 'todos' needs evolution! [An SQL script need to be run on your database.]

In prod mode, the error is presented in the logs during startup. In contrast to development mode, the application will not start at all here. You will be notified in a way similar to the one below. I omit the stacktrace for better readability.

Oops, cannot start the server. @71lljangp: Database 'todos' needs evolution!

This behaviour is the result of the automatic application of Evolutions scripts to a database being disabled by default. It can be changed by setting the play.evolutions.db.{datasource-name}.autoApply property to true.
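For the todos datasource, the setting in application.conf would be:

```
play.evolutions.db.todos.autoApply = true
```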

If you are interested in more details about the plugin, check its official documentation on Play website.

Query

It seems all pieces of the database configuration are in place. How to run a query then? With the javaJdbc plugin enabled, access to the database is possible through the JDBC API. Let us have a look at what needs to be done to store a single TodoItem.

Play creates an instance of the play.db.Database interface for every datasource defined in the application.conf file. All of them are available for dependency injection. When it comes to injecting the default datasource – the one configured with the default name (it is predefined by Play Framework as default but can be overwritten in the configuration file) – you have to use the javax.inject.Inject annotation on a constructor. Here is the example:

@Inject
public TodoItemSqlRepository(Database db) {
    this.db = db;
}

The code is only slightly different for a named datasource. A datasource is treated as named if it has some name other than the default one. You have to use the play.db.NamedDatabase annotation on a constructor argument to specify exactly which datasource should be injected. Below, you can see the example of using the todos datasource from the todo project:

@Inject
public TodoItemSqlRepository(@NamedDatabase("todos") Database db) {
    this.db = db;
}

Now, I can finally write the logic storing a new todo item in the database. First, an instance of a connection is needed. With this in hand, the next step is to prepare a java.sql.Statement (in the example below this is the PreparedStatement class) since all queries in the JDBC world are executed this way. After filling it with the new item’s data I can run the query. The result of the call is the identifier of the created item.

Fetching an item from the database is very similar to the process described above. There are two differences: I have the identifier of a single item in hand, and it has to be added to the statement; and the result of the query is the item’s data.

Here is the code that implements all the steps of these two actions:

public Option<TodoItemId> saveItem(TodoItem item) {
    return Try.of(() -> insertNewItem(item))
        .map(id -> new TodoItemId(id))
        .onSuccess(id -> LOGGER.info("New todo item stored: {} {}", item.getId(), item.getName()))
        .onFailure(ex -> LOGGER.error("Cannot store todo item: {}", item.getName(), ex))
        .getOption();
}

public Option<TodoItem> findItem(String id) {
    return Try.of(() -> fetchItem(id))
        .onSuccess(item -> LOGGER.info("Todo item found: {}", item))
        .onFailure(ex -> LOGGER.error("Cannot find todo item with id: {}", id, ex))
        .getOption();
}

private String insertNewItem(TodoItem item) throws SQLException {
    try (Connection conn = db.getConnection()) {
        try (final PreparedStatement stmt = conn.prepareStatement(INSERT_TODO_ITEM)) {
            stmt.setString(1, item.getId());
            stmt.setString(2, item.getName());
            stmt.setString(3, item.getStatus().name());
            stmt.setTimestamp(4, asTimestamp(item.getDueDate()));
            // an INSERT is executed with executeUpdate(), not executeQuery()
            stmt.executeUpdate();
        }
    }
    return item.getId();
}

private TodoItem fetchItem(String id) throws SQLException {
    try (Connection conn = db.getConnection()) {
        try (final PreparedStatement stmt = conn.prepareStatement(FIND_BY_ID)) {
            stmt.setString(1, id);
            try (final ResultSet result = stmt.executeQuery()) {
                // the cursor starts before the first row, so it has to be moved forward first
                if (!result.next()) {
                    throw new SQLException("No todo item found with id: " + id);
                }
                String status = result.getString("status");
                String name = result.getString("name");
                Timestamp dueDate = result.getTimestamp("dueDate");
                return new TodoItem(id, name, asDate(dueDate), TodoItemStatus.valueOf(status));
            }
        }
    }
}
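The INSERT_TODO_ITEM and FIND_BY_ID constants are not shown in the post; a sketch matching the todos schema (not necessarily the original code) could look like this:

```sql
-- INSERT_TODO_ITEM
insert into "todos" ("id", "name", "status", "dueDate") values (?, ?, ?, ?);

-- FIND_BY_ID
select "name", "status", "dueDate" from "todos" where "id" = ?;
```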

I do not want to implement the remaining API of the SQL repository. After writing the saving and fetching actions, I have to say that working with the JDBC API in Java is not the coolest thing one could do, even with a significant dose of Javaslang support. This is why, in the next part of the series, I will look at object-relational mapping possibilities in the Play world.
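The asTimestamp and asDate helpers referenced in the repository code above are also not shown in the post; a plausible stdlib-only reconstruction converting between java.util.Date and java.sql.Timestamp would be:

```java
import java.sql.Timestamp;
import java.util.Date;

// Hypothetical reconstruction of the helpers used by TodoItemSqlRepository
class DateConversions {

    // JDBC expects java.sql.Timestamp for TIMESTAMP columns
    static Timestamp asTimestamp(Date date) {
        return new Timestamp(date.getTime());
    }

    // the domain model works with plain java.util.Date
    static Date asDate(Timestamp timestamp) {
        return new Date(timestamp.getTime());
    }
}
```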

Tests

But before you go – what about tests? How can I set up the test environment to check whether the SQL repository actually works?

With the existing RoutesTest class, I could easily provide test configuration parameters manually when the application is being created. Here is the overridden provideApplication() method:

@Override
protected Application provideApplication() {
    final Application build = new GuiceApplicationBuilder()
        .configure("db.todos", ImmutableMap.of(
            "url", "jdbc:h2:mem:todos;MODE=PostgreSQL;DATABASE_TO_UPPER=false",
            "driver", "org.h2.Driver",
            "username", "sa",
            "password", ""
        ))
        .configure("play.evolutions.db.todos.schema", "")
        .build();
    return build;
}

The other way to provide a test configuration is to place it in a configuration file. In this file, we can override production settings with the values required in tests. Next, it can be loaded as an instance of play.Configuration into a test application. The following snippets present the content of such a file and how it can be loaded for testing, respectively:

include "application.conf"

play.evolutions {
    db.todos.schema = ""
}

db {
    todos.driver = org.h2.Driver
    todos.url = "jdbc:h2:mem:todos;MODE=PostgreSQL;DATABASE_TO_UPPER=false"
    todos.username = "sa"
    todos.password = ""
}

@Override
protected Application provideApplication() {
    final Application build = new GuiceApplicationBuilder()
        .loadConfig(new Configuration(ConfigFactory.load("application-test.conf")))
        .build();
    return build;
}

It is possible to test TodoItemSqlRepository alone, of course. To do this, we have to provide some database instance to the repository’s constructor on our own. Two things are important here, though. Firstly, we have to take care of shutting the database down. Secondly, the Evolutions scripts have to be applied and cleaned up manually too. Here is an example of doing this in JUnit tests:

@Before
public void setup() {
    db = Databases.inMemory("todos",
        "jdbc:h2:mem:todos;MODE=PostgreSQL;DATABASE_TO_UPPER=false",
        ImmutableMap.of());
    Evolutions.applyEvolutions(db);
    repository = new TodoItemSqlRepository(db);
}

@After
public void cleanup() {
    Evolutions.cleanupEvolutions(db);
    db.shutdown();
}

More on testing an application that uses a database can be found here.

At the end

I have to say the addition of a database was a really nice exercise for me. Changing the application code was quite simple. However, I had moments when I struggled with the configuration of tests so that the database is set up properly.

Applying Javaslang idioms to the JDBC API was interesting as well, but not as enjoyable as I expected. This is why I left the repository API not fully implemented for SQL at the moment. As I mentioned above, I am going to check the integration of the Play Framework with some ORM in the next part of the series. I hope it will bring more fun than playing with JDBC. 🙂

As usual, you can find the updated code on GitHub.