Photo by Willfried Wende on Pixabay

So, you’re working on a project with domain objects flying around, storing them somewhere, fetching them using identifiers, and handling their data.

Maybe you have read my previous article about the forgotten value of Value Objects, where I presented why it is so beneficial to use them in code.

One such case is an object's identifier. While using a VO in this situation is straightforward, it still feels awkward to manually wrap and unwrap such an object's data when writing it to or reading it from an external system.

Use case

I had a similar scenario in one of my recent projects. Part of it was an implementation of CRUD functionality for users, represented by the following class:
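The original listing is not reproduced here; a minimal sketch of how such a class might look (field and class names are my assumptions):

```java
import java.util.UUID;

// A user entity identified by its own id and by the id of the
// application it belongs to -- both stored as plain UUIDs.
public class User {

    private final UUID id;            // the user's own identifier
    private final UUID applicationId; // the owning application's identifier
    private final String name;

    public User(UUID id, UUID applicationId, String name) {
        this.id = id;
        this.applicationId = applicationId;
        this.name = name;
    }

    public UUID getId() { return id; }
    public UUID getApplicationId() { return applicationId; }
    public String getName() { return name; }
}
```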

As you can see, there are two fields of the same type: UUID . To avoid confusing them when querying for data, we wrapped them in dedicated VOs: UserId and ApplicationId . Hence, our repository methods, instead of looking like this:
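The "before" version, sketched with assumed method names; note that nothing stops a caller from swapping the two UUID arguments:

```java
import java.util.Optional;
import java.util.UUID;
import org.springframework.data.repository.CrudRepository;

// Both parameters are bare UUIDs -- passing them in the wrong order
// compiles fine and fails silently at runtime.
public interface UserRepository extends CrudRepository<User, UUID> {

    Optional<User> findByIdAndApplicationId(UUID id, UUID applicationId);

    void deleteByIdAndApplicationId(UUID id, UUID applicationId);
}
```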

look like the following:
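A sketch of the VO-based version (the wrapper classes are my assumption of their shape; real ones would validate input and override equals/hashCode):

```java
import java.util.Objects;
import java.util.Optional;
import java.util.UUID;
import org.springframework.data.repository.CrudRepository;

// Simple value-object wrappers around UUID.
final class UserId {
    private final UUID value;
    UserId(UUID value) { this.value = Objects.requireNonNull(value); }
    UUID getValue() { return value; }
}

final class ApplicationId {
    private final UUID value;
    ApplicationId(UUID value) { this.value = Objects.requireNonNull(value); }
    UUID getValue() { return value; }
}

// The same repository, but swapping the arguments is now a compile error.
interface UserRepository extends CrudRepository<User, UserId> {

    Optional<User> findByIdAndApplicationId(UserId id, ApplicationId applicationId);

    void deleteByIdAndApplicationId(UserId id, ApplicationId applicationId);
}
```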

The gain in readability at the repository level is significant, and we eliminated a potential colossal bug that could only surface at runtime.

However, we still had to manually extract the data from the VO instances before applying it to a database query inside the repository methods. With every query constructed this way, the task became more and more mundane, and we knew we had to do something about it.

How could we improve the overall “programmer experience” of using VOs even more? How can we configure the database layer so it knows how to deal with those types?

Let’s tweak it

The project I’m referencing is a Java project based on Spring Boot that uses Cassandra to store our data. We manage the database connectivity with Spring Data Cassandra and the Datastax driver.

Spring Data

Repositories defined with Spring Data are pretty simple interfaces, and writing default methods that map the data inside them was not an option for us.

The starting point was tweaking the configuration of the Cassandra driver by extending its base class: AbstractCassandraConfiguration . One of the methods defined there, CustomConversions customConversions() , allows providing a list of custom Spring read/write converters, like the following:
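A minimal sketch of such a configuration class, assuming Spring Data Cassandra 2.x (the keyspace and converter class names are illustrative):

```java
import java.util.List;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.cassandra.config.AbstractCassandraConfiguration;
import org.springframework.data.cassandra.core.convert.CassandraCustomConversions;
import org.springframework.data.convert.CustomConversions;

@Configuration
public class CassandraConfig extends AbstractCassandraConfiguration {

    @Override
    protected String getKeyspaceName() {
        return "my_keyspace"; // assumption: your keyspace name
    }

    // Register custom converters so Spring Data maps our value objects
    // to/from the underlying UUID columns automatically.
    @Override
    public CustomConversions customConversions() {
        return new CassandraCustomConversions(List.of(
                new UserIdWriteConverter(),
                new UserIdReadConverter(),
                new ApplicationIdWriteConverter(),
                new ApplicationIdReadConverter()));
    }
}
```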

The base of a converter is the Converter interface, parameterized with two types: source and target. We have to provide two converters: one transforming a VO into the database format, and a second converting data from the database into a VO instance. Additionally, we can mark them with @WritingConverter or @ReadingConverter , respectively.
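For UserId , the pair could look like this (shown together for brevity; ApplicationId gets an analogous pair):

```java
import java.util.UUID;
import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.ReadingConverter;
import org.springframework.data.convert.WritingConverter;

// Unwraps a UserId into the UUID that Cassandra stores.
@WritingConverter
class UserIdWriteConverter implements Converter<UserId, UUID> {
    @Override
    public UUID convert(UserId source) {
        return source.getValue();
    }
}

// Wraps a UUID read from Cassandra back into a UserId.
@ReadingConverter
class UserIdReadConverter implements Converter<UUID, UserId> {
    @Override
    public UserId convert(UUID source) {
        return new UserId(source);
    }
}
```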

Having such a configuration, we can call repository methods using VOs with no manual wrapping or unwrapping required. That’s a real boost to developer experience and productivity.

Datastax driver

However, that’s not the end of the tweaks. If we run queries with CassandraOperations or AsyncCassandraOperations and raw CQL, the above is not enough: the converters don’t work with this approach. Thus, we have to go deeper, to the Datastax driver and its codec registry.

A codec has the same role as Spring’s converter: it transforms data from one type into another. We can access the registry from the configuration class by overriding the ClusterBuilderConfigurer getClusterBuilderConfigurer() method of AbstractCassandraConfiguration . Inside the method, we can configure the builder of the client connecting to the Cassandra cluster.
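A sketch of that override, assuming the 3.x Datastax driver (the codec class names are the ones introduced below):

```java
import com.datastax.driver.core.CodecRegistry;
import org.springframework.data.cassandra.config.ClusterBuilderConfigurer;

// Inside the class extending AbstractCassandraConfiguration:
@Override
protected ClusterBuilderConfigurer getClusterBuilderConfigurer() {
    return clusterBuilder -> {
        // Register our codecs so the driver can bind/read the VOs directly.
        CodecRegistry registry = new CodecRegistry();
        registry.register(new UserIdCodec(), new ApplicationIdCodec());
        return clusterBuilder.withCodecRegistry(registry);
    };
}
```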

The base class for codecs is the TypeCodec class, parameterized with a single type only: the custom type we’d like to support. A codec provides the logic transforming that type to and from a ByteBuffer . In our case, the type would be UserId or ApplicationId :
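A possible UserId codec, assuming the 3.x driver API; it delegates the heavy lifting to the driver’s built-in UUID codec and only un-/wraps the value object ( ApplicationId would get a twin class):

```java
import java.nio.ByteBuffer;
import java.util.UUID;
import com.datastax.driver.core.DataType;
import com.datastax.driver.core.ProtocolVersion;
import com.datastax.driver.core.TypeCodec;

public class UserIdCodec extends TypeCodec<UserId> {

    private final TypeCodec<UUID> inner = TypeCodec.uuid();

    public UserIdCodec() {
        super(DataType.uuid(), UserId.class); // CQL uuid <-> Java UserId
    }

    @Override
    public ByteBuffer serialize(UserId value, ProtocolVersion protocolVersion) {
        return value == null ? null : inner.serialize(value.getValue(), protocolVersion);
    }

    @Override
    public UserId deserialize(ByteBuffer bytes, ProtocolVersion protocolVersion) {
        UUID uuid = inner.deserialize(bytes, protocolVersion);
        return uuid == null ? null : new UserId(uuid);
    }

    @Override
    public UserId parse(String value) {
        UUID uuid = inner.parse(value);
        return uuid == null ? null : new UserId(uuid);
    }

    @Override
    public String format(UserId value) {
        return value == null ? "NULL" : inner.format(value.getValue());
    }
}
```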

After registration, we can run CQL queries using our custom types directly. Below you can find an example query for a session token’s TTL:
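For example, something along these lines (the table and column names are assumptions for illustration):

```java
// With the codec registered, the value object can be bound as-is --
// no manual userId.getValue() call needed.
String cql = "SELECT ttl(session_token) FROM user_sessions WHERE user_id = ?";
Integer ttl = cqlTemplate.queryForObject(cql, Integer.class, userId);
```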

Wrap up

While I have presented a use case based on Cassandra, you can easily find other places where similar wrappers are viable. For REST endpoints in Spring, you can use Jackson’s custom de-/serializers. In terms of SQL and jOOQ, we have the forced types mechanism available.

By applying custom de-/serializers for our value objects, we gain far more readable and type-safe code. Additionally, we can remove a lot of code responsible for transforming data between our domain and external systems. For such outcomes, it is worth writing a bunch of additional small classes. The effort pays off.

Do you have a similar experience? Or maybe a totally different one? :) Let us know your stories and leave a comment.