In a legacy project I work on, for performance reasons — mainly IDE-related — we keep Slick’s generated schema code in a separate project. It is published as a jar file to a Nexus repository and added as a dependency to other modules. Before automation, publishing the artifact was like running through a minefield. Let’s see what changes I introduced.

The problem

Before the optimisation, generating a new schema and publishing it to a repository required the following steps:

Write a migration (we use Flyway — which is an excellent tool — for DB versioning) in plain SQL

Uncomment the block of code in the build.sbt script responsible for applying migrations to the DB schema

Start a local DB — using docker

Run sbt tasks: clean followed by flywayMigrate

Uncomment the dependencies used for source code generation

Add a SchemaGenerator application class that will generate the code based on the latest schema version

Run the application added in the previous step

Stash the changes that affected build.sbt and the code (non related to the DB model)

Copy the file from out/org/opal/db/model/Tables.scala to the package org.opal.db.model

Publish a new version
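The SchemaGenerator mentioned above was a small throwaway application. A minimal sketch of such a class, using Slick’s built-in code generator — the profile, JDBC URL, output directory, and credentials below are illustrative assumptions, not the project’s actual values:

```scala
// One-off schema generator, assuming a local PostgreSQL database.
// All connection details below are placeholders.
object SchemaGenerator extends App {
  slick.codegen.SourceCodeGenerator.main(
    Array(
      "slick.jdbc.PostgresProfile",       // Slick profile
      "org.postgresql.Driver",            // JDBC driver class
      "jdbc:postgresql://localhost/opal", // DB URL (placeholder)
      "out",                              // output directory
      "org.opal.db.model",                // output package
      "user",                             // DB user (placeholder)
      "password"                          // DB password (placeholder)
    )
  )
}
```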

Phew… Do I really have to explain why this is an error-prone process which, in addition, is far from the standards we have for development and devops in the 21st century? Maybe just three reasons: commenting/uncommenting, stashing and popping, being scared all the time…

The idea

Despite the fact that my first thoughts were like: #$%#$^#$@, I had some plan:

There’s no need to comment/uncomment the plugins — I’ll leave them in the script

Adding and removing the code generator is problematic — I’ll add it permanently and exclude it from the published jar file

The same applies to the dependencies used for schema code generation — I’ll leave them in the build and exclude them from the POM manifest (since I don’t want these dependencies to become transitive)

Also, running the SchemaGenerator manually might be tricky — is there any way to run a class via a task defined in build.sbt?

Copying after generation? Please… You can generate directly to an appropriate package
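As a sketch of the POM-filtering idea: sbt’s pomPostProcess setting lets you rewrite the generated POM before publishing. Assuming the codegen-only dependencies can be recognised by their artifact id (the slick-codegen prefix below is an assumption about what needs stripping), something like this in build.sbt would drop them:

```scala
import scala.xml.{Elem, Node, NodeSeq}
import scala.xml.transform.{RewriteRule, RuleTransformer}

// Drop codegen-only dependencies from the published POM so they
// do not leak to downstream projects as transitive dependencies.
pomPostProcess := { node: Node =>
  new RuleTransformer(new RewriteRule {
    override def transform(n: Node): NodeSeq = n match {
      case e: Elem if e.label == "dependency" &&
          (e \ "artifactId").text.startsWith("slick-codegen") =>
        NodeSeq.Empty // strip this <dependency> entry
      case other => other
    }
  }).transform(node).head
}
```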

This idea was already a big step up in quality, but after a while I thought: “I can still do better”.

The solution

Now, instead of commenting/uncommenting I’ve added the following piece of code:
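A sketch along these lines, assuming the flyway-sbt plugin with credentials overridable via JVM system properties — the property names and defaults here are placeholders, not the project’s real values:

```scala
// Flyway settings stay in build.sbt permanently; credentials can be
// overridden from the command line (e.g. sbt -Ddb.url=... flywayMigrate),
// so the script itself never needs to change.
flywayUrl      := sys.props.getOrElse("db.url", "jdbc:postgresql://localhost/opal")
flywayUser     := sys.props.getOrElse("db.user", "user")
flywayPassword := sys.props.getOrElse("db.password", "password")
flywayLocations := Seq("filesystem:src/main/resources/db/migration")
```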

With it, I can even change the DB credentials without modifying the script itself — OCP, sounds familiar?

When it comes to adding SchemaGenerator, then excluding it, adding dependencies that will not be published — thanks to pomPostProcess — and… what else do I need to automate artefact publishing? Instead of all this code I can use just a single plugin, slick-codegen, and configure it:
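A minimal configuration sketch for the tototoshi/sbt-slick-codegen plugin — the connection details, package name, and excluded table below are illustrative assumptions matching the flow described in this post:

```scala
// project/plugins.sbt (pick the current version):
// addSbtPlugin("com.github.tototoshi" % "sbt-slick-codegen" % "<version>")

// build.sbt — all connection details and names below are placeholders
enablePlugins(CodegenPlugin)

slickCodegenDatabaseUrl      := sys.props.getOrElse("db.url", "jdbc:postgresql://localhost/opal")
slickCodegenDatabaseUser     := sys.props.getOrElse("db.user", "user")
slickCodegenDatabasePassword := sys.props.getOrElse("db.password", "password")
slickCodegenDriver           := slick.jdbc.PostgresProfile
slickCodegenJdbcDriver       := "org.postgresql.Driver"
slickCodegenOutputPackage    := "org.opal.db.model"
slickCodegenOutputDir        := file("src/main/scala") // generate straight into the package
slickCodegenExcludedTables   := Seq("flyway_schema_history") // skip Flyway's bookkeeping table
```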

You may find it useful to establish a dependency that runs code generation before compilation of the sources. You can do this with a single line, by adding sourceGenerators in Compile += slickCodegen to your build.sbt. In my particular case, I didn’t need this feature. I have a slightly different flow: first apply the migration, then run code generation and commit. The artifact is published on the CI server, so no compilation is done locally.

Recap

No code to maintain, a clean build.sbt, no redundant dependencies, no steps to remember, no fear — finally! Just two simple steps:

start docker

run sbt tasks
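In shell form, the whole flow is roughly as follows — the image name, database name, and credentials are assumptions matching the placeholder configuration above:

```shell
# 1. Start a throwaway local database (placeholder image/credentials)
docker run -d --rm --name opal-db -p 5432:5432 \
  -e POSTGRES_DB=opal -e POSTGRES_PASSWORD=password postgres

# 2. Apply migrations, then regenerate the Slick model
sbt clean flywayMigrate slickCodegen

# 3. Commit the regenerated Tables.scala; CI publishes the artifact
```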

Automate all the things! Do you know any other plugins that help to automate your day-to-day work? Share them!

The sample project can be found here.