It’s a best practice to bypass logging during unit tests. Since the tests are run by a machine, the output is wasted I/O, and if you need log output to understand why a test failed, that should be a sign that a new test needs to be written. Logging during tests can even make debugging harder by burying failures in a sea of unrelated messages. (Anyone who has unit tested within a SparkContext has felt the pain of this last point.)

Ideally, then, you’d suppress logging that the application attempts to make during test runs. But how? And how do we do it in the contexts of “library” and “application”?

Using the slf4j-nop JAR

If your project is set up to use SLF4J as a facade to the underlying logging system, you’re most of the way there. SLF4J’s promise is that it provides a consistent API, while runtime behavior is driven by whichever implementation JAR is detected on the classpath. One of the available implementations is the NOP binding, which simply swallows any messages sent to it. The challenge is configuring the classpath correctly.
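To make the facade concrete, here is a minimal sketch of library code logging through the SLF4J API (the `Widget` class and its method are hypothetical; compiling this requires slf4j-api on the classpath). Nothing in the code names a backend — whether the message is printed, formatted, or silently dropped is decided entirely by which implementation JAR is found at runtime:

```scala
import org.slf4j.{Logger, LoggerFactory}

class Widget {
  // Resolved through the SLF4J API; the backend that actually handles
  // the call depends on the implementation JAR on the classpath.
  private val log: Logger = LoggerFactory.getLogger(classOf[Widget])

  def frobnicate(): Unit = {
    // Silently discarded when slf4j-nop is the bound implementation.
    log.debug("frobnicating")
  }
}
```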

As a library

This is the easier case:

```scala
val versions = Map('slf4j -> "1.7.6")

libraryDependencies ++= Seq(
  "org.slf4j" % "slf4j-api" % versions('slf4j) % "provided",
  "org.slf4j" % "slf4j-nop" % versions('slf4j) % "test"
)
```

Here, we specify: compile against slf4j-api, but expect the API’s implementation to be provided by another dependency at runtime. During the test run we use the slf4j-nop implementation. We don’t specify an implementation dependency to use outside of tests; that becomes a transitive dependency that users of the library pick up. If we did specify an implementation, we would be dictating a configuration choice to the application, rather than allowing it to make its own choice (without having to exclude ours).

Also note that the versions Map is just a convenience so we don’t have to repeat the version number. The same effect can be achieved by duplicating the version on each dependency.
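For instance, the same two dependencies written without the Map:

```scala
libraryDependencies ++= Seq(
  "org.slf4j" % "slf4j-api" % "1.7.6" % "provided",
  "org.slf4j" % "slf4j-nop" % "1.7.6" % "test"
)
```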

As an application

The initial set up is similar, but in this case we specify a final implementation to use at runtime. Let’s say we go with Log4J 1.2:

```scala
val versions = Map('slf4j -> "1.7.6")

libraryDependencies ++= Seq(
  "org.slf4j" % "slf4j-log4j12" % versions('slf4j) % "runtime",
  "org.slf4j" % "slf4j-api" % versions('slf4j) % "provided",
  "org.slf4j" % "slf4j-nop" % versions('slf4j) % "test"
)
```

We’re not quite done yet. Unfortunately, dependencies specified in the runtime configuration are also picked up by the test configuration, so during the unit test run both implementations are on the classpath. In that situation, SLF4J makes an arbitrary choice between them (one that can change from run to run, or as versions change). To ensure the NOP implementation is always selected, we actively remove the Log4J implementation from the test classpath:

```scala
(dependencyClasspath in Test) <<= (dependencyClasspath in Test).map(
  _.filterNot(_.data.name.contains("slf4j-log4j12"))
)
```
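The `<<=` operator is sbt 0.13-era syntax; it has since been removed. On newer sbt versions, the same filtering can be sketched with `:=` and `.value` (assuming sbt 1.x):

```scala
// sbt 1.x style: drop the Log4J binding from the test classpath
// so that only slf4j-nop remains for SLF4J to discover.
Test / dependencyClasspath := (Test / dependencyClasspath).value
  .filterNot(_.data.name.contains("slf4j-log4j12"))
```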

Beyond unit tests

In integration or acceptance testing, we don’t expect to hit every edge case, and the scenarios run longer, so logs become more useful for debugging. Here the application case is easier: it simply doesn’t remove the dependency from the new test configurations’ classpaths. The library case is still straightforward, though, since you can bind the desired implementation to the new test configuration:

```scala
"org.slf4j" % "slf4j-log4j12" % versions('slf4j) % "it"
```
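For the `it` configuration to exist, sbt’s predefined `IntegrationTest` configuration has to be wired into the project. A minimal sketch (the project name is illustrative):

```scala
lazy val root = (project in file("."))
  .configs(IntegrationTest)
  .settings(
    Defaults.itSettings,
    // Bind a real logging backend for integration tests only.
    libraryDependencies +=
      "org.slf4j" % "slf4j-log4j12" % versions('slf4j) % "it"
  )
```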

Alternatives

Provide a test configuration file

Pros:

It may be easier to reason about the configuration since it’d be a mirror of the main configuration file.

Cons:

Not all underlying implementations support looking for an alternate test configuration that takes priority over the main configuration.

The configuration is based on the underlying implementation rather than SLF4J itself, so switching underlying implementations would require a new test configuration.
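As a concrete illustration of this approach where it is supported: Logback looks for logback-test.xml on the classpath before logback.xml, so a test-only file under src/test/resources can silence everything during tests (a minimal sketch):

```xml
<!-- src/test/resources/logback-test.xml: Logback loads this in
     preference to logback.xml, so it only takes effect when the
     test resources are on the classpath. -->
<configuration>
  <root level="OFF"/>
</configuration>
```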

Disable logging programmatically

Cons: