When automated tests run, they run either on your own machine (when you are writing them or rerunning them to check something) or in your CI.

When you run the tests on your machine and there are failures, it can be easy to see what is going on (especially with visual tests that interact with browsers or apps on your machine): you can simply rerun a failed test and visually inspect it for the failure reason. But when tests run on a CI machine, visual inspection is difficult or even impossible. You might not have access to connect to that machine, or any way to watch the tests being run.

When you have a failure, the first thing you will want to do is reproduce it, to understand what caused it. Often you will want to do that manually, or you will simply want to see the data that was used during test execution. Without proper output from the tests, you can see at which line of code the test failed, but it is much harder to understand the context in which it failed.

Therefore, proper console output about the state of the test run is advisable. Developers would argue that scattering System.out calls around is messy and that you should not write messages to the console, but in the case of automated tests it can be very useful and helpful.

When would you want to write something to the console within the test? Here are just some examples.

Note: below I might say “print”, “write to the console”, or similar, but they all mean writing a System.out line of code. You might of course choose another means of writing to the console, possibly with the help of an external library, but the point is the same: the result should be relevant text in the console. Also, when a failure occurs inside an assertEquals statement, extra console output is not needed, since both the actual and expected values are displayed with the failure. Think of the need for output more in terms of assertTrue or waits, for example: when no other clues about a test failure are available, you should throw some in yourself.
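As a minimal sketch of the difference: an assertTrue-style check carries no values in its failure message, so the check has to print the context itself. The assertContains helper below is hypothetical, not part of JUnit or any other framework.

```java
// Sketch: an assertTrue-style check says nothing about the values involved,
// so we print the context ourselves before failing.
// assertContains is a made-up helper, not a real framework method.
public class OutputDemo {

    static void assertContains(String haystack, String needle) {
        if (!haystack.contains(needle)) {
            // Without this output, the failure would only say the check was false.
            System.out.println("Expected to find '" + needle + "' in: '" + haystack + "'");
            throw new AssertionError("text not found");
        }
    }

    public static void main(String[] args) {
        String pageTitle = "Welcome to the demo shop";
        assertContains(pageTitle, "demo shop");
        System.out.println("Title check passed for: " + pageTitle);
    }
}
```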

Whenever you are using randomly generated data: print it, so that you can see what values were used in the test. Since random data is, well, different every time a test runs, knowing exactly which values were generated for the particular run that failed can help you reproduce the full scenario the test was covering.
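For example, a short sketch where the test generates a random username; randomUsername is a made-up generator for illustration, not an API from any library:

```java
import java.util.Random;

// Sketch: print randomly generated test data so a failing run can be reproduced.
// randomUsername is an invented example generator.
public class RandomDataDemo {

    static String randomUsername(Random random) {
        // Always produces "user_" followed by a five-digit number.
        return "user_" + (10000 + random.nextInt(90000));
    }

    public static void main(String[] args) {
        Random random = new Random();
        String username = randomUsername(random);
        // This single line is what lets you rerun the failed scenario later
        // with the exact same value.
        System.out.println("Test data: generated username = " + username);
        // ... the test would now register and verify this username ...
    }
}
```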

When your test expects a certain URL to be loaded in the browser, but another one is actually loaded: you need to know which URL really loaded, so that you can tell whether an error was thrown or whether the initial page did not trigger the event that should have caused the new page to load.
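A sketch of that check: in a Selenium test the actual URL would come from driver.getCurrentUrl(); here it is passed in as a plain string so the example stays self-contained.

```java
// Sketch: print the actually loaded URL before failing, so the console shows
// which page the browser ended up on. The URLs below are made-up examples.
public class UrlCheckDemo {

    static void verifyUrl(String actualUrl, String expectedUrl) {
        if (!actualUrl.equals(expectedUrl)) {
            System.out.println("Expected URL:    " + expectedUrl);
            System.out.println("Actually loaded: " + actualUrl);
            throw new AssertionError("wrong page loaded");
        }
    }

    public static void main(String[] args) {
        try {
            verifyUrl("https://example.com/login?error=500",
                      "https://example.com/dashboard");
        } catch (AssertionError e) {
            System.out.println("Test failed, but the console now shows which page loaded.");
        }
    }
}
```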

When you expect an image source to be a specific one, but it isn’t: print the current image source.

When you expected the button you wanted to click to have a certain label, but it has a different one: print the label, to check whether the selector behind it really points at the element you thought it did, or at one with a different purpose.
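A sketch of the label check: in a Selenium test the label would come from element.getText(); here it is a plain string, and the labels are invented examples.

```java
// Sketch: when a click target's label is not what you expected, print the
// label the selector actually matched.
public class LabelCheckDemo {

    static void verifyButtonLabel(String actualLabel, String expectedLabel) {
        if (!actualLabel.equals(expectedLabel)) {
            // Seeing, say, "Delete account" here instead of "Save" tells you
            // the selector matched a different button than you thought.
            System.out.println("Expected button label: " + expectedLabel);
            System.out.println("Actual button label:   " + actualLabel);
            throw new AssertionError("wrong button matched by selector");
        }
    }

    public static void main(String[] args) {
        verifyButtonLabel("Save", "Save");
        System.out.println("Button label check passed.");
    }
}
```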

Before a more complex step is performed, you might want to print something like “starting this very complex step”, just so you know the test is not hanging if the step takes a long time to process. Often, when a test runs in the CI, you will look at the CI output, see nothing, and have no clue which step the test has reached or that the complex step is still in progress. Also, when the step finishes executing, you could write a “this complex step finished successfully” message.
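One way to avoid repeating the two messages by hand is a small wrapper; runWithLogging is a homemade helper sketched here, not a framework API, and the sleep merely stands in for a real complex step.

```java
// Sketch: bracket a long-running step with start/finish messages so the CI log
// shows progress while the step executes.
public class StepLoggingDemo {

    static void runWithLogging(String stepName, Runnable step) {
        System.out.println("Starting step: " + stepName);
        step.run();
        System.out.println("Step finished successfully: " + stepName);
    }

    public static void main(String[] args) {
        runWithLogging("bulk data import", () -> {
            try {
                Thread.sleep(1000); // stands in for the actual complex step
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
    }
}
```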

If you select a random value from a dropdown in each test: print it, so you know which value was selected.
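A sketch of that selection: in a real UI test the options would be read from the dropdown element itself; here they are a plain list of invented values so the example stays self-contained.

```java
import java.util.List;
import java.util.Random;

// Sketch: pick a random option and print which one was selected,
// so a failing run can be replayed with the same value.
public class DropdownDemo {

    static String pickRandom(List<String> options, Random random) {
        return options.get(random.nextInt(options.size()));
    }

    public static void main(String[] args) {
        List<String> countries = List.of("France", "Germany", "Spain", "Italy");
        String selected = pickRandom(countries, new Random());
        System.out.println("Randomly selected dropdown value: " + selected);
        // ... the test would now select this value in the dropdown ...
    }
}
```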

Well, these are just some examples. But the point is: whenever you feel that some of the data used in the tests would be useful to whoever is looking at the results, print it to the console. Just make sure you don’t pollute the test output with information that is not relevant or useful.