Groovy is an awesome helper when writing Apache JMeter™ performance scripts. If you don’t agree with that first sentence, this article will surely change your mind! Groovy scripts can solve almost all of our issues when JMeter’s default functionality is not enough. For example, when we need to save data to a file. This blog post will show a short and simple Groovy script that can make that happen.

But first of all, we should come up with an idea about what exactly we want to log into the file. We could give an example of a script that prints “Hello world!” to a file. But let’s try to find a more relevant real-life example.

Debugging JMeter Scripts

One of the best and most useful examples would be logging failed responses. It’s no secret that when you run your JMeter tests in non-GUI mode, you might sometimes feel a lack of debugging capabilities. If you have a response body assertion in the test that fails during execution, you only know that the assertion has failed, but you don’t have any additional details. In this case, the most valuable resource for understanding the reason for the failure is the response body. But JMeter doesn’t log the response body anywhere.

In JMeter’s defense, this was done on purpose. During performance script runs, we might generate thousands or even millions of responses per execution. Storing all of that data would take up a lot of resources, which is of course critical during performance testing. But let’s imagine a different scenario.

During my performance testing career, I have seen many situations where applications had different sorts of heisenbugs, which were hard to reproduce and extremely difficult to catch. In other words, there might be issues that occur randomly, with no clear set of steps you can follow to reproduce the bug and show it to someone else.

For example, I have seen bugs that might be reproducible only once in thousands of requests, and only under a very heavy load. In such a case, you cannot use the JMeter GUI mode, because in GUI mode you cannot actually generate a heavy load on the system. However, once you run a script in non-GUI mode, you don’t have enough debugging information to verify why your assertions fail so randomly. Have you ever encountered such situations in your performance testing experience? Then this article is definitely for you!

The main idea of our solution for catching such heisenbugs is to create a script with specific assertions, which should give us more information about the root cause. As we discussed, the most valuable resource for identifying the root cause of a bug is its response. That’s why we are going to capture this response. Since we would like to generate a heavy load with the script, we need to use non-GUI mode and write the responses to a file. But does this mean that we need to write every complete response to a file and search for the failed responses manually in this huge text file? Of course not. We will show you some tricks for logging only the failed responses.

It's worth mentioning that you can also catch and log failed responses with the Simple Data Writer. But the solution we are going to cover gives you much more flexibility and allows you to catch issues according to functional and business logic (based on request type, response code, some request params and so on). You can also separate your findings and route errors to different error files according to some defined logic. This is very valuable, since sometimes you need to handle a huge number of requests/responses, and that can be painful if you keep everything mixed in one result file.
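To illustrate the "separate error files" idea, here is a hypothetical sketch that routes each failed response into a file named after its HTTP response code. It is plain Groovy with stand-in values so it runs anywhere; inside a JMeter JSR223 element you would instead read the real values from `prev` (e.g. `prev.getResponseCode()` and `prev.getResponseDataAsString()`), and the directory and file names here are just illustrations:

```groovy
// Hypothetical sketch: route failed responses into separate files by response code.
// Stand-in data; in JMeter these values would come from prev.
def failures = [
    [code: "500", body: "Internal Server Error"],
    [code: "503", body: "Service Unavailable"],
    [code: "500", body: "NullPointerException in checkout"]
]

// Use the system temp directory so the sketch works on any machine
def logsDir = new File(System.getProperty("java.io.tmpdir"), "demo_logs")
logsDir.mkdirs()

failures.each { f ->
    // one log file per response code, e.g. failed_500.logs, failed_503.logs
    new File(logsDir, "failed_${f.code}.logs") << f.body + "\n"
}
```

The same pattern works for any other routing key you care about, such as the sampler label or a request parameter.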

How to Debug by Writing Responses to a File

But first things first. Let’s create a simple JMeter performance script for our example. We can use the blazedemo.com web application and create two different HTTP samplers, with assertions on both of them. One will pass and the other will fail (on purpose, of course, to simulate a situation where one request fails randomly while other requests pass successfully).

1. First of all, we need a Thread Group. For testing purposes, it’s enough to have just one user with 10 script iterations.

2. Let’s add a basic HTTP request to blazedemo.com:

3. Add assertions that should always pass for the specified HTTP request:

4. In addition to that, let’s add another request that is supposed to fail:

5. Add an assertion that will make this sampler fail:

6. Now, if you run the script, you should see a combination of failed and passed requests:

Just to remind you, the main goal is to write the responses of failed requests to a text file. So let’s move on to a basic script that will write to the file.

In this example, we are going to use the Groovy scripting language. You can check out this great article, which should give you an idea of why we use Groovy in JMeter and why it is so cool. Also, it might be useful to go over this page to get a better idea of Groovy syntax. But if you are familiar with any programming language, it should be very straightforward to understand.

We are going to write to the file in the case of a failed response. For that, we can use another assertion, placed at the root of the Thread Group. As we agreed, we are going to use a Groovy script for it. You can add a custom Groovy assertion via “Right click on the Thread Group -> Add -> Assertions -> JSR223 Assertion” and choose “groovy” as the “Language” on the JSR223 Assertion screen:





Now we are ready to write some scripts!

We need to think about where we are going to store this file with the responses. It is very handy when all the result files are created in the same directory, or a subdirectory, as the JMeter script itself. With Groovy you can achieve that in two lines of code:

import org.apache.jmeter.services.FileServer

def path = FileServer.getFileServer().getBaseDir()

To avoid a mess, it is better to put all the results in a new folder each time. Since we want a performance script that can run on any machine, regardless of the OS and the script location, we need to ensure that:

We use a relative rather than a fixed path when creating files and folders

The script creates the required folders itself if they are absent

The script creates the folders in the same directory where it is located, to avoid situations where we try to create a file using a path that cannot be found

The file path is valid on any operating system (for example, we need to use a special constant to get the path separator, because it is different on Windows and Linux)

This code creates a “logs” folder in the same directory where the script is located and satisfies all the requirements mentioned above:

import org.apache.jmeter.services.FileServer

def path = FileServer.getFileServer().getBaseDir()
new File(path + File.separator + "logs").mkdirs()

Now we are ready to open a file to write into. It should meet the same OS-independence requirement, which is why we use “File.separator” instead of a specific separator like “\” (Windows) or “/” (UNIX). Let’s use a meaningful name like “failed_response.logs” for the aggregated responses:

File file = new File(path + File.separator + "logs" + File.separator + "failed_response.logs")

Now we are ready for the magic! In order to log only failed responses, we can go over each assertion and check whether it failed. To get all the assertions with their results, we can use the global JMeter variable “SampleResult”, and the iteration itself can be done very easily with Groovy’s “.each” syntax:

SampleResult.getAssertionResults().each { assertion ->
    // isFailure() covers a failed assertion; isError() covers an error while evaluating it
    if (assertion.isFailure() || assertion.isError()) {
        // something to write
    }
}

Now we have the file to write to and an iteration cycle that filters out all the passed requests and lets us do something when we find a failed assertion. Finally, we are ready to write something meaningful.

As we discussed, we want to log the response in case an assertion failed. In addition, it would be very useful to log the execution thread, as it helps identify the exact user who hit the failed request. Sometimes issues and bugs are user specific, which is why this information is important as well. To write to the file using Groovy, we can use the following syntax:

file << Thread.currentThread().getName() + ": Assertion failed for response: " + prev.getResponseDataAsString() + "\n"
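It may be worth noting what the “<<” operator does here: on a Groovy File it appends text, so repeated writes accumulate rather than overwrite. A minimal sketch, using a temp file so it runs anywhere (the file name and contents are just illustrations):

```groovy
// Minimal sketch: Groovy's << operator appends text to a file,
// so each write adds a new line instead of replacing the contents.
def f = File.createTempFile("failed_response_demo", ".logs")
f << "Thread Group 1-1: Assertion failed for response: first\n"
f << "Thread Group 1-2: Assertion failed for response: second\n"
assert f.readLines().size() == 2
```

This append behavior is exactly why, later in the article, we need unique file names per run rather than a clean-up step.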

Therefore, our resulting script will look like this:

import org.apache.jmeter.services.FileServer

def path = FileServer.getFileServer().getBaseDir()
new File(path + File.separator + "logs").mkdirs()

File file = new File(path + File.separator + "logs" + File.separator + "failed_response.logs")

SampleResult.getAssertionResults().each { assertion ->
    if (assertion.isFailure() || assertion.isError()) {
        file << Thread.currentThread().getName() + ": Assertion failed for response: " + prev.getResponseDataAsString() + "\n"
    }
}

And as we are writing this script in the JMeter assertion, you should see something like this as a result:

Now, let’s run the script in non-GUI mode and verify that it works as expected. You can check this article, which gives some nice tips on the different ways to do it.
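For reference, a typical non-GUI invocation looks like this (the .jmx and .jtl file names here are just placeholders for your own files):

```shell
# -n: non-GUI mode, -t: path to the test plan, -l: file for sample results
jmeter -n -t script.jmx -l results.jtl
```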

After test execution we will find the expected file under the “logs” directory:

This looks great and is extremely useful! But something is still missing. Let’s assume you are going to run the script several times in a row. We don’t have any file clean-up operation in the script, so we are going to append new content to the same file again and again. If we run the script once or twice, that’s okay. But imagine you run the script 10 times. The file will contain a huge amount of errors and responses, and you will have no idea which ones relate to which execution. Luckily, we already know how to handle this tricky situation.

First, you might think that we can just clean up the file and write to it again. But this is not good practice. Sometimes you need to go back to the results of previous script executions, and if we clean up the file on each run, we have no historical logs at all. That’s definitely not the right way. Instead, we can create a separate, unique file for each execution. That sounds wise!

Let’s think about what we can use as a unique value for each execution that will help us identify the specific log file. Right! Of course, it is time. Let’s create a variable holding the moment when the script was started (in JMeter you can get this using the function “${__time(d-MMM-yyyy-hh-mm-ss)}”) and put it in the user variables:
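If you are curious what that function produces, here is a hedged sketch that builds the same style of timestamp in plain Groovy (the pattern is assumed to match the article’s d-MMM-yyyy-hh-mm-ss format):

```groovy
// Sketch: a timestamp in the same style as ${__time(d-MMM-yyyy-hh-mm-ss)},
// e.g. 5-Mar-2024-03-15-42 — six dash-separated fields.
def stamp = new Date().format("d-MMM-yyyy-hh-mm-ss")
assert stamp.split("-").length == 6
```

Because the separator is a dash rather than a colon or slash, the value is safe to embed directly in a file name on any operating system.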

Now we need to make a small change to create the perfect script:

import org.apache.jmeter.services.FileServer

def path = FileServer.getFileServer().getBaseDir()
new File(path + File.separator + "logs").mkdirs()

// vars.get() reads the "TestStartTime" user variable defined above
File file = new File(path + File.separator + "logs" + File.separator + "failed_response_" + vars.get("TestStartTime") + ".logs")

SampleResult.getAssertionResults().each { assertion ->
    if (assertion.isFailure() || assertion.isError()) {
        file << Thread.currentThread().getName() + ": Assertion failed for response: " + prev.getResponseDataAsString() + "\n"
    }
}

As a result, each time you run the JMeter script, you will have a unique file based on the time this script was executed.

I hope this article was useful and that you now have a clear idea of how to write to files using the Groovy scripting language in JMeter, as well as some tools in your arsenal to catch the nasty heisenbugs you might hit during performance tests. Before, if you had failures during a non-GUI JMeter test execution, you didn’t have any debugging information to help identify the root cause. Now you will have all the failed responses logged into a file that you can analyse after test execution. You can find the JMeter script example we used through this link.

Now that you know how to catch Heisenbugs, you are ready to take our advanced JMeter course, free from our JMeter academy.

Click here to subscribe to our newsletter.

To try out BlazeMeter, which enhances JMeter abilities, put your URL or JMX file in the box below, and your test will start in minutes. You can also request a demo.