Automation Design & multiple implementations for Test

Stacks like ELK mean that we can visualise and query how our systems are functioning, the good and the bad, in real time and historically. The basis for this is a sensible and structured log architecture. During the development process — as anyone who has ever coded anything will know — logs are an incredibly important source of information.

Logs are great.

When it comes to the functional testing of systems, logs are not usually the go-to interface. The usual goal is to put things into the system and make assertions against some other ‘user’ endpoint, typically a front-end or a structured API. However, there may be instances where there is no clean way to determine the actions of, for example, a middleware system.

As microservices become the standard application architecture, testing becomes more about the integration of lots of small systems passing data around. So, though logs are typically for debugging — whether live or local — I feel the automated checking of logs is a growing edge case for a tester.

I recently used some variation of this project’s logic to check logs in a Kubernetes context.

# Log source
kubectl logs <pod-name> --since=10m
# Then manipulate as desired

Similarly, tests are now more frequently executed on a Linux OS in a container or some other cloud execution environment, opening the door to a tonne of extremely powerful and readily available tools and utilities.

In this little project I have an example semi-structured log.txt file which I want to make assertions against. I do this using Bash, JUnit5 and SoapUI, relying on a combination of shell commands in all cases.

This is both for my own learning and, hopefully, useful reference material for myself and others in the future.

Using pure Bash for this type of task is easiest, because as soon as you execute shell commands via another programming language your application becomes a minor system integration rather than something running on its own. Still, it will often be preferable to execute under the control of your current stack, so I do it using Java and SoapUI as well.

An assumption here is that any logs in question are formatted in a somewhat rule-based manner, where events in the system can be tracked uniquely. This is key to any effective log usage.

Repository

The code for all parts of the project can be found at:

https://github.com/njgibbon/shell-log-test

Problem

log.txt: https://github.com/njgibbon/shell-log-test/blob/master/log.txt

---

START LOG TYPE

ID 001

EVENT A

EVENT B

EVENT C

RESULT SUCCESS

END LOG TYPE

---

START LOG TYPE

ID 002

EVENT A

EVENT B

EVENT C

RESULT SUCCESS

END LOG TYPE

---

START LOG TYPE

ID 003

EVENT A

LOGIC ERROR

RESULT FAILURE

END LOG TYPE

---

START LOG TYPE

ID 004

EVENT A

EVENT B

VALIDATION ERROR

RESULT FAILURE

END LOG TYPE

---

START LOG TYPE

ID 005

EVENT A

EVENT B

EVENT C

RESULT SUCCESS

END LOG TYPE

---

In our example input each log transaction is delimited by “START LOG TYPE” and “END LOG TYPE”, and identified by a unique ID number. Within the extract various events are recorded. We want to run a test against each transaction recorded in the log: separate the transaction from the rest of the log, then assert against the “SUCCESS” keyword. We assume that any number of events could be output between the delimiting statements.
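As an aside, awk’s pattern-action rules can slice delimited blocks like this in a single pass. The walkthrough below sticks to grep, head and tail as the project does; the following function is purely an illustration and its name is made up:

```shell
# extract_by_id ID FILE: print the START..END block whose body contains "ID <n>".
# Illustrative alternative only; not part of the project.
extract_by_id() {
  awk -v id="ID $1" '
    /START LOG TYPE/ { buf = "" }       # new block: reset the buffer
    { buf = buf $0 ORS }                # accumulate every line of the block
    index($0, id)    { hit = 1 }        # remember whether the ID was seen
    /END LOG TYPE/   { if (hit) printf "%s", buf; hit = 0 }
  ' "$2"
}

# Example: extract_by_id 003 log.txt
```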

Design / Logic

As with any automation design, assuming we have a valid objective, we first go through the steps that would be required to undertake the tests using manual shell commands.

1. Given the ID, take the log extract and save it to a temporary file.

Find the line number of the ID.

cat log.txt | grep -n "001" | head -n 1 | cut -d: -f1
3

Find the line number of the start of the log extract. Cut the log at the ID. Then return the line number of the last instance of “START LOG TYPE”.

cat log.txt | head -n 3 | grep -n "START LOG TYPE" | tail -n 1 | cut -d: -f1
2

Find the line number of the end of the log extract. Cut the top of the log at the start line. Then return the first instance of “END LOG TYPE”.

cat log.txt | tail -n +2 | grep -n "END LOG TYPE" | head -n 1 | cut -d: -f1
7

Cut the log between the start and end lines and save this to a file.

cat log.txt | tail -n +2 | head -n 7 > logExtract.txt

cat logExtract.txt
START LOG TYPE

ID 001

EVENT A

EVENT B

EVENT C

RESULT SUCCESS

END LOG TYPE

2. Now search for the “SUCCESS” keyword against the extract and count the occurrences. If the count is 0, the test has failed.

cat logExtract.txt | grep -n "SUCCESS" | wc -l
1
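As an aside, `grep -c` would fold the count into a single call; the file below is a stand-in created inline so the snippet runs on its own:

```shell
# Stand-in for the logExtract.txt produced in step 1.
printf 'START LOG TYPE\nID 001\nRESULT SUCCESS\nEND LOG TYPE\n' > demo.txt
# grep -c prints the number of matching lines; 0 means the test failed.
grep -c "SUCCESS" demo.txt
# prints: 1
```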

This logic now just needs to be automated. Any part could of course be adjusted to accommodate different requirements.
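To sketch what that automation might look like, here are the two steps wrapped in reusable Bash functions. This is an outline of the approach rather than the project’s actual main.sh; the function and file names are my own, and `grep -q` stands in for the count-and-compare of the manual step.

```shell
#!/usr/bin/env bash
# Sketch: the manual steps above as reusable functions.
set -eu

# extract_log ID LOG OUT: cut the START..END block around the ID out of LOG.
extract_log() {
  local id="$1" log="$2" out="$3"
  local id_line start len
  # Line number of the ID, first match only.
  id_line=$(grep -n "$id" "$log" | head -n 1 | cut -d: -f1)
  # Last "START LOG TYPE" at or before the ID line.
  start=$(head -n "$id_line" "$log" | grep -n "START LOG TYPE" | tail -n 1 | cut -d: -f1)
  # First "END LOG TYPE" from the start line onwards (offset relative to start).
  len=$(tail -n "+$start" "$log" | grep -n "END LOG TYPE" | head -n 1 | cut -d: -f1)
  tail -n "+$start" "$log" | head -n "$len" > "$out"
}

# assert_against_log KEYWORD FILE: exit 0 (pass) if the keyword appears at all.
assert_against_log() {
  grep -q "$1" "$2"
}

# Example usage:
#   extract_log "ID 001" log.txt logExtract.txt
#   assert_against_log "SUCCESS" logExtract.txt && echo PASS || echo FAIL
```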

Bash

The logic is automated through reusable functions, with some suite execution structure and reporting added. It should be easy enough to understand by reading the main.sh file.

Solution

https://github.com/njgibbon/shell-log-test/tree/master/bash

https://github.com/njgibbon/shell-log-test/blob/master/bash/main.sh

Execute

bash main.sh

...
cat temp/results.txt
Results:

Test Passed: 3 / 5

Tests Failed: 2 / 5

Executing by proxy

For execution using Java and SoapUI we follow the same logic with some slight variation. Whilst it would be possible to execute line by line, for ease we separate the execution into two .sh files and execute those, passing in arguments: one file for extracting the log subset to a temporary file by ID, and one for asserting against this temporary log file by keyword.
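Sketched out, the pair of scripts could look like the following; the argument order and exact contents are assumptions for illustration (the real files are in the repository):

```shell
# Write sketch versions of the two helper scripts (illustrative, not the repo's exact files).
cat > extractLog.sh <<'EOF'
#!/bin/bash
# Usage: bash extractLog.sh <id> <log-file> <out-file>
# Same grep/head/tail pipeline as the manual walkthrough.
id_line=$(grep -n "$1" "$2" | head -n 1 | cut -d: -f1)
start=$(head -n "$id_line" "$2" | grep -n "START LOG TYPE" | tail -n 1 | cut -d: -f1)
len=$(tail -n "+$start" "$2" | grep -n "END LOG TYPE" | head -n 1 | cut -d: -f1)
tail -n "+$start" "$2" | head -n "$len" > "$3"
EOF

cat > assertAgainstLog.sh <<'EOF'
#!/bin/bash
# Usage: bash assertAgainstLog.sh <keyword> <extract-file>
# Exit code 0 = keyword present (pass), non-zero = fail.
grep -q "$1" "$2"
EOF
```

The driving stack (a JUnit test or a Groovy step) then only has to shell out to something like `bash extractLog.sh "ID 001" log.txt temp.txt` and check the exit code of `bash assertAgainstLog.sh SUCCESS temp.txt`.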

JUnit5

JUnit5 is used to drive the execution and generate the reports.

Solution

https://github.com/njgibbon/shell-log-test/tree/master/java

Test Suite Class: https://github.com/njgibbon/shell-log-test/blob/master/java/src/test/java/com/njgibbon/test/junit/shellLogTest/suites/TestSuite1.java

Reusable Shell execution Class: https://github.com/njgibbon/shell-log-test/blob/master/java/src/test/java/com/njgibbon/test/junit/shellLogTest/common/ShellLogTest.java

Execute

chmod +x assertAgainstLog.sh

chmod +x extractLog.sh

mvn test -Dtest=TestSuite1

...
Results :

Failed tests:

TestSuite1.test3:42 expected: <true> but was: <false>

TestSuite1.test4:53 expected: <true> but was: <false>

Tests run: 5, Failures: 2, Errors: 0, Skipped: 0

SoapUI

This follows the same logic as the Java project, but the code is repetitive instead of modular, simply because I don’t know how to make a library-type function in SoapUI. A cursory internet search shows there are options available to look at if the need arises.

Solution

https://github.com/njgibbon/shell-log-test/tree/master/soapUI

I have separated two examples of the Groovy steps into files so they can be more easily read and understood in isolation from the project.

Groovy

https://github.com/njgibbon/shell-log-test/blob/master/soapUI/groovy/extractLog.groovy

https://github.com/njgibbon/shell-log-test/blob/master/soapUI/groovy/assertAgainstLog.groovy

Execute

Import the SoapUI project into your client and hit play on the Test Suite, or alternatively run via the CLI:

chmod +x assertAgainstLog.sh

chmod +x extractLog.sh

mvn com.smartbear.soapui:soapui-maven-plugin:5.4.0:test

...
SoapUI 5.4.0 TestCaseRunner Summary

-----------------------------

Time Taken: 915ms

Total TestSuites: 1

Total TestCases: 5 (2 failed)

Total TestSteps: 10

Total Request Assertions: 0

Total Failed Assertions: 0

Total Exported Results: 2

Context

Written and run on a 2017 MacBook Pro.