In the API testing world, if an automated testing tool doesn't provide reusability, you will pay a great cost maintaining your test suites over time. Reusability and maintainability are two sides of the same coin: if reusability decreases, maintainability decreases, and vice versa. If you have a small number of APIs/test cases, reusability might not be a concern for you, but if your whole architecture is built around APIs, reusability plays a great role in creating automated test suites for them. It is very important to consider this dimension while choosing an API automation framework/tool, and to follow the DRY (Don't Repeat Yourself) principle.

Reusability can be applied at various levels of test automation. For example:

In scenario-based testing, repeated test cases can be defined once and reused in multiple places. When testing APIs against Swagger schemas, if multiple APIs return the same type of response, we should not repeat the schema, or the schema validation logic, for each of them. When validating the positive and negative scenarios of an API, the only thing that changes is the test input data, so the whole test logic can be reused there as well. Response validation logic that is common to several APIs can also be reused.

In vREST Desktop, reusability and maintainability are a top priority for us, so that a great amount of effort can be saved later while maintaining the test suites. We have seen a lot of automation efforts fail due to this issue alone.

Next, we will see how vREST Desktop handles the reusability concern.

Reusing the same test cases across scenarios

There are various scenarios in which the same set of test cases or steps is repeated. It should be possible to maintain a single repository of test cases to avoid duplicating requests in each workflow. Consider the following scenarios:

Scenarios:

Scenario 1
1. Login to Application
2. Create a record
3. Update the record validating the condition 1

Scenario 2
1. Login to Application
2. Create a record
3. Update the record validating the condition 2

…

In the above scenarios, the first and second steps of each scenario are duplicates. In vREST Desktop, instead of duplicating the steps, we can maintain a global list of steps and reuse them by referencing them. So, our solution will look like this:

Global List:

1. Login to Application
2. Create a record
3. Update the record validating the condition 1
4. Update the record validating the condition 2

Scenarios:

Scenario 1
1. Reference to Login to Application
2. Reference to Create a record
3. Reference to Update the record validating the condition 1

Scenario 2
1. Reference to Login to Application
2. Reference to Create a record
3. Reference to Update the record validating the condition 2

…
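The referencing idea above can be sketched in a few lines of code. This is not vREST Desktop's internal implementation, just a minimal illustration of the concept: step definitions live once in a global registry, and scenarios are ordered lists of references to those steps, so shared steps are never duplicated.

```python
# Hypothetical step implementations; in a real suite each would
# send an HTTP request and assert on the response.
def login():
    return "logged in"

def create_record():
    return "record created"

def update_record_condition_1():
    return "updated (condition 1)"

def update_record_condition_2():
    return "updated (condition 2)"

# Global list of steps, defined exactly once.
STEPS = {
    "Login to Application": login,
    "Create a record": create_record,
    "Update the record validating the condition 1": update_record_condition_1,
    "Update the record validating the condition 2": update_record_condition_2,
}

# Scenarios only *reference* steps by name; no step is duplicated.
SCENARIOS = {
    "Scenario 1": [
        "Login to Application",
        "Create a record",
        "Update the record validating the condition 1",
    ],
    "Scenario 2": [
        "Login to Application",
        "Create a record",
        "Update the record validating the condition 2",
    ],
}

def run(scenario_name):
    """Resolve each reference and execute the shared step."""
    return [STEPS[ref]() for ref in SCENARIOS[scenario_name]]
```

If the Login step ever changes, it is fixed in one place and every scenario picks up the change automatically.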

Reusing the Swagger/OpenAPI schema definitions

Swagger/OpenAPI provides us the ability to define schema definitions globally and reference those definitions from our API specs. We may even break our schema definitions into small reusable chunks and cross-reference them. A testing tool should allow schemas to be reused in the same way. In vREST Desktop, we maintain a global list of schema definitions, just as they are defined in the Swagger spec.
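The shared-schema idea can be sketched without any dependencies. The toy validator below only checks `type` and `required` (a real tool uses a full JSON Schema implementation), but it shows the key point: a `Contact` schema chunk is defined once and referenced by several response schemas, much like `$ref` in a Swagger/OpenAPI spec.

```python
# Shared schema chunks, defined once (hypothetical shapes).
SHARED = {
    "Contact": {
        "type": "object",
        "required": ["name", "email"],
    }
}

# Each API response schema *references* the shared chunk
# instead of repeating it.
GET_CONTACT_RESPONSE = {"$ref": "Contact"}
LIST_CONTACTS_RESPONSE = {"type": "array", "items": {"$ref": "Contact"}}

def validate(data, schema):
    """Toy validator: resolves $ref, checks type and required keys."""
    if "$ref" in schema:                      # resolve the reference
        schema = SHARED[schema["$ref"]]
    if schema["type"] == "array":
        return isinstance(data, list) and all(
            validate(item, schema["items"]) for item in data)
    if schema["type"] == "object":
        return isinstance(data, dict) and all(
            key in data for key in schema.get("required", []))
    return False
```

When the Contact schema changes, only the shared chunk is updated; every response schema that references it stays in sync.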

The benefits we gain with this approach are:

Importing a Swagger spec becomes easier. API test cases can easily be validated using Swagger schema references. It further enables us to sync the Swagger schema over time to incorporate schema changes.

For more information, please read my blog post on “Easily validate your REST APIs by using the swagger definitions”.

Reusing the validation logic for positive and negative scenarios

When validating the positive and negative scenarios of an API, the validation logic is mostly the same; only the test input data changes. Negative scenarios mostly share a single JSON schema or a small set of JSON schemas. How do we handle such a scenario?

In vREST Desktop, we provide data-driven testing to handle this scenario cleanly. The test data is completely separated from the validation logic. This approach offers various benefits compared to the traditional approach to API testing:

- Reduction in the number of test cases
- Less technical knowledge required to write test cases
- Highly maintainable test cases
- The option to generate test data via a library or script

For more information on this approach, please read this section of vREST Desktop Guide on data-driven testing.

Also, you may visit this sample test data repository for data-driven testing.

Let us see how it works. Consider the various scenarios of the Update Contact API test case below. Suppose this API accepts input for the fields name, email, designation, and organization.

- Validate that the Update Contact API returns an error when the name field is blank
- Validate that the Update Contact API returns an error when the name length is greater than the permissible limit
- Validate that the Update Contact API returns an error when the designation length is greater than the permissible limit
- Validate that the Update Contact API returns an error when the organization length is greater than the permissible limit
- Validate that the Update Contact API returns an error when setting an invalid email for a contact
- Validate that Update Contact by ID works properly with all valid details

The above list can become very exhaustive when we apply various permutations and combinations to the API's input domain. If you look closely at the above test cases, you can see that the test logic is mostly the same; only the test input data changes.

With data-driven testing, we can write this test input data and partial validation logic in an Excel sheet and feed that sheet to our test to validate the various scenarios.
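The pattern can be sketched as follows. This is an illustrative stand-in for the sheet integration, not vREST's mechanism: the field names, validation rules, and the inlined CSV (standing in for an Excel sheet) are all hypothetical, but the structure is the point — one piece of test logic, many rows of input data.

```python
import csv
import io

# Rows would normally come from an Excel/CSV sheet; inlined here
# for brevity. Each row is one scenario: input fields plus the
# expected outcome.
SHEET = """name,email,expect_error
,ann@example.com,yes
Ann,not-an-email,yes
Ann,ann@example.com,no
"""

def update_contact(name, email):
    """Toy stand-in for the Update Contact API under test."""
    if not name:
        return {"error": "name is blank"}
    if "@" not in email:
        return {"error": "invalid email"}
    return {"status": "ok"}

def run_sheet(sheet_text):
    """One block of test logic, executed once per data row."""
    results = []
    for row in csv.DictReader(io.StringIO(sheet_text)):
        response = update_contact(row["name"], row["email"])
        got_error = "error" in response
        # The scenario passes when the outcome matches the sheet.
        results.append(got_error == (row["expect_error"] == "yes"))
    return results
```

Adding a new scenario means adding a row to the sheet; the test logic itself never changes.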

vREST Desktop provides powerful integration between the Excel sheet and the application. You just need to write your test data in the Excel sheet, and you can instantly run the scenarios in the vREST Desktop application. You don't even need to upload the Excel sheet after each change, or even at the beginning. vREST Desktop can even write the test logic (expected response body) into the Excel sheet for you automatically.

Common Response Validation Logic

There are several situations in which test cases share common validation logic, and utility functions are required to generate the test input data. So, we should be able to reuse the validation logic and generic utility functions.

In vREST Desktop, in most situations you will not need to write a single line of code. But there may be complex situations in which custom response validation is necessary to cater to your specific validation needs. We provide users the ability to write custom response validators, which can be reused across test cases. Users can also write custom utility methods, which can likewise be reused across test cases.
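A reusable custom validator might look like the sketch below. The function shape is hypothetical (vREST's actual validator API differs); it simply shows how one function encoding a common check can serve every test case that needs it.

```python
def validate_ok_response(response, required_fields):
    """Reusable check: HTTP 200 plus required fields in the body.

    Shared across test cases instead of repeating the same
    assertions in each one.
    """
    errors = []
    if response.get("status") != 200:
        errors.append("unexpected status %s" % response.get("status"))
    body = response.get("body", {})
    for field in required_fields:
        if field not in body:
            errors.append("missing field: %s" % field)
    return errors
```

Any test case, for any API, can now call this one validator with its own list of required fields, so the common logic lives in exactly one place.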

Finally, reusability is a major concern for the vREST team. We try to build each new feature with this concern in mind and to solve the issue at its core. We designed vREST Desktop this way to make your API testing experience more pleasant. Visit the vREST Desktop website, try it out, and let us know your feedback, along with any ideas for improving reusability further.