On the teams where I have worked, a common topic during Sprint Retrospectives is what exactly should be included in the acceptance tests for each user story. How does the Analyst best communicate to the team how the software should behave? How will the Tester know what tests to execute? I really like the joke from Bill Sempf, “QA Engineer walks into a bar. Orders a beer. Orders 0 beers. Orders 999999999 beers. Orders a lizard. Orders -1 beers. Orders a sfdeljknesv.” I think this describes how a good Tester will try to break the software application, using both the requirement and their experience. Imagine the Analyst writing the functional requirement—I’m sure it was that the user is able to order a beer, not that a user cannot order a lizard.

First, a little background

At my company, Scrum development teams utilize acceptance test driven development (ATDD). We use the Gherkin format to specify the requirements for each user story, writing scenarios or acceptance tests that the Developer can then use to write unit tests. We also have defined HR professions in IT at my company, and those professions are represented on each Scrum team—Developers of course, but also dedicated Analysts, Testers, and a Scrum Master. So, we have one group writing the story cards, another group building the software, and a final group testing the finished work within each sprint. It’s easy for the teams to get into an iterative mindset instead of being truly agile—watching user story cards move from one distinct step in the sprint process to the next, with the inherent expectation that the work in each step is complete before advancing. This concept is generally reinforced by the Kanban board the team uses, with defined process steps like Backlog, Analysis, 3 Amigos, Development, Test, and Done.

Back to those Gherkin acceptance tests

The common understanding is that the acceptance test scenarios should specify how the application should behave after the work is completed. What sometimes happens, though, is that the Analyst begins adding scenarios that need to be tested, instead of tests the finished product must pass to be considered complete. Adding all these tests bloats the story card, makes the analysis process take longer, and begins to turn Developers and Testers into order takers, rather than allowing them to decide how best to develop and test the software. It also introduces a risk that if a test is not listed on the story card, it won’t be performed.

Let’s look at an example. I’ll use an insurance requirement, since I work at an Insurance company. Let’s imagine the software already exists, and we are adding functionality to enforce a new business rule.

Business Rule

For each policy sold on or after January 1, UMBI coverage must be included.

Functional Requirement

When a user gets a quote for a policy with an effective date on or after January 1, the option to decline UMBI will not be present.

When we write story cards, we specify the functional requirements as acceptance tests, in Gherkin format. So the acceptance test scenario would be:

GIVEN the user is getting a quote with an effective date on or after January 1

WHEN the user selects UMBI coverage

THEN the Decline option will not be available

Pretty clear, right? This format allows the Product Owner, Analyst, Developer, and Tester to all have the same understanding of the requirement for the user story card. The team already knows how the software works today, so we don’t need to write out all the functionality that already exists in the current state, including all the still-valid options for UMBI coverage. The acceptance test is efficient—it is clear and unambiguous: regardless of the path the user takes through the software, if the effective date is on or after January 1, UMBI cannot be declined.
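To make the idea concrete, here is a minimal sketch of how the Developer might express that single acceptance test as a unit test. All of the names, the option values, and the concrete year are hypothetical—the business rule only says “January 1,” so the year below is purely illustrative:

```python
from datetime import date

# Hypothetical domain constants -- illustrative, not from a real system.
UMBI_AMOUNTS = ["$50,000", "$150,000", "$250,000", "$500,000"]
RULE_START = date(2024, 1, 1)  # "January 1" from the business rule (year assumed)

def available_umbi_options(effective_date: date) -> list:
    """Return the UMBI options a user may choose for a quote."""
    if effective_date >= RULE_START:
        # On or after January 1, UMBI is mandatory: no Decline option.
        return list(UMBI_AMOUNTS)
    return ["Decline"] + UMBI_AMOUNTS

# The Gherkin scenario, expressed as a single unit test:
def test_decline_not_available_on_or_after_january_1():
    options = available_umbi_options(date(2024, 1, 1))
    assert "Decline" not in options
```

One test, one rule—the scenario on the card maps directly to the assertion, and everything else about how the software already works stays out of the story card.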

But here’s what tends to happen. Since the Analyst feels they should be as specific as possible in describing the requirements, they consider it important to capture and communicate other important scenarios and alternate paths that should be tested. After all, we are writing acceptance tests, right? So we start to see more scenarios added to the user story card like these:

GIVEN the user is getting a quote with an effective date before January 1

WHEN the user selects UMBI coverage

THEN the following options will be available: Decline, $50,000, $150,000, $250,000, $500,000

GIVEN the user is getting a quote with an effective date on or after January 1

WHEN the user selects UMBI coverage on the UMBI Coverage page

OR the user selects to modify their UMBI coverage selection on the Confirmation page

THEN only the following options will be available: $50,000, $150,000, $250,000, $500,000 (no Decline)

AND their selection is retained when they click Next and then go Back to the UMBI page

AND their selection is retained when they save the quote, exit the application, then come back to retrieve their saved quote

AND the user’s selected UMBI coverage amount is shown on the Confirmation page

Sure, these scenarios need to be tested—the Tester needs to ensure the software doesn’t allow a user to decline UMBI coverage after January 1. The QA engineer orders a lizard, remember? Boundary analysis, including negative testing and regression testing for the change, is the domain of the Tester—they will write all the scripts necessary to ensure the software behaves according to the requirement.
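The boundary analysis that belongs to the Tester can be sketched as a small table-driven test around the effective-date cutoff—the day before, the day of, and the day after. This is a hypothetical sketch (rule function, concrete year, and case table are all assumptions for illustration):

```python
from datetime import date, timedelta

RULE_START = date(2024, 1, 1)  # assumed concrete year for "January 1"

def decline_allowed(effective_date: date) -> bool:
    """Hypothetical rule check: may the user decline UMBI coverage?"""
    return effective_date < RULE_START

# Boundary cases the Tester derives from experience, not from the
# story card: either side of the cutoff, the cutoff itself, and a
# date well past it.
BOUNDARY_CASES = [
    (RULE_START - timedelta(days=1), True),   # Dec 31: Decline still allowed
    (RULE_START, False),                      # Jan 1: the boundary itself
    (RULE_START + timedelta(days=1), False),  # Jan 2: just past the boundary
    (date(2030, 7, 4), False),                # far past the cutoff
]

def test_decline_boundary():
    for effective_date, expected in BOUNDARY_CASES:
        assert decline_allowed(effective_date) == expected, effective_date
```

None of these cases need to appear on the story card—the single requirement “on or after January 1” implies all of them, and the Tester knows how to enumerate them.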

As an Analyst, spending time writing out all the test scenarios at the time the story card is written assumes that the story card is the only communication vehicle between the Analyst and the Developer and Tester, and that it is handed off without further interaction. It assumes there is some need for the Analyst or the team to CYA in case there is a defect later. In agile software development, “we value…customer collaboration over contract negotiation,” so it is important to communicate and collaborate throughout the sprint, relying on the team’s understanding of the software and what is already in the code, rather than carefully defining the requirements up front and getting stakeholder sign-off as if the requirement were part of a waterfall project. We’ve even built steps into the overall sprint process to ensure collaboration happens, like the Sprint Planning Meeting and Three Amigos meetings.

I hinted at it earlier: where does the information come from for the team to handle all the possible scenarios that go with the story card, if the Analyst doesn’t list them out? Where does the information for negative testing, regression testing, boundary testing, or break testing come from, if not the requirements on the story card? It comes from all the story cards before this one—the combined knowledge of the team. We rely on Developers and Testers to communicate with the Product Owner and Analyst and solve problems as they arise, not merely to take orders exactly as they are written on the story card.

Of course, there are always opportunities to break the rules when it comes to the art of specifying functional requirements and acceptance tests, and the team will work those out on their own; but we seek efficiency and empowerment in how we handle the requirements—not dictation and blind follow-through. Let Developers do their jobs—they know how the application works. Let Testers fully exercise the software to ensure a given criterion holds in all cases—they know where it will break and how to break it.