Coding and Testing: Testers and Programmers Working Together

Lisa Crispin, http://lisacrispin.com

One of my favorite quotes from Elisabeth Hendrickson is this: "Testing is not a phase!" We have to stop thinking of "testing" as some separate activity, removed from the rest of software development. It takes a village to produce a high-quality software product, and we can't do it from isolated functional silos. The idea of driving development with tests has been popularized by the agile development movement. It's not a new idea, but it seems to finally be taking hold on a large scale. The fact is that testing and coding are inseparable components of software development. We get the best results when testers and programmers work closely together.

How can we deliver real value to the business frequently? How can we know how much testing is enough? Let's look at how testers and programmers collaborate to produce high-quality software.

Driving Development with Tests

In agile development, we write high-level test cases before coding even starts. This is a good practice no matter what development methodology you're using. Testers are skilled at helping business experts clarify their requirements for a particular feature or piece of functionality, and at using those requirements to provide the big picture of what the code needs to do.

When coding starts, it's time for testers to turn the examples of desired behavior that they've elicited from customers into executable tests that will let the team know when the functionality is "done". If these tests aren't automatable, the programmers are unlikely to actually run them. However, as we write tests that will be automated, we also think ahead to the important exploratory testing we'll need to do as coding is completed.

Throughout this process, one or more testers will work together with one or more programmers to write tests, write code, write more tests, write more code, and test some more, in as many tiny iterations as needed until the appropriate business value is achieved.

When your team plans releases and iterations, think about the tests you'll need to help guide coding. Write appropriate task cards to write the test cases, automate them, and do the manual exploratory testing. Write appropriate task cards to design code for ease of test automation.

Universal complaints among software teams are "we never have time to finish testing" and "testing gets squeezed to the end". The usual cause? The team plans too much work in one iteration. Testers must actively participate in the planning. Your team needs to be realistic and plan only the functionality that they can code and fully test before the end of the iteration. No story is done until it's tested!

Start Simply

As a tester, I am naturally attracted to the scenarios where the code may be vulnerable - the edge cases, boundary conditions, and soap opera-style sequences. But to help my teammates write code, I have to start with the happy path. Write a test that shows that basic core functionality works correctly. Depending on your programming expertise and the tool you're using, you may automate the test yourself, or the programmer may write the fixture that automates it.

The programmer may look at the test and realize she has misunderstood the requirement, or maybe she thinks the tester has. This is great, because it means the tester and programmer have to talk. Tests that encourage communication are the best kind. The programmer writes code until the test can execute and pass. Now it's time to think of more complex test cases, and wander off into the potentially smelly areas of the code. Note areas that might be interesting for later exploratory testing.

Remember the purpose of these tests: to guide coding. Get the basics working, and add more tests as appropriate later on.

Assess Risk

No matter how much time we have for testing software, we can always use it up and wish for more. We never have enough time, and short iterations of agile development add to the challenge. Some quick risk analysis will help you decide where to focus your testing efforts.

List all the potential risks associated with the code you're writing, including those that go beyond functionality, such as security, performance and usability. Use the likelihood of a failure and the impact of that failure to help decide what tests to do first, and leave testing of lower-risk areas to last.
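One lightweight way to apply likelihood and impact is a simple risk matrix: score each area on both dimensions and test the highest-scoring areas first. The areas, scores, and 1-to-5 scale below are illustrative examples, not taken from any particular project.

```python
# Hypothetical risk list for a shipping feature: (area, likelihood, impact),
# each scored 1 (low) to 5 (high).
risks = [
    ("shipping cost calculation", 4, 5),
    ("international postal codes", 4, 4),
    ("UI field validation", 3, 2),
    ("help-page link text", 1, 1),
]

# Rank by likelihood x impact, highest risk first: test these areas first,
# leave the lowest-scoring areas for last.
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for area, likelihood, impact in ranked:
    print(f"{area}: risk score {likelihood * impact}")
```

The product of the two scores is a crude but useful ordering; teams often just do this on a whiteboard rather than in code.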

Small Chunks, Thin Slices

"Incremental and iterative" is the name of the game when you want to deliver value frequently while maintaining a sustainable pace. When my team faces a complex story, we spend some time breaking it down into what we call "steel threads", which others call "thin slices" or "tracer bullets". We start with a small, end-to-end path through the code that can be coded, tested, and covered by automated tests. Once coding and testing are complete on that small increment, we add on the next chunk of functionality.

Let's look at a team that is developing software for an Internet retail website. They're starting on this story:

As a shopper, I want to choose the shipping option for my order, and see the shipping cost, so I can decide which shipping option I want.

The team may decide that their first slice through this code is to present only one shipping option on the UI, calculate the cost for that option, and display it on the next page. They might even take the UI out of the equation, and focus first on passing the shipping option, destination and item weight to an API which will return the estimated shipping cost. That functionality can easily be tested "behind the GUI", with a tool such as Fit or FitNesse. Later "slices" add more shipping options, flesh out the UI, provide a way to change the shipping option, and navigate to the payment page.
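Testing "behind the GUI" just means exercising the same service the UI would eventually call. As a sketch of what that first slice could look like (the function name, rate table, and surcharge rule are all hypothetical, not a real carrier API):

```python
# Hypothetical flat-rate table: (shipping option, destination zone) -> cost.
# A real system would fetch these from the carrier's rate API.
RATES = {
    ("standard", "domestic"): 7.25,
    ("express", "domestic"): 15.50,
}

def estimate_shipping(option, zone, weight_kg):
    """Return the estimated shipping cost for one option and destination."""
    base = RATES[(option, zone)]
    # Illustrative surcharge for heavy parcels.
    if weight_kg > 10:
        base += 5.00
    return base

# A tool like Fit or FitNesse would feed table rows into a call like this,
# with no UI involved at all:
print(estimate_shipping("standard", "domestic", 5))
```

Because the function takes plain inputs and returns a plain number, tabular tests can drive it directly, long before any page exists.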

Each small increment is produced by testers and programmers working together, writing tests that demonstrate correct code behavior, and code that makes those tests pass.

Coding and Testing Progress Together

Let's look at an example of how a tester and programmer might work on a user story or feature. Patty Programmer and Tammy Tester are working on a user story to calculate the shipping cost of an item, based on weight and destination postal code. Tammy writes a simple test case in a tabular format that is supported by their Fit-based test tool:

|Weight|Destination Postal Code|Cost|
|5 kg|80104|$7.25|

Meanwhile, Patty writes the code to send the inputs to the shipping cost API and to get the calculated cost. She shows Tammy her unit tests, which all pass. Tammy thinks Patty's tests look ok, and they agree Patty will check in the code.
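A happy-path unit test for this first step might look like the sketch below. The wrapper function and the flat per-kilogram rate are hypothetical stand-ins for the real shipping cost API, chosen only so the numbers match Tammy's table.

```python
def shipping_cost(weight_kg, postal_code):
    """Hypothetical wrapper around the shipping cost API (U.S. codes only so far)."""
    # Stubbed rate instead of a real API call: $1.45 per kg to this zone.
    rate_per_kg = 1.45
    return round(weight_kg * rate_per_kg, 2)

# Tammy's first tabular test case, expressed as a unit test:
assert shipping_cost(5, "80104") == 7.25
```

Note that nothing here handles destinations outside the U.S. yet; the happy path comes first.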

Next, Patty checks in a fixture to automate Tammy's tests. Patty calls Tammy over to show her that the first simple test is working. Tammy writes up more test cases, trying different weights and destinations within the U.S. Those all work fine. Then she tries a Canadian postal code, and the test dies with an exception. She shows this to Patty, who realizes that the shipping cost calculator API defaults to U.S. postal codes, and requires a country code for postal codes in Canada and Mexico. She hadn't written any unit tests for any other countries yet.

Tammy and Patty pair to revise the inputs to the unit tests. Then Patty pairs with Carl Coder to change the code that calls the API. Now the test looks like this:

|Weight|Destination Postal Code|Country Code|Cost|
|5 kg|80104|US|$7.25|
|5 kg|T2J 2M7|CA|$9.40|
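The fix Patty and Carl make could look like this sketch: default the country code to US, but fail clearly when no rates exist for a country instead of dying with an unexplained exception. The per-kilogram rates are invented so the results match the table; a real system would call the carrier's API.

```python
# Hypothetical per-kg rates by country code.
RATE_PER_KG = {"US": 1.45, "CA": 1.88}

def shipping_cost(weight_kg, postal_code, country_code="US"):
    """Return the shipping cost, defaulting to U.S. destinations."""
    if country_code not in RATE_PER_KG:
        # Fail with a clear message rather than an unexplained exception.
        raise ValueError(f"No rates loaded for country {country_code!r}")
    return round(weight_kg * RATE_PER_KG[country_code], 2)

# Both rows of the revised test table now pass:
assert shipping_cost(5, "80104", "US") == 7.25
assert shipping_cost(5, "T2J 2M7", "CA") == 9.40
```

The default argument preserves the original U.S.-only behavior, so the existing tests keep passing while the new Canadian case is covered.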

This back-and-forth testing and coding process could take all kinds of forms. Patty might write these "story tests" herself, in addition to her unit tests. Or, she and Tammy may decide that they can cover all of Tammy's acceptance tests with unit-level tests. Patty might be in a remote office, using an online collaboration tool to pair with Tammy. Either or both might pair with other team members. They might need help from their database expert to set up the test database. The point is that testing and coding are part of one process, in which all team members participate.

Tammy can keep identifying new test cases until she feels all the risky areas have been covered. She might test with the heaviest possible item, and the most expensive destination. She might test having a large quantity of one item, or many items to the same destination. Some edge cases might be so unlikely she doesn't bother with them. She may not keep all the automated tests in the regression test suite. Some tests might be better done manually, after a UI is available.

The Power of Three

Patty has written unit tests with Hawaii as the shipping destination, but Tammy thinks only continental destinations are acceptable. They both go to talk to the product owner about it. This is the Power of Three. When questions arise, having three different viewpoints is an effective way to make sure you get the right solution, and you don't have to re-hash the issue later. This helps prevent requirement changes from flying in under the radar and causing unpleasant surprises later.

It's vital that everyone on the development team understands the business, so don't fall into the habit of only having a tester, an analyst or a programmer communicate with the business experts.

Ways to Improve Programmer-Tester Collaboration

The story about how Tammy and Patty work together shows how closely programmers and testers collaborate. As coding and testing proceed, there are many opportunities to transfer skills. Programmers learn new ways of testing. Testers learn more about code design and how the right tests can improve it.

Pair Testing

Patty has completed the UI for selecting shipping options and displaying the cost, but hasn't checked it in yet. She calls Tammy over to her workstation and demonstrates how the end user would enter the destination postal code, select the shipping option, and see the cost right away. Tammy tries this out, changing the postal code to see the new cost appear. She notices the text box for the postal code allows the user to enter more characters than should be allowed for a valid code, and Patty changes the HTML accordingly. Once the UI looks good, Patty checks in the code, and Tammy continues with her exploratory testing.

"Show Me"

Tammy is especially concerned with changing the postal code and having the new cost display, as they identified this as a risky area. She finds that if she displays the shipping cost, goes on to the next page of the UI, then comes back to change the postal code, the new estimated cost doesn't display. She asks Patty to come observe this behavior. Patty realizes there is a problem with values being cached, and goes back to her workstation to fix it.

Showing someone a problem in real time is much more effective than filing a bug in a defect tracking system and waiting for someone to have time to look at it later. If the team is distributed and people are in different time zones, it's harder to work through issues together. The team will have to make adjustments to get this kind of value. One of my teammates is in a time zone 12.5 hours ahead, but works late into his night to overlap with our morning. We work through test results and examples when we're both online.

Show the customers, too. As soon as you have a prototype, some basic navigation, some small testable piece of code, show it to the customer and get their feedback. Feedback, from our customers, from our automated tests, from each other, is our most powerful tool in staying on track and delivering the right business value.

Knowing When We're Done

Remember: "No story is done until it's tested". Now that we've learned how testing and coding fit together, there's not a big lag time from "coding is finished" to "testing is finished".

By the time coding is finished, we usually have FitNesse tests covering all the functionality, including edge cases and boundary conditions. We've explored the functionality of each "thin slice" or "small chunk". Now we want to learn how the finished feature works, so we do end-to-end exploratory testing. We may use automated scripts to help set up test data or scenarios, but mainly we're using our heads, eyes, ears and intuition to make sure that this part of the product will delight the customer.

During this post-development testing, we might realize that although we have delivered the precise customer requirements, our feature might be lacking in usability, performance, security or some other aspect of quality. We may find that the new code impacts some other part of the application in an unexpected way. We might fix this right away, or write new stories to be done next iteration.

If we've managed the scope of our work correctly, we have production-ready code at the end of our iteration. There may be enhancements planned for later, but we've produced stable functionality the business can use now. We understood what the business people wanted, and found a practical way to implement it. We have a suite of automated regression tests for this functionality, so when we make changes next iteration, we'll know if we break anything.

The Payoff

When we divide our work into small, manageable chunks, plan and conduct testing and coding as part of a single development process, and focus on finishing one chunk of valuable functionality at a time, testing doesn't get squeezed to the end, put off to a future iteration, or ignored altogether. I've seen this proven out on a variety of teams - like most pragmatic, common-sense approaches, it works.

Get your team together today and talk about how you can all - testers, programmers and everyone else involved with delivering the software - work together to integrate coding and testing. Instead of investing in a big requirements document, capture requirements and examples of desired application behavior in executable tests, and write the code that will make those pass. Meet with your business stakeholders to understand their priorities and explain how much work you can realistically take on each iteration. Stop treating coding and testing as separate activities.

It won't happen overnight, but gradually your team will get better and better at really finishing each software feature - including all the testing. Your customers will be delighted to get stable, robust software that meets their needs. Your team will benefit from better-designed code that's easier to maintain and contains far fewer bugs. Best of all, testers and programmers alike will enjoy their work much more!



This article was originally published in the Summer 2009 issue of Methods & Tools