As the Sage ERP development teams continue their journey into Agile development, we are looking to change the way we do things even more fundamentally. We have adopted the Scrum methodology and now operate in two-week sprints. Within those sprints we take user stories and complete them to very exacting “doneness” criteria. The goal is to have a releasable product at the end of each sprint (releasable in a quality sense, though not necessarily functionally complete). So how do we fit all the development, QA, UCD and documentation work into small two-week sprints? The answer is to truly use a test driven development process. This blog post outlines what some of our Agile teams are already doing and what the rest are looking to adopt soon.

We switched from the waterfall software development methodology to Agile, which gave us a number of good gains. The main one is that bugs are contained within sprints, so we don’t end up at the end of a project fixing a huge number of bugs in a QA or regression phase after code complete. However, there is a lot of temptation for people who are used to waterfall to run mini-waterfalls within the Agile sprints: for each story, they design, code, QA and bugfix the feature as a little waterfall inside the sprint. In Agile, the QA and the programmer should work together, with no separation of work like this. As a first step to stopping this we shortened our sprints from three weeks to two. This makes it very hard to run a mini-waterfall within a sprint, since there just isn’t enough time.

Another goal of Agile development is to make extensive use of automated testing: rather than continually testing the product manually to ensure bugs aren’t creeping in, automated tests run on every build of the product to catch regressions. There can be a perception that writing automated tests takes too much time and slows things down, but this ignores the fact that finding and fixing bugs later takes far more time and resources.

To solve these problems we have adopted Test Driven Development within the Agile Sprints. Here we reverse the development process to some degree and write the tests first. This has a number of benefits:

- QA team members don’t need to wait for programmers to finish coding before they have anything to do.

- Automated tests are developed as part of the process and are fundamental to it.

- The story is forced to be well defined up front, since the first thing you need to do is define how to test it.

The biggest immediate gain from this approach was far better defined stories with far better acceptance criteria. This saved a lot of rework in cases where programmers implemented something without fully understanding the story or its acceptance criteria, and the story then failed as a result.
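As a minimal sketch of what test-first means in practice (the story, names and numbers here are all hypothetical), the acceptance check exists before the feature code, and the implementation is only filled in afterwards to make the check pass:

```java
// Hypothetical story: "Orders of $1,000.00 or more get a 5% discount."
// Test-first: the checks in main() were agreed on and written before
// discountedCents() had a body.
public class DiscountAcceptanceTest {

    // Feature code, added only after the checks below were defined.
    // Amounts are in cents so the arithmetic stays exact.
    public static long discountedCents(long cents) {
        return cents >= 100_000 ? cents - cents * 5 / 100 : cents;
    }

    public static void main(String[] args) {
        check(discountedCents(100_000) == 95_000); // exactly $1,000 gets 5% off
        check(discountedCents(99_999) == 99_999);  // just under: no discount
        System.out.println("acceptance checks pass");
    }

    private static void check(boolean ok) {
        if (!ok) throw new AssertionError("acceptance criterion failed");
    }
}
```

Writing the two boundary checks first forced the team to pin down exactly where the discount kicks in before any feature code was written.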

The diagram below shows the steps we follow when executing stories during an Agile sprint (developed by the Accpac Super team):

Notice how we write the test cases first, as we investigate the story. Only then, after this is all understood and reviewed, do we do the actual implementation.

A common problem with Agile is that programmers code quite a few stories and then hand them to QA near the end of the sprint. This overwhelms the QA testers and doesn’t give them sufficient time to do all their testing. With this new approach, all the test cases are written and well defined up front, so if too much does pile up at the end of the sprint, other people such as business analysts, writers or programmers can help execute the test procedures to complete the testing.

So far this sounds fairly manual. But there are good tools to help automate this process.

Fit and FitNesse

FitNesse is a wiki combined with an automated testing tool. It is oriented around acceptance testing rather than unit testing. Programmers often look at this tool and dismiss it as just another unit testing framework. The difference is subtle but important. For the programmer, working with Fit does resemble writing unit tests, but what you are really writing is an API for expressing tests rather than the tests themselves. FitNesse then gives QA (or other non-programmers) a mechanism to feed inputs through that API to cover all the necessary test cases. In the wiki you document the acceptance criteria and enter all the tests that prove the acceptance criteria are met. Since these are automated tests, they can be run as often as desired (perhaps on every build) to ensure the story’s acceptance criteria continue to be met.
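The tests themselves live in the wiki as tables. A hypothetical decision table for a discount story might look like the sketch below (the fixture and column names are illustrative); following Fit’s column-fixture convention, a trailing question mark marks an output column that is checked against the value the fixture computes:

```
!|DiscountFixture|
|order total|discounted total?|
|1000.00|950.00|
|999.99|999.99|
|2000.00|1900.00|
```

A QA can add rows like these for every boundary case without touching the fixture code the programmer wrote.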

We use EasyMock to test components in isolation, so modules can be tested without requiring the whole system to be present. This makes setting up the tests far easier, lets them run easily in the build system environment, and allows them to run very quickly (since things like database access are mocked).
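EasyMock builds this kind of stand-in object for you at runtime; as a hand-rolled sketch of the same idea (all names here are hypothetical, not our real interfaces), a calculator can be exercised against a canned rate store instead of a real database:

```java
// Hypothetical interface that is normally backed by the database.
interface TaxRateStore {
    double rateFor(String region);
}

public class TaxCalculator {
    private final TaxRateStore store;

    public TaxCalculator(TaxRateStore store) {
        this.store = store;
    }

    // Amounts are in cents; the rate comes from the (mocked) store.
    public long taxCents(long amountCents, String region) {
        return Math.round(amountCents * store.rateFor(region));
    }

    public static void main(String[] args) {
        // Stand-in for an EasyMock mock: a canned answer, no database needed,
        // so the test sets up instantly and runs anywhere.
        TaxRateStore fake = region -> "BC".equals(region) ? 0.12 : 0.05;
        System.out.println(new TaxCalculator(fake).taxCents(10_000, "BC")); // prints 1200
    }
}
```

With EasyMock the fake object and its expected calls are generated for you, and the framework can also verify that the component under test made exactly the calls it was supposed to.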

We’ve found this to be a very powerful way to document and produce automated tests. It allows QAs, who know the product and what it should do, to create automated tests without having to be programmers. You can probably find standalone tools that do the individual parts of FitNesse better, but the power comes from how they work together and enable the team to create the tests and automate the acceptance testing.

Jenkins

Jenkins (previously known as Hudson) is a continuous integration and build tool. It uses Ivy to define the dependencies between all the modules in a system. It then scans source control (we use Subversion) for changes. When it detects that a file has changed, it rebuilds any module the file is part of and then, based on the Ivy dependency map, rebuilds any modules that depend on those, and so on. As part of this process it can run any associated automated tests, as well as tools like FindBugs or Emma.
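The dependency map Jenkins walks comes from each module’s Ivy file. A hypothetical ivy.xml (module and organisation names are made up for illustration) looks like this:

```xml
<!-- Hypothetical ivy.xml for one module. When a file in common-services
     or tax-engine changes, Jenkins resolves this file and knows that
     gl-posting must be rebuilt and its tests rerun. -->
<ivy-module version="2.0">
    <info organisation="com.example" module="gl-posting"/>
    <dependencies>
        <dependency org="com.example" name="common-services" rev="latest.integration"/>
        <dependency org="com.example" name="tax-engine" rev="1.4.+"/>
    </dependencies>
</ivy-module>
```

Dynamic revisions such as `latest.integration` let downstream modules pick up each fresh build automatically, which is what makes the change-triggered rebuild chain work.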

The goal of this is to report back to the team as soon as possible that something they have done has broken something else. Bugs are cheaper to fix the sooner they are found: if a bug is found right away, while someone is still working on the offending code, it can usually be fixed immediately and easily. Combining Jenkins and FitNesse is a great way to accomplish this.

Regression Testing

In addition to unit testing and automated acceptance testing, we also run full automated tests on the whole system.

The Accpac team uses Selenium. Selenium drives the system through the browser, using the browser’s automation interface, so to the system it looks like an end user and all parts of the system are exercised. I blogged previously about our process of moving the VB screens to the Web, where I described the porting tool we use to initially move the VB screens to SWT. When we generate the SWT project, we also automatically generate a number of Selenium tests for each screen. So as a starting point we have automated Selenium tests that drive the basic CRUD functionality of each screen.
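The generation step can be sketched as stamping out one CRUD test source file per ported screen. This is only an illustration of the idea (the class names and template are entirely hypothetical; the real generated tests drive the browser through the Selenium API):

```java
// Hypothetical sketch of generating a per-screen Selenium CRUD test
// alongside the ported SWT project.
public class ScreenTestGenerator {

    // Produces the source of a test class for one screen.
    public static String crudTestSource(String screenName) {
        return "public class " + screenName + "CrudTest {\n"
             + "    // Uses Selenium to drive the browser through the "
             + screenName + " screen:\n"
             + "    // open it, create a record, read it back, update it, delete it.\n"
             + "    public void testCreateReadUpdateDelete() { /* generated body */ }\n"
             + "}\n";
    }

    public static void main(String[] args) {
        // One generated test per ported screen; screen name is illustrative.
        System.out.print(crudTestSource("ArCustomer"));
    }
}
```

Because the tests are derived mechanically from the screen definitions, every ported screen starts life with baseline browser-level coverage for free.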

For non-Web products like MAS, Peachtree and Simply Accounting, we use SilkTest for automated system testing. We have developed fairly extensive suites of tests to provide good automated coverage for all these products.

To the Cloud

What we are really striving for is to be in a releasable state after each Agile sprint. This means we could post the result of each sprint to a hosted site where real customers are running our software. These customers would get the benefits of whatever improvements we made during the sprint, with no worry that we have broken anything.

Whether we ever actually do this is a separate question, but we are striving to keep our software always in a releasable state, so we can make that decision based on business and market needs. The trend in the software industry is towards frequent smaller releases rather than giant infrequent ones. Generally this leads to a much faster pace of innovation (just look at how fast Google Chrome is evolving) and feeds customer feedback into development much sooner, resulting in software that matches customer needs far better.

Although most of our competitors aren’t operating like this yet, eventually they will, or they will be replaced by companies that do. We want to ensure that Sage can develop software as efficiently as anyone and that we have top product quality. We also want to ensure we can innovate at a highly competitive pace. Hopefully you will see this over the coming product releases.

Summary

We are still learning about effective test driven development at Sage, but the teams pioneering its use are already seeing great results. We’ve done automated regression testing with tools like Selenium and SilkTest for some time now, with good results; now it’s a matter of taking it to the next level. The goal is to shorten our development/release cycles while increasing product quality at the same time.