Keith Klain is the head of the Global Test Center for corporate and investment banking and wealth management at Barclays Bank. He manages hundreds of software testers in the United States, Europe and the Asia-Pacific region.

Most large organizations that structure testing as a centralized function use something Klain calls "factory methods" to manage the work. Life in the test factory is an assembly line: A small group plans the work and a larger group executes the resulting "test cases," adhering to detailed step-by-step directions.


What many people see as a strength of the factory is its precise repeatability. Klain and his team see that as a weakness.

Humans following step-by-step scripts tend to ignore everything off-script, creating a sort of inattentional blindness. They lose what is known in chess as the ability to "see the whole board" and adjust to the situation in the moment.


In order to get to "precise" repeatability, some companies insist the scripts be followed exactly the same way each time. This eliminates the testers' ability to react, learn and change approach, the kind of adjustment a chess player must make every time an opponent plays an unexpected move.

One alternative to detailed direction is to have the person doing the work drive it—that is, to design, execute and report test results while learning and adapting. That's something Cem Kaner called "exploratory testing" in the first edition of his book, Testing Computer Software.

Klain and his team hold it up as an example. It's not the only way to test, but, perhaps, it's a place to start.

The Need for Software Testing Change at Barclays

Klain says the factory model that forms the basis of traditional testing is breaking down and cannot meet the needs of competitive companies. "Over the last 15 years or so, software testing has frequently been prioritized to adopt outsourcing and offshoring extensively, and the financial models used to justify that decision are leveling out due to rising wages, cost of living increases and currency fluctuations," he explains.

"Most of the improvement models used to rationalize the commoditized testing approach use strictly quantitative metrics to assess quality or measure improvement, an approach which breaks down rather quickly beyond any first-order metrics," says Klain. "There is an increased focus on business value and testing skills, which means you have to bring more to the table than just the ability to do it cheaper."

The term Klain uses for this is "test transformation." It's reminiscent of the lean and agile transformations other companies undertake, yet all too often those changes leave the test process behind.

Klain describes test transformation this way: "There was a wealth of talent here to build on, so the transformation process has been more evolutionary than revolutionary in nature. Our main concerns are ensuring that our test approach is aligned to the business we support, our tools and process are lightweight and can handle multiple project types, and that we are hiring the best testers in the industry."


Part of that transformation, Klain continues, means developing a "culture of professional testing" that drives how Barclays first recruits and then develops testers. This culture shapes the bank's training, coaching and mentoring programs, which, in turn, home in on testing skills such as heuristic test strategies, visual test models, exploratory testing and qualitative reporting.

If your team hasn't heard of heuristic test strategy or visual test models, or doesn't discuss qualitative reporting as a skill, then you may be missing out on opportunities for improvement.

Heuristic test strategy, for example, lets teams devise stronger test approaches and compress test cycles while finding important bugs earlier, with models and reporting improving communication with senior management. But where did it come from?

How Context-Driven Emerged From the Schools of Software Testing

Bret Pettichord is a tester from Austin, Texas, a former consultant for ThoughtWorks and an early contributor to both the Watir and Selenium projects. It was 2003 when Pettichord first gave his presentation, Schools of Software Testing, which identified distinct ways of thinking about the testing problem. Among them was the previously mentioned factory method, or school, which believes in making testing a repeatable process.

Bret Pettichord defined the schools of software testing. More than a decade later, he's still in the business as a quality assurance manager at Blackbaud.

In addition to the Factory School, Pettichord also named an Analytic School, which uses academic models to create test cases; the Quality School, which focuses on prevention; and the Context-Driven School, which applies different tools to different problems.

A context-driven tester might, for example, use a great deal of automation for a batch program that would be maintained for years but might not use any for a video game to be deployed to the iTunes store just once. Pettichord listed exploratory testing as an exemplar for this school; 10 years later, it's a core part of Barclays' training curriculum.
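The batch case can be sketched in a few lines. Assuming a hypothetical `summarize_batch` function standing in for the long-lived batch program, a "golden result" regression check is cheap to rerun on every build for years, which is exactly when automation pays for itself:

```python
def summarize_batch(records):
    """Hypothetical batch step: total transaction amounts per account."""
    totals = {}
    for account, amount in records:
        totals[account] = totals.get(account, 0) + amount
    return totals

# A known-good input and its expected ("golden") output, checked on every build.
GOLDEN_INPUT = [("acct-1", 100), ("acct-2", 50), ("acct-1", 25)]
GOLDEN_OUTPUT = {"acct-1": 125, "acct-2": 50}

assert summarize_batch(GOLDEN_INPUT) == GOLDEN_OUTPUT
```

For the one-shot video game, by contrast, the check would run once and never again, so the same investment buys almost nothing.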


Nearly all software testing begins with some amount of exploration. A human checks the work by running it, learning it and adapting the test approach over time based on feedback from the software itself. While this might be perfectly sufficient for a single person writing an iPhone game, or a computer science student checking work before turning it in, it is widely derided in larger IT service organizations as unrepeatable, ad hoc or unable to scale.

It's certainly true that exploratory testing is rarely repeated. The question is the value of repeatability. Exploratory testing proponents would ask, if the number of possible input combinations is infinite, wouldn't testing with different values, and different paths through the software, actually increase coverage over time? For that matter, if the software has different features with every build, along with different known risks, why test it the same way?
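The coverage argument can be illustrated with a sketch. A factory-style script exercises the same values forever, while a check that varies its inputs samples new combinations on every run (the `parse_amount` function here is a toy stand-in, not anything from Barclays):

```python
import random

def parse_amount(text):
    """Toy function under test: parse a currency string like '1,234.50'."""
    return float(text.replace(",", ""))

def scripted_check():
    # Factory style: the identical input and expected value, every single run.
    assert parse_amount("1,234.50") == 1234.50

def randomized_check(runs=100):
    # Exploratory-leaning style: fresh values each run widen coverage over time.
    for _ in range(runs):
        value = round(random.uniform(0, 1_000_000), 2)
        text = f"{value:,.2f}"  # format with thousands separators
        assert parse_amount(text) == value

scripted_check()
randomized_check()
```

The scripted check gives the same answer on run one and run one thousand; the randomized check keeps probing new corners of the input space.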

As for an inability to scale, Barclays Bank, along with others such as Raymond James Financial, seems to be proving that statement wrong.

Testing Is Dead, Long Live Testing

A few years after his initial presentation, Bret Pettichord added the Agile School. This school focuses on the programmer's perspective of testing and holds up unit tests, specifically test-driven development, as an exemplar. This kind of work, done by programmers for programmers, can complement exploratory testing: It improves code before it's explored from the customer's viewpoint, reducing churn and waste from obvious defects.
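Test-driven development can be sketched in a few lines: The programmer drafts failing checks first, then writes just enough code to make them pass (this `is_leap_year` example is illustrative, not drawn from any company mentioned here):

```python
import unittest

def is_leap_year(year):
    """Implementation written after the checks below were drafted (TDD order)."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTests(unittest.TestCase):
    # In TDD these checks exist before the implementation and fail first.
    def test_divisible_by_four(self):
        self.assertTrue(is_leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(is_leap_year(1900))

    def test_four_hundred_is_leap(self):
        self.assertTrue(is_leap_year(2000))

if __name__ == "__main__":
    unittest.main()
```

Checks like these catch obvious regressions before a human explorer ever sees the build.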

Adherents of the context-driven school tend to talk about "sapient" testing, so called because it requires judgment and skill and is therefore work best suited to humans. The distinction between sapient testing and tool-driven checking has led to a belief that context-driven testers are opposed to automation. As Iain McCowatt points out, that isn't correct, strictly speaking. Context-driven testers may, however, offer additional, alternative ways of approaching testing other than using tools to drive a browser.


Silicon Valley companies trying to evolve testing may be more familiar with something Alberto Savoia, a director of development at Google, referred to as "test is dead" in a keynote at the Google Test Automation Conference. Like the context-driven school, "test is dead" suggests the factory school can't scale to the challenges of today. It offers a different prescription, though.

Proponents of the meme favor intense production monitoring, the capability to roll back changes in production quickly and GUI-driven test automation, all combined with massive exploratory testing, probably through a crowd-sourced vendor such as uTest. Companies that can do all that while offering free services in an extended beta (think Facebook or Google Mail) may just be able to eliminate traditional testing entirely.

"Test is dead" thinking doesn't reject context-driven testing as much as embraces it: It lays down a specific strategy that's appropriate for very specific conditions. Companies that don't give away the software for free and make money from advertising may need to consider a different model, though.

Microsoft's operating system division, for example, certainly has a different model, with a purchase fee and no push-button rollback. After Microsoft shipped Vista, it looked at its test process and decided to shift back to manual and exploratory testing, an example James Whittaker shared at an October 2011 speech in Anaheim, Calif.

Testing Can Be a Matter of Context

James Bach, a co-author of Lessons Learned In Software Testing, defined context-driven testing along with Cem Kaner in 2001. He was also first to recognize that these new kinds of customer-facing testing have risks.

James Bach wants to set your testers free—but are they ready for the responsibility freedom implies?

I asked Bach what he would tell an organization considering context-driven methods. "This is an anti-authoritarian approach to testing. Testers are no longer treated as if they were shift workers in a fast food restaurant," he says. "But that creates an interesting problem when people 'resist' this change. Imagine that you open the door to a prison cell, and all the prisoner wants to do is complain about the cold air you're letting in? We are freeing people to use their judgment and skill, but that freedom can be disorienting at first.

"The freedom we talk about comes along with responsibility," Bach continues. "Testers must have the training to do good work—and then we get out of their way. It can be compared to journalism or detective work in that sense. It's largely self-managed, so they need to build credibility with their teams."

Giving testers the opportunity to build credibility also means giving them the chance to fail. For Bach, Klain and McCowatt, that chance is one worth taking.

Does it make sense for your organization? That's up to you—and your context.

Matthew Heusser is a consultant and writer based in West Michigan. You can follow Matt on Twitter @mheusser, contact him by email or visit the website of his company, Excelon Development. Follow everything from CIO.com on Twitter @CIOonline, Facebook, Google + and LinkedIn.