
When software designers, systems engineers, and business leaders work closely with testing teams from the earliest stages of IT development projects, the likelihood that new applications will perform as expected can increase dramatically.

Your company’s newly built customer-facing e-commerce system is about to go live. Everything is running according to plan except for one potential hitch: Because IT didn’t have sufficient time to test the performance of some critical components under peak traffic conditions, you cannot be sure how the entire system will function when you flip the switch, or what the customer experience will be.

As we have seen with recent launches of some federal and state websites built to support health care legislation, insufficient performance testing prior to launch can result in customer frustration, brand damage, and angry stakeholders. Unfortunately, it happens frequently, says Victor Soder, a director with Deloitte Consulting LLP, who specializes in systems integration and testing. “IT leaders often put this kind of testing off until too late in the development process,” he says. “By then, they may not have sufficient time to change a system’s architectural design or fix problematic code and still meet launch deadlines.”

Soder suggests that to avoid this potentially costly outcome, CIOs create an application performance program and follow a performance testing plan (PTP) throughout the system development lifecycle to help demonstrate that a new system can perform as designed under expected loads, and operate smoothly for end users.

PTPs provide a detailed, methodical road map that IT can follow to keep application development and testing efforts aligned with business objectives. “These plans can help testing teams define, develop, and test meaningful business workload scenarios that come as close as possible to an actual day in the life of the application,” says Soder.

Focusing on Performance Drivers

Application performance testing is more than just a technical exercise. To be effective, it must ultimately reveal how well new systems are performing within a business context. As such, business leaders, whose teams will ultimately use and own the new applications, should provide the testing parameters that form the basis of a PTP.

“The head of sales might say, ‘Based on our sales projections, we expect a maximum of 5,000 users per hour to visit our Web page and use three newly developed online shopping tools,’” explains Arvin Ravisekar, a manager with Deloitte Consulting LLP. “With this information, members of a performance testing team would be able to identify the critical components to be tested and the thousands of transactions (and supporting data) the components should be able to support under peak traffic conditions.”
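As a minimal sketch of how a testing team might turn that business projection into concrete test targets: the 5,000 users-per-hour figure comes from the example above, while the transactions-per-user and session-length values are illustrative assumptions, not figures from the article.

```python
def load_targets(users_per_hour, txns_per_user, avg_session_secs):
    """Derive rough throughput and concurrency targets for a load test."""
    # Total transaction rate the critical components must sustain.
    txns_per_sec = (users_per_hour * txns_per_user) / 3600
    # Little's Law: concurrent users ~= arrival rate x average session time.
    arrivals_per_sec = users_per_hour / 3600
    concurrent_users = arrivals_per_sec * avg_session_secs
    return txns_per_sec, concurrent_users

# 5,000 users/hour (from the sales projection); 4 transactions per visit
# and a 5-minute session are assumed here for illustration.
tps, concurrency = load_targets(users_per_hour=5000,
                                txns_per_user=4,
                                avg_session_secs=300)
print(f"Target throughput: {tps:.1f} transactions/sec")
print(f"Expected concurrency: {concurrency:.0f} simultaneous users")
```

Targets like these give the PTP something measurable: the test either demonstrates the components sustain roughly 5.6 transactions per second with about 417 concurrent sessions, or it does not.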

“The longer you wait to identify these and other performance drivers, the tougher it can be to test systems and address any complications that arise prior to launch,” says Soder, who suggests that CIOs consider including the following steps as they develop their testing plans:

Include testing teams from the earliest phases of planning and development. “Superior performance is rarely ‘tested into’ an application—it is designed in, up front,” says Soder. For this reason, relying on the testing team alone to identify needed test scenarios and improvements after the application is nearly built usually won’t result in superior application performance. By collaborating with the design and development teams during early-phase due diligence, the testing group can better understand the business needs driving the engineering and design of a planned application, and then develop approaches for testing the specific capabilities that address these needs.

Perform early predictive analysis. Identify early opportunities—before full production-sized configurations are in place and formal load testing gets underway—to perform focused, low-volume testing on the most important components of system performance. That way, developers can identify and address performance issues earlier in the development process.
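One way such predictive analysis can work in practice is to probe a critical component at low volumes, fit a simple trend of response time against load, and extrapolate to the expected peak. The sketch below assumes a linear trend for simplicity; the sample measurements, peak-load figure, and service-level threshold are all illustrative, not values from the article.

```python
def fit_linear(xs, ys):
    """Ordinary least-squares fit of y = a + b*x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    a = mean_y - b * mean_x
    return a, b

# Low-volume probes: (concurrent users, average response time in ms).
loads = [5, 10, 20, 40]
latencies = [120, 130, 155, 210]

a, b = fit_linear(loads, latencies)

peak_users = 400      # expected peak concurrency (assumed)
sla_ms = 1000         # illustrative service-level target
projected_ms = a + b * peak_users

print(f"Projected latency at {peak_users} users: {projected_ms:.0f} ms")
if projected_ms > sla_ms:
    print("Flag this component for redesign before formal load testing")
```

A projection like this is deliberately rough; its value is that it surfaces a likely problem while the architecture can still be changed cheaply, rather than after a full-scale load test near the launch date.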

Make the connection between service levels and system capacity. Even the most well-designed application can surprise IT after launch by consuming excessive CPU processing power or draining more system memory than anticipated. Such surprises can diminish effective system capacity and damage service levels, says Ravisekar. “It is important that testing teams regularly evaluate applications under development to determine whether they are on track to consume reasonable, expected levels of computing resources, while meeting performance objectives.”
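A lightweight way to operationalize that evaluation is to compare observed resource peaks from each test run against an agreed capacity budget. The budget figures and sample measurements below are illustrative assumptions; in practice they would come from the monitoring tools attached to the test environment.

```python
# Agreed resource budget for the application under test (assumed values).
BUDGET = {"cpu_pct": 70.0, "memory_mb": 2048}

def check_capacity(measurements, budget):
    """Return the metrics whose observed peaks exceed the agreed budget."""
    breaches = {}
    for metric, limit in budget.items():
        peak = max(m[metric] for m in measurements)
        if peak > limit:
            breaches[metric] = peak
    return breaches

# Sample readings captured during a test run (illustrative).
samples = [
    {"cpu_pct": 45.0, "memory_mb": 1500},
    {"cpu_pct": 82.0, "memory_mb": 1900},  # CPU spike under load
    {"cpu_pct": 60.0, "memory_mb": 1700},
]

print(check_capacity(samples, BUDGET))
```

Run after every test cycle, a check like this turns “reasonable, expected levels of computing resources” into a pass/fail signal the team can track throughout development.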

Soder cautions that testing teams occasionally undermine the effectiveness of their carefully crafted PTPs by taking shortcuts and making inaccurate assumptions. “With deadlines looming, some will posit: ‘If system A has half the computing power of system B, then we can assume that system B can support twice as much customer traffic as system A,’” he notes. “But you can’t know exactly how much traffic system B can support until you rigorously test it using methodologies established in the PTP.”
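The flaw in that shortcut can be made concrete with a standard capacity model. Gunther’s Universal Scalability Law discounts throughput for contention and coherency overheads as capacity grows; the coefficient values below are illustrative assumptions chosen for the sketch, not measured figures, which is precisely why the PTP calls for measuring rather than assuming.

```python
def usl_throughput(n, alpha=0.05, beta=0.001):
    """Relative throughput of n units of capacity under the
    Universal Scalability Law: n / (1 + alpha*(n-1) + beta*n*(n-1)).
    alpha models contention, beta models coherency overhead
    (both assumed here for illustration)."""
    return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

one_unit = usl_throughput(1)   # baseline throughput
two_units = usl_throughput(2)  # doubled capacity

print(f"Naive assumption: 2x capacity -> {2 * one_unit:.2f}x throughput")
print(f"USL model:        2x capacity -> {two_units:.2f}x throughput")
```

Even with small contention and coherency penalties, doubling capacity yields less than double the throughput; with real systems, the only way to know the actual curve is to test against it.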