It’s no secret that application security presents one of the biggest risks facing organizations today. Look no further than recent headlines to see examples of the many companies that have had large compromises of data. One of my favorite infographics at Informationisbeautiful.net catalogs the most recent breaches, highlighting the number of records compromised, the type of compromise, and the companies affected. The takeaway: No one is immune, from Apple to the military.

So if application security is such a big risk, what are companies doing about it? Well, it turns out, not nearly enough. Data from the SANS 2015 State of Application Security report shows that a majority of survey respondents felt that app sec spend was “less than adequate” or reported that they had “no opinion” (page 15). A recent survey of security leaders on the difference between perceived risk and actual spend found that organizations are overspending on network-layer defenses and underspending on the application layer (see “The Increasing Risk to Enterprise Applications,” Ponemon Institute, Nov 2015, Figure 10).

The problem is worse than it seems. Given the massive proliferation of software controlling every aspect of our lives, the probability and impact of a potential breach are increasing dramatically every day. Today, we depend on software for everything, from the mundane to life-sustaining functionality: mobile phones, cars, electric power—and that’s not even starting to consider the proliferation coming with the Internet of Things. Worse still, data from the Building Security In Maturity Model (BSIMM) shows that, while some industries have evolved maturity in many dimensions of app sec, many others have not. For example, the healthcare industry seems to be playing catch-up with financial services and software vendors (see figures on pages 26 and 27 of the BSIMM6 report).

The reality is that the companies that most need the best defenses (e.g., those guarding our sensitive medical records) may not have them. So what should you be doing about app sec? Here, with data from the BSIMM, as well as a survey of application security activities at 70 companies going back more than six years, sprinkled with some of my own anecdotal experience as a security consultant over the last 20 years, is a start at outlining common strategies for doing app sec due diligence.

The Big 5 app sec activities

Luckily for us, BSIMM provides a wealth of data on this topic. Specifically, it shows 12 app sec activities that nearly everybody does. Here’s the “most frequent 12” list, with the percentage of surveyed firms that do the listed activity:

- Identify gate locations and gather necessary artifacts, 84%
- Identify PII (personally identifiable information) obligations, 78%
- **Provide awareness training, 76%**
- Create a data classification scheme and inventory, 65%
- Build/publish security features, 78%
- **Create security standards, 73%**
- **Perform security feature review, 86%**
- **Use automated tools along with manual code review, 71%**
- Drive tests with security requirements and security features, 85%
- **Use external penetration testers to find problems, 88%**
- Ensure that host and network security basics are in place, 88%
- Feed software bugs in ops back to development, 96%

These 12 actions could be a very good place to start thinking about building a sustained software security initiative (or SSI, in BSIMM-speak, a.k.a. an app sec program). But I think there is an even simpler strategy.

Take a closer look at this list and note that there are dependencies on a set of common, underlying processes:

- Penetration testing
- Code review
- Training
- Standards
- Architecture analysis

(I’ve bolded these items, which I will call "the Big 5," in the list above.)

In over a decade of working with companies to build app sec initiatives, I’ve encountered “The Big 5” over and over again, albeit at different frequencies (the numbers of companies that I see doing them), and at different times (in what order they start doing them, and for how long). Here’s my rough summary of a typical progression:

1. Almost everyone starts by pen testing one or more apps; this may mean anything from an annual test conducted by an external firm to a regular, tool-driven, managed service.
2. As organizations develop maturity in identifying risks via dynamic (pen) testing, they start to explore techniques that offer deeper coverage, such as code reviews (usually with a tool).
3. Next, the devs realize that they need some help passing the pen tests/reviews, so someone asks for training; this evolves into a specialized curriculum on different topics for different roles.
4. Over time, the combination of these gets encoded into “standards,” a.k.a. “things we check before release,” that could take several forms but usually start with secure coding standards and evolve into a list of approved “security features” (e.g., authn) that must be used.
5. Finally, at the “advanced” end of the spectrum, adoption of design review techniques such as architecture risk analysis and threat modeling starts to appear, as companies realize that implementation-level assessments (pen testing and code review) don’t find everything and in fact may be overlooking the most serious flaws.

Is this the best way to do app sec? I don’t know—but it’s what I see almost everyone doing. In fact, I would argue that the Big 5 are, in reality, as close to an app sec security “standard of care” as we have today.

The 6th thing

It’s important to recognize that all the things in the Big 5 are activities. Many organizations simply start doing these things because they heard about them from someone they trust, the Internet, etc.

But savvy organizations at some point realize that they also need good management to scale and sustain activities such as the Big 5. For example, the Big 5 are heavy on assessment, so those activities will likely produce a big pile of bugs that absolutely need fixing. (This is a good problem to have—temporarily! Don’t fall into the hamster wheel of pain called “penetrate and patch”!) In addition, many other activities in the “most frequent 12” depend on supporting activities for long-term survival (we’ll show explicit linkage in a moment). This includes things such as formal organizational support and roles. It also means deeper integration into the “workflow” of collaborating functions such as risk, audit, incident response, operations, defect management, and so on.

Let’s talk about these “6th things” in more detail, organized around two of the more important aspects that I’ve observed at numerous companies:

Organization

Integration

Organizing for app sec

First, let’s talk about organization, because people are the key ingredient for long-term success with any sustained initiative. Two of my colleagues at Cigital, Gary McGraw and Caroline Wong, published an interesting survey of software security team structures within the BSIMM community that provides some great insights into what works and what doesn’t in this area. I’ve summarized their data in Table 1.

| Org. Struct. | Score | SSG | Sat | Devs | Ratio |
|--------------|-------|-----|-----|--------|-------|
| Services     | 36    | 7   | 7   | 4,825  | 0.3%  |
| Policy       | 41    | 10  | 16  | 8,630  | 0.3%  |
| Hybrid S-P   | 46    | 16  | 16  | 2,300  | 1.4%  |
| Bus. Unit    | 31    | 5   | 27  | 1,650  | 1.9%  |
| Management   | 64    | 19  | 175 | 10,833 | 1.7%  |
| Everyone     | 37    | 15  | 30  | 4,190  | 1.1%  |

Table 1: Summary of McGraw and Wong data on organizational approaches to software security.

This is a busy table, so let me take some time to explain the terms used. Gary and Caroline categorized the organizational structures they found among the BSIMM companies into six types: Services, Policy, Business Unit, Hybrid Services and Policy, Management, and Everyone. You can read their full article for the detailed descriptions of each of these groups, but for now, I’ll summarize by saying that these different approaches run a spectrum between distributed and centralized structures.

These approaches are listed in the first column, “Org. Struct.” The other columns list the following data points, averaged across each group of companies adopting a given organizational approach: “Score” is the average raw BSIMM score (a very rough proxy for more software security activity), “SSG” is the average number of people in the software security group (BSIMM-speak for the team directly accountable for software security), “Sat” is the average number of people indirectly responsible for software security (BSIMM calls this the “satellite” group or network), “Devs” refers to the average number of developers “covered” by each SSG + satellite, and “Ratio” enumerates the fraction of software security people (SSG + satellite) to developers.
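Since “Ratio” is just (SSG + satellite) divided by developers, it’s easy to sanity-check the table. Here’s a quick sketch with the values transcribed from Table 1 (most rows agree to one decimal place; the Management row computes to 1.8% against the reported 1.7%, presumably a rounding difference in the source):

```python
# Recompute the "Ratio" column of Table 1: ratio = (SSG + satellite) / developers.
# Values transcribed from the table above.
rows = [
    # (org structure, SSG, satellite, devs, reported ratio in %)
    ("Services",    7,   7,  4825, 0.3),
    ("Policy",     10,  16,  8630, 0.3),
    ("Hybrid S-P", 16,  16,  2300, 1.4),
    ("Bus. Unit",   5,  27,  1650, 1.9),
    ("Management", 19, 175, 10833, 1.7),
    ("Everyone",   15,  30,  4190, 1.1),
]

for name, ssg, sat, devs, reported in rows:
    computed = 100 * (ssg + sat) / devs
    print(f"{name:<11} computed {computed:.1f}% vs reported {reported}%")
```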

So what to make of all this? Here are my key interpretations of this data (also summarizing additional points made by Gary and Caroline in their original article):

All of the companies in this study formalized app sec roles, whether direct or indirect responsibilities; “making it someone’s job” seems to be an important step.

“Distributed” organizational structures tend to have a higher number of app sec people (direct and indirect) per developer, albeit with lower “raw” scores initially.

Highly centralized app sec teams may start strong but don’t scale well.

Indirect relationships (e.g., deputized developers or “security champions”) are key to scaling.

Last but not least, there are outliers not shown in this data (the “platypuses”) that don’t fall into any of these categories. As always with organizational stuff, you should take care to understand your institutions and culture, and adapt accordingly.

Let me sprinkle this data-driven discussion with some of my anecdotal observations. There is probably no one best way to organize, but rather a set of patterns that have a greater degree of success in given scenarios. For example, in companies that are more hierarchically structured, such as financial services, a top-down approach seems to work. But in more loosely structured (dare I say “agile”?) software companies, bottom-up creates greater momentum. I tend to favor the bottom-up approach, because I see this as having greater resilience.

Also remember that no matter what approach you take, you will almost certainly face a concerted challenge finding qualified people. Finding dev + security in the same person is like finding purple + unicorn in the same horse.

Integrating with other teams/capabilities

In my experience, the Big 5 are rarely done in isolation from other organizational teams or capabilities. There are many activities that support, connect, and enhance the Big 5, when you survey companies that have SSIs/app sec programs.

Table 2 shows data illustrating the highest-frequency “dependencies” of common software security activities on other teams/capabilities within the organization, based on my own analysis of a subset of BSIMM data, and some anecdotal experience.

| Team/Capability      | %  |
|----------------------|----|
| Information Security | 25 |
| GRC                  | 23 |
| Defect Management    | 18 |
| App Sec Portal       | 18 |
| Incident Response    | 14 |
| Project Management   | 14 |
| Legal                | 14 |
| Vendor Management    | 7  |

Table 2: Common integration points between app sec and other teams, by percentage of software security activities that depend on them.

Here are a few of my observations on this data:

Unsurprisingly, partnering with the infosec group turns out to be a significant portion of the dependencies, not the least of which is to ensure that app-hosting environments are safe.

Governance, risk, and compliance are often leveraged by app sec, especially as things get encoded into process and culture, and thus need to be integrated into ongoing compliance routines, etc.

Defect management is the “glue” that drives security into organizational workflow. Bugs are the coin of the realm for development organizations, and all rivers flow into it (pen testing, code review, arch analysis, ops bugs, etc.). As we’ll see with metrics, few programs get far without understanding and managing this stream.
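To make the “all rivers flow into defect management” idea concrete, here is a minimal, purely hypothetical sketch of what that glue can look like: findings from different assessment activities normalized into one tracker-issue shape so they ride the same workflow as any other bug. Every name here (the `Finding` type, the priority mapping, the payload fields) is an illustrative assumption, not any particular tracker’s API:

```python
# Hypothetical sketch: normalize findings from different assessment sources
# (pen testing, code review, architecture analysis, ops) into one generic
# defect-tracker issue format. All names and fields are illustrative.
from dataclasses import dataclass

@dataclass
class Finding:
    source: str    # e.g. "pen-test", "code-review", "arch-analysis", "ops"
    title: str
    severity: str  # "critical" | "high" | "medium" | "low"

# Map security severities onto an assumed tracker priority scheme.
PRIORITY = {"critical": "P1", "high": "P2", "medium": "P3", "low": "P4"}

def to_tracker_issue(f: Finding) -> dict:
    """Convert a security finding into a generic tracker issue payload."""
    return {
        "title": f"[security/{f.source}] {f.title}",
        "priority": PRIORITY[f.severity],
        "labels": ["security", f.source],
    }

issue = to_tracker_issue(Finding("pen-test", "Reflected XSS in search page", "high"))
print(issue["priority"])  # P2
```

The point of the sketch is the design choice, not the code: once every assessment activity emits the same issue shape, development teams triage security bugs with the tooling and metrics they already use.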

An internal app sec “portal” is one of the best tools I’ve seen to define, coordinate, and communicate an app sec program. I’ve seen them range from sophisticated (automated app inventory/state tracking) to rudimentary (policy document repository), but all are effective to their own degree.

Incident response is a critical integration point for those bugs that require real-time collaboration and remediation due to an impending incident.

Project management of course needs to be consulted and partnered closely on integrating security into development lifecycles.

There are several touchpoints with legal, including identifying regulatory requirements that can drive app sec priorities.

Last but not least, especially for organizations that acquire much of their software, it’s important to coordinate with vendor management to ensure that things such as the Big 5 are driven into contracts.

Just do it

I hope the Big 5 + 1 concepts have shown that starting and scaling an SSI/app sec program is straightforward. Don’t make excuses—just do it!

Start doing something about software/app security today if you have not already. Picking from the Big 5 is a great place to start.

If you’ve already got something started, add something you’re not already doing (e.g., more from the Big 5).

Keep doing it, and in parallel add the 6th thing, management, to consolidate, scale, and sustain your initiative.

Good luck, and be sure to share your must-dos on application security in the comments below.
