Nobody really writes their own code anymore, right? We go out to GitHub, download some libraries, avoid recreating unnecessary wheels, and package those wheels together along with our own glue to create new software. Then we download a half dozen front-end frameworks to make it all pretty and responsive, and we're off to the races. In my review of apps, both in my company and others, I've found that more than 90% of the code that makes up an app these days is something we borrowed rather than wrote ourselves.

Most of us scan our own code for flaws with static analysis tools, but what about all the stuff we didn't write? How do we know what's actually there? Once you find out what's in there, what actions do you take to either clean it up or keep it fresh? How do you avoid getting pwned because you let a nasty in through the back door with that whiz-bang library that does the really cool thing you couldn't live without?

The old way

I've been programming for almost 20 years now, long enough to see the evolution from traditional waterfall or spiral models of program planning to Extreme Programming, Agile, and now DevOps models.

In the past, long development cycles, lack of any real training, and a vacuum of tools to identify security-based defects meant that security assessments were done primarily in later stages of the software development lifecycle and mostly as a manual exercise. Usually the impetus for a review was an audit or a customer asking for assurance that their data was safe in your systems (which happened even less frequently).

Given this infrequent and ad hoc approach toward security assessment, security-based defects and the tests to find them were often deprioritized. Information security groups focused on "findings," running tools to produce reports when asked to assuage the auditors that things were okay. Features and functionality took priority over resolving a defect that normal users would never see, because "nobody" would ever really find it, right?

Add to this scenario the fact that, even though new technology workers were added to the market daily, few were trained in defensive coding practices, and you can see how we could end up with a problem.

The good news is there are now real strategies to bring clarity to the problem and ways to resolve it.

The new way

Today's world is one of automation and continuous iteration. We call the process "DevOps" because it melds the development of software and the definition and automation of infrastructure to create models for deployment and operations that are self-enabled.

When we add security to that, we apply the same expectation that we automate everything and define the patterns and process such that they can be repeated continuously. We end up with what I like to call "DevSecOps."

The key in this new approach is to "shift left," moving security testing and open source composition efforts away from late-stage production and towards design and development.

Just like in DevOps, where developers are enabled to define software-based architectures, version them, and deploy them using automation, DevSecOps gives those same developers tools, technology, and processes to do the same for software security.

Step 1: Start in Design

The quickest way open source gets an app off the rails is by not considering the app's makeup before we start coding. Are you still using a two-year-old copy of Struts for your new app just because it's what's already on your workstation, left over from the previous 10 projects you did? Each time you start a new project, make sure you're using the freshest, most trusted version of the frameworks you rely on. Use free or inexpensive tools like SourceClear to identify the bill of materials (BOM) in your app and be sure it makes the grade before you start work. It will save you headaches later.
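To make the idea concrete, here is a minimal sketch of generating a BOM from a pip-style requirements file and auditing it against an approved-versions list. The package names and the `APPROVED` table are hypothetical examples, not real policy data, and a real tool like SourceClear would do far more (transitive dependencies, known CVEs); this just shows the shape of the check.

```python
# Hypothetical approved minimum versions; real policy would come from
# a vulnerability database, not a hard-coded dict.
APPROVED = {
    "struts-shim": "2.5.0",
    "requests": "2.20.0",
}

def ver(v):
    """Parse '2.18.0' into a comparable tuple (2, 18, 0)."""
    return tuple(int(p) for p in v.split("."))

def parse_requirements(text):
    """Turn 'name==version' lines into (name, version) pairs."""
    bom = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        if "==" in line:
            name, version = line.split("==", 1)
            bom.append((name.strip(), version.strip()))
        else:
            bom.append((line, None))  # unpinned: a red flag in itself
    return bom

def audit(bom):
    """Flag unpinned entries and versions older than the approved minimum."""
    findings = []
    for name, version in bom:
        if version is None:
            findings.append(f"{name}: unpinned version")
        elif name in APPROVED and ver(version) < ver(APPROVED[name]):
            findings.append(
                f"{name}: {version} is older than approved {APPROVED[name]}"
            )
    return findings

sample = """
requests==2.18.0
struts-shim==2.5.3
flask
"""
print(audit(parse_requirements(sample)))
```

Run against the sample manifest, this flags the stale `requests` pin and the unpinned `flask` entry while letting the up-to-date dependency through, which is exactly the "does it make the grade before you start work" question from above.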

Step 2: Automate All.The.Things.

As a developer, few things can be more irritating than someone coming to me with yet another thing to do in my already overloaded day. If you expect developers to use a tool, look on a website, or ask someone for permission each time they deploy, they will inevitably find a way to avoid it.

On the other hand, if you can automate the process of running that thing, or checking that list, or notifying that group, then developers stay focused on what makes the company money or adds value for a customer. InfoSec likes to say "security is everyone's job" but often forgets that adding value comes first. If we can't add value for our customers or our shareholders, there won't be anything to secure.

The key here is to do it out-of-band and make it transparent. It must be asynchronous and invisible or it will become somebody's pain point.
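As a sketch of what "out-of-band" means in practice, the snippet below kicks off a scan on a background thread and reports to a notification channel while the deploy pipeline keeps moving. `run_scan` and `notify_team` are hypothetical stand-ins for a real static analyzer and a chat or ticketing integration.

```python
import threading
import queue

# Stand-in for a chat channel or ticket queue the team actually watches.
results = queue.Queue()

def run_scan(app):
    """Pretend scan; in reality this would shell out to a static analyzer."""
    return {"app": app, "critical_defects": 0}

def notify_team(report):
    """Stand-in for posting results to chat or filing a ticket."""
    results.put(report)

def scan_async(app):
    """Start the scan in the background; the deploy pipeline moves on."""
    t = threading.Thread(target=lambda: notify_team(run_scan(app)), daemon=True)
    t.start()
    return t

t = scan_async("billing-service")
t.join()  # the pipeline wouldn't wait; we join here only to show the result
print(results.get())
```

The point of the design is that the developer never sits at a prompt waiting on security tooling; results arrive asynchronously in a channel the team already reads, which is what keeps the check from becoming somebody's pain point.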

Step 3: Develop "good citizens" not "good builds"

Last, for DevSecOps to truly be achieved, we have to change the mindset around InfoSec policy and the deployment processes tied to it.

Most deployment models that involve security have in the past looked something like:

Design –> Code –> Integration test –> QA –> InfoSec Review –> Production

But, when DevOps cycles are potentially only hours or even minutes, how do you get InfoSec to review before production? The answer: You don't.

Let me put it this way: If developers are getting good information early on because you automated static code analysis, and you provided an automated method for generating a BOM for your open source frameworks, and you have been providing all of this as early as development or even design, then what exactly are you testing in the build?

You're simply asking whether they took action on what they already knew.

Now ask yourself the question, "Do you trust them?"

See, if you're providing information early enough, when it comes time for deployment "InfoSec Review" can actually be REMOVED!

WHAT!?

It's true. At this point your change control process can simply ask:

"Have the automated security reviews for this application been completed at every stage?"

"Have developers been resolving critical defects as expected on a consistent basis?"

"Did the last completed assessment measure up to our expectations (policy)?"

If you can answer yes to these questions, then what you're doing is trusting that dev teams are acting as model citizens, being careful to produce good quality, and (given good intel) behaving in a responsible way. Our new model looks like this:

Informed Design –> Automated code review –> Integration test –> QA –> Good Citizen? –> DEPLOY!

Jeremy Anderson will be giving a more in-depth review of this process in his talk, Securing The Other 97% of Your App, at OSCON 2017 in Austin, Texas. If you’re interested in attending the conference, use this discount code when you register: PCOS.