Dynamic languages like JavaScript, Ruby, Python, Elixir and Clojure (just to name a few) have taken the Web by storm. The development and operations methodologies that accompany these languages, especially on the Web -- that is, Agile and DevOps -- are themselves very dynamic. So far, the impact of dynamic languages and larger-scale dynamism on security, especially software security, has been problematic. But a new approach to security is emerging from the dynamic soup -- and it holds some promise.

How much will the new approach to security in the dynamic programming paradigm help? Are there domains in which this approach should be avoided?

Problems with dynamism

Old school software security relies (a bit too much, we'll admit) on looking for defects in software throughout the SDLC as the software is being built. Software security touchpoints like code review and architecture analysis rely on looking over system artifacts, and problems occur when those artifacts are either not produced or are out of date. It turns out you can't do an architecture analysis if you have not written down your architecture in a meaningful way. Likewise, if your code doesn't stick around for any reasonable period of time, it's pretty hard to check it for bugs.

Constant change and flux are characteristic of systems built with dynamic programming approaches and their associated DevOps stances. Of course, all software changes and evolves over time, but with dynamic software and DevOps, things change and evolve all of the time, on purpose. Security testing in an always-changing environment is a problem unless the security testing is itself agile and dynamic.

The dynamic programming paradigm also makes code review a challenge. Code review in this situation is not impossible, of course, but it requires laser focus, fast turnaround, and a bit too much emphasis for comfort on what is changing (ignoring how changes ripple throughout the code base).

We saw this problem crop up almost twenty years ago in Java, when the notion of dynamic class loading had a direct impact on the idea of bytecode verification. The problem was that until a class was actually loaded, part of the verification function could not be completed. This led to a verifier that sometimes had to wait until runtime to complete its work. Java mostly papered over this issue, but it resulted in some spectacular security failures. Of course, the relationship between Java and JavaScript is pretty much confined to the sequence of four letters: J, A, V and A.
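The runtime-loading gap is easy to demonstrate in any dynamic language. Here is a minimal sketch in Python (an analogy to the Java situation, not Java's actual verifier; the module name and source are invented for illustration): the loaded code simply does not exist in the process until load time, so any check on it must wait until then.

```python
import importlib.util
import types


def load_module_from_source(name: str, source: str) -> types.ModuleType:
    """Build and execute a module from source text at runtime.

    Nothing about `source` can be inspected before this call runs --
    the same gap Java's bytecode verifier faced with dynamic class
    loading: part of verification must wait until runtime.
    """
    spec = importlib.util.spec_from_loader(name, loader=None)
    module = importlib.util.module_from_spec(spec)
    exec(source, module.__dict__)  # any verification has to happen here
    return module


# The code below exists nowhere on disk before this moment.
plugin = load_module_from_source("plugin", "def greet():\n    return 'hi'")
print(plugin.greet())  # -> hi
```

A static reviewer looking at this program sees only a string being passed around; the behavior it needs to vet materializes at runtime.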
But JavaScript itself has become the de facto programming language for the Web, especially on the client side. Its dynamism becomes a problem for security when an application is not really assembled until it is ready to run, and many of the parts being assembled are fetched in real time from all over the Web. The problem here is pretty obvious: until the application has been put together, checking it for security problems can't really take place. Put more simply, code review only works when what you end up running is the same code you checked during security analysis. That means a new approach to security -- one that takes massive dynamism into account -- is required. Modern tooling for secure code review is just starting to take dynamism into account, but it is not yet in widespread use. Of course, finding potential security problems in dynamic systems is one thing; actually fixing them is something else entirely. There is plenty of writing about how security issues can be found in dynamic systems, but not enough about how they are fixed or about the pitfalls of various fix approaches.
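One partial mitigation for the "review what you run" problem is integrity pinning, in the spirit of the browser's Subresource Integrity mechanism: record a digest of the code at review time and refuse to run anything that differs. The sketch below is a toy illustration (the digest value and `handler` function are hypothetical, and real deployments would manage digests out of band), not a complete solution:

```python
import hashlib

# Digest recorded at review time (computed inline here for illustration;
# in practice it would be pinned in a manifest produced by the reviewers).
REVIEWED_SOURCE = b"def handler():\n    return 'ok'\n"
REVIEWED_SHA256 = hashlib.sha256(REVIEWED_SOURCE).hexdigest()


def run_if_reviewed(fetched_source: bytes) -> str:
    """Refuse to execute code whose digest differs from the reviewed version.

    This enforces the property the article calls out: the code you end up
    running must be byte-for-byte the code that was security-analyzed.
    """
    digest = hashlib.sha256(fetched_source).hexdigest()
    if digest != REVIEWED_SHA256:
        raise RuntimeError("fetched code does not match the reviewed version")
    namespace: dict = {}
    exec(fetched_source, namespace)
    return namespace["handler"]()


print(run_if_reviewed(b"def handler():\n    return 'ok'\n"))  # -> ok
```

Pinning only works for parts that are stable enough to pin, which is exactly why highly dynamic assembly remains hard to review.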

Managing the chaos by becoming chaotic

By turning the entropy (and unpredictability) associated with dynamism on its head, we can salvage some aspects of security in dynamic systems. There are several ways this can work.

As it turns out, moving targets are harder to hit than targets that stand completely still. So massive dynamism that constantly churns targets can lead to a security advantage in some situations. This is especially true if you are willing to allow parts of your system to fall prey to attacks as the price of saving the group as a whole. If you've seen the schooling behavior of bottom-of-the-food-chain fish faced with a predator, you know what I mean: by behaving as an emergent system with unpredictable dynamics, most of the members of a school can evade predators, even as some individual fish are eaten.

Google uses this moving-target approach on its constantly evolving ecosystem. Instead of running the exact same image on all client machines in its massive install base (and laboriously patching them all in lockstep), Google mixes things up, runs experiments and uses dynamism to its advantage. The application image moves, and there are multiple images going at once. Sure, a few million users may suffer temporary setbacks, but by and large Google users are better protected on the whole. Note that this conception of "running faster than the attacker" can be applied at different levels inside a system; distinct applications, or perhaps even code modules, might leverage the approach as well.

Another approach that leverages dynamism for security advantage involves making tests and test cases as dynamic and automated as possible. For several years, Netflix has used a system called Chaos Monkey based on the notion of fault injection in real running systems.
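The fault-injection notion itself fits in a few lines. Here is a toy sketch (emphatically not Netflix's implementation; the fleet, kill probability, and instance names are all invented): instances are randomly terminated while the survivors keep serving, so resilience is exercised as a matter of routine rather than in a special test phase.

```python
import random


class Instance:
    """A stand-in for a running service instance."""

    def __init__(self, name: str):
        self.name = name
        self.alive = True

    def serve(self) -> str:
        if not self.alive:
            raise RuntimeError(f"{self.name} is down")
        return f"{self.name}: ok"


def chaos_step(fleet: list[Instance], kill_probability: float,
               rng: random.Random) -> list[str]:
    """Randomly terminate instances, then route requests to the survivors.

    This is the Chaos Monkey idea in miniature: inject failures constantly
    so the system's tolerance for failure is tested in production-like
    conditions all the time.
    """
    for instance in fleet:
        if rng.random() < kill_probability:
            instance.alive = False  # injected fault
    survivors = [i for i in fleet if i.alive]
    return [i.serve() for i in survivors]


rng = random.Random(42)  # seeded for repeatability in this sketch
fleet = [Instance(f"web-{n}") for n in range(5)]
responses = chaos_step(fleet, kill_probability=0.3, rng=rng)
print(f"{len(responses)} of {len(fleet)} instances survived and served")
```

The interesting engineering, of course, is everything this sketch leaves out: detecting the injected failures, recovering from them, and feeding what was learned back into the system.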
The notion is to fail often by testing all the time on production systems, fix the problems that are discovered, and in this way create systems that are more resilient to failure. Netflix also has a system called Security Monkey that monitors and probes security configurations (which, in a DevOps paradigm, constantly change). Security Monkey was created expressly because of dynamism. According to Netflix: "Code is deployed thousands of times a day, and cloud configuration parameters are modified just as frequently. To understand and manage the risk associated with this velocity, the security team needs to understand how things are changing and how these changes impact our security posture."

The Azure team at Microsoft also uses automation in the form of malware injection and penetration testing to probe its dynamic cloud services environment. The bottom line when it comes to security and dynamic systems is this: automation, constant testing and judicious use of fault injection work, even as moving-target advantages accrue. There is plenty of room for more exposition about tooling and testing in dynamic systems. For now, we'll leave you with this thought: ultimately, design and code review tools need to be refactored for dynamic programming paradigms.
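As a parting illustration of the Security Monkey idea -- machine-diffing security configuration that changes too fast for humans to track -- here is a toy sketch (not Netflix's actual tool; the configuration keys and values are invented for the example):

```python
def diff_config(previous: dict, current: dict) -> list[str]:
    """Report configuration changes between two security snapshots.

    A miniature of what Security Monkey automates: when configuration
    changes thousands of times a day, the diffs that matter for security
    posture must be computed and flagged by machine.
    """
    alerts = []
    for key in previous.keys() | current.keys():  # union of both snapshots
        before, after = previous.get(key), current.get(key)
        if before != after:
            alerts.append(f"{key}: {before!r} -> {after!r}")
    return sorted(alerts)


baseline = {"ssh_open_to_world": False, "tls_min_version": "1.2"}
observed = {"ssh_open_to_world": True, "tls_min_version": "1.2"}
for alert in diff_config(baseline, observed):
    print("ALERT", alert)  # flags the newly exposed SSH setting
```

A real monitor would also score the changes, route the alerts, and track drift over time; the diff is merely the seed of the idea.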