How much will the new approach to security in the dynamic programming paradigm help? Are there domains in which this approach should be avoided?
Problems with dynamism
Old school software security relies (a bit too much, we'll admit) on looking for defects in software throughout the SDLC as it is being built. Software security touch points like code review and architecture analysis rely on looking over system artifacts. Problems occur when the artifacts are either never produced or are out of date. Turns out you can't do an architecture analysis if you have not written down your architecture in a meaningful way. Likewise, if code doesn't stick around in a stable form for any reasonable period of time, it's pretty hard to check it for bugs.
Constant change and flux are characteristic of systems built with dynamic programming approaches and their associated DevOps stances. Of course, with all software, things change and evolve over time, but with dynamic software and DevOps, things change and evolve all of the time on purpose. Security testing in an always-changing environment can be a problem unless the security testing is itself agile and dynamic.
The dynamic programming paradigm makes code review a challenge. Code review in this situation is not impossible, of course, but it requires laser focus, fast turnaround, and a bit too much emphasis for comfort on what is changing (while ignoring how changes ripple throughout the code base).
We saw this problem crop up almost twenty years ago in Java when the notion of dynamic class loading had a direct impact on the idea of bytecode verification. The problem was that until a class was actually loaded, part of the verification function could not be completed. This led to a verifier that sometimes had to wait around until runtime to complete its work. Java mostly papered over this issue, but it resulted in some spectacular security failures.
That means a new approach to security -- an approach that takes into account massive dynamism -- is required. Modern tooling for secure code review is just starting to take dynamism into account, but it is not yet in widespread use.
Of course, finding potential security problems in dynamic systems is one thing, and actually fixing them is something else entirely. There's quite a bit of content around how security issues can be found in dynamic systems, but not enough having to do with how they are fixed or any related pitfalls with fix approaches.
Managing the chaos by becoming chaotic
By turning entropy (and unpredictability) associated with dynamism on its head, we can salvage some aspects of security in dynamic systems. There are several ways this can work.
As it turns out, moving targets are harder to hit than targets that stand completely still. So massive dynamism that constantly churns targets can lead to a security advantage in some situations. This is especially true if you are willing to allow parts of your system to fall prey to attacks as a side effect of saving the group as a whole.
If you've seen the schooling behavior of "bottom of the food chain" fish when faced with a predator, you know what we mean here. By behaving as an emergent system with unpredictable dynamics, most of the members of a school of fish can evade predators, even as some individual fish are eaten. Google uses this moving target approach on its constantly evolving ecosystem. Instead of running the exact same image on all client machines in Google's massive install base (and laboriously patching them all in lockstep fashion), Google mixes things up, runs experiments, and uses dynamism to its advantage. The application image moves and there are multiple images going at once. Sure, a few million users may suffer temporary setbacks, but by and large Google users are better protected on the whole.
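To make the moving-target idea concrete, here is a minimal sketch (our own illustration, not Google's actual deployment machinery; the variant names and bucketing scheme are made up) of why running several image variants at once limits exposure: an exploit that works against one variant only reaches the fraction of the install base running that variant.

```python
# Illustrative sketch of the "moving target" idea: spread clients across
# several concurrently running image variants so an attack tailored to one
# variant only hits a slice of the install base. All names are hypothetical.
IMAGE_VARIANTS = ["app-1.4.2-a", "app-1.4.2-b", "app-1.4.2-c"]

def assign_image(client_id: int) -> str:
    """Deterministically bucket a client into one of the live variants."""
    return IMAGE_VARIANTS[client_id % len(IMAGE_VARIANTS)]

def exposed_fraction(clients, vulnerable_variant: str) -> float:
    """Fraction of clients an attack on a single variant could reach."""
    hits = sum(1 for c in clients if assign_image(c) == vulnerable_variant)
    return hits / len(clients)

clients = range(10_000)
# With three variants live, any single exploit reaches only about a third
# of users -- those users "take one for the school," the rest swim on.
print(exposed_fraction(clients, "app-1.4.2-b"))
```

In a real fleet the assignment would also churn over time (re-bucketing clients as new variants roll out), which is what makes the target move rather than merely fragment.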
Note that this conception of "running faster than the attacker" can be applied at different levels inside of a system. For example, distinct applications, or perhaps even code modules, might leverage this approach as well.
Another approach that leverages dynamism towards security advantage involves making tests and test cases as dynamic and automated as possible. For several years, Netflix has used a system called Chaos Monkey based on the notion of fault injection in real running systems. The notion is to fail often by testing all the time on production systems, fix problems that are discovered, and in this way create systems that are more resilient to failure.
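The fault-injection idea can be sketched in a few lines. This is a toy model of the Chaos Monkey concept as described above, not Netflix's actual implementation: randomly terminate instances in a running fleet, then rely on a supervisor to replace them, so recovery paths are exercised constantly instead of only during real outages.

```python
import random

# Toy sketch of Chaos Monkey-style fault injection (instance ids and the
# supervisor logic are hypothetical): kill instances at random, heal, repeat.
class Fleet:
    def __init__(self, size: int):
        self.instances = {f"i-{n}" for n in range(size)}

    def terminate(self, instance: str) -> None:
        self.instances.discard(instance)

    def heal(self, target_size: int) -> None:
        """Supervisor: launch replacements until the fleet is back to size."""
        n = 0
        while len(self.instances) < target_size:
            self.instances.add(f"i-new-{n}")
            n += 1

def chaos_round(fleet: Fleet, kill_probability: float = 0.1) -> None:
    """One round of chaos: each instance has a small chance of being killed."""
    for instance in list(fleet.instances):
        if random.random() < kill_probability:
            fleet.terminate(instance)

fleet = Fleet(size=20)
for _ in range(100):            # fail often, on purpose...
    chaos_round(fleet)
    fleet.heal(target_size=20)  # ...and confirm the system recovers each time
print(len(fleet.instances))     # prints 20: back at full strength
```

The point of running this against production rather than a test rig is that the recovery machinery itself is what gets tested; a healing path that only runs in staging is a healing path you don't actually know works.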
Netflix also has a system called Security Monkey that monitors and probes security configurations (which in a DevOps paradigm constantly change). Security Monkey was created expressly because of dynamism. According to Netflix:
Code is deployed thousands of times a day, and cloud configuration parameters are modified just as frequently. To understand and manage the risk associated with this velocity, the security team needs to understand how things are changing and how these changes impact our security posture.
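The core mechanism the Netflix quote describes can be sketched simply (this is our own minimal illustration, not Security Monkey itself, and the configuration keys are hypothetical): snapshot security-relevant settings, diff each new snapshot against the last known-good baseline, and flag what changed.

```python
# Minimal sketch of configuration drift detection: given a known-good
# baseline and a current snapshot, report what was added, removed, or changed.
def config_drift(baseline: dict, current: dict) -> dict:
    return {
        "added":   {k: current[k] for k in current.keys() - baseline.keys()},
        "removed": {k: baseline[k] for k in baseline.keys() - current.keys()},
        "changed": {k: (baseline[k], current[k])
                    for k in baseline.keys() & current.keys()
                    if baseline[k] != current[k]},
    }

# Hypothetical cloud security settings, for illustration only.
baseline = {"ssh_port_open": False, "bucket_public": False, "mfa_required": True}
current  = {"ssh_port_open": True,  "bucket_public": False}

drift = config_drift(baseline, current)
print(drift["changed"])  # {'ssh_port_open': (False, True)}
print(drift["removed"])  # {'mfa_required': True}
```

With thousands of deploys a day, a diff like this has to run continuously and feed into risk triage; the hard part is not computing the drift but deciding which changes actually move the security posture.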
The bottom line when it comes to security and dynamic systems is this: automation, constant testing and judicious use of fault injection work even as "moving target" advantages accrue. There is plenty of room for more exposition here about tooling and testing in dynamic systems. For now, we'll leave you with this thought: Ultimately, design and code review tools need to be refactored for dynamic programming paradigms.
A range of security assurance options
Of course, constantly evolving and changing code does not work in all domains. Take nuclear power plant control code or airplane control code, for example. The best way to secure critical code like that remains to eschew complexity and embrace formalism to whatever extent possible. Asking questions like, "What is the smallest machine I can use to solve my control problem?" is a good idea. Extensibility is the enemy.
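The "smallest machine" question can be made concrete with a deliberately tiny example of our own devising (the valve controller here is hypothetical): model the control problem as a fixed, closed finite state machine. Because the transition table is small and total, every state/input pair can be checked exhaustively, and there is nothing extensible for an attacker to extend.

```python
# Hypothetical control problem: a valve with a fixed, closed state machine.
# No plugins, no dynamic dispatch -- small enough to verify exhaustively.
STATES = {"CLOSED", "OPENING", "OPEN", "CLOSING"}
INPUTS = {"open_cmd", "close_cmd", "limit_hit"}

TRANSITIONS = {
    ("CLOSED",  "open_cmd"):  "OPENING",
    ("OPENING", "limit_hit"): "OPEN",
    ("OPENING", "close_cmd"): "CLOSING",
    ("OPEN",    "close_cmd"): "CLOSING",
    ("CLOSING", "limit_hit"): "CLOSED",
    ("CLOSING", "open_cmd"):  "OPENING",
}

def step(state: str, event: str) -> str:
    """Unlisted state/input pairs are explicit no-ops; nothing is open-ended."""
    return TRANSITIONS.get((state, event), state)

# Exhaustive check over the entire behavior space -- all 12 pairs.
assert all(step(s, i) in STATES for s in STATES for i in INPUTS)
```

This is the opposite of the dynamic end of the spectrum: instead of outrunning the attacker, you shrink the machine until its complete behavior fits on one screen and can be formally argued over.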
Dynamic programming language paradigms are thus to be found at the "loose" end of the security assurance range, where automation and dynamism are the most useful. At the dynamic end, you run like crazy and hope to stay out in front of attackers. On the other end of the range are formalism, provability and simplicity; stodgy and careful engineering rule the day on the high assurance end. The trick in the real world, where your code probably resides, is determining which parts of the whole system should exist in which parts of the assurance range.
Learn more about using your own risk management framework when making software development decisions.