Gary McGraw


Gary McGraw discusses the security risks of dynamic code

Gary McGraw says secure software development gets tricky when your programming environment shifts like sand.

Dynamic languages like JavaScript, Ruby, Python and Clojure (to name just a few) have taken the Web by storm. The development and operations methodologies typically used alongside these languages, especially on the Web -- that is, Agile and DevOps -- are themselves very dynamic. So far, the impact of dynamic languages and larger-scale dynamism on security, especially software security, has been problematic. But a new approach to security is emerging from the dynamic soup -- and it holds some promise.

How much will the new approach to security in the dynamic programming paradigm help? Are there domains in which this approach should be avoided?

Problems with dynamism

Old school software security relies (a bit too much, we'll admit) on looking for defects in software throughout the SDLC as it is being built. Software security touchpoints like code review and architecture analysis rely on looking over system artifacts. Problems occur when those artifacts are either not produced or are out of date. It turns out you can't do an architecture analysis if you have not written down your architecture in a meaningful way. Likewise, if your code doesn't stick around for any reasonable period of time, it's pretty hard to check it for bugs.

Constant change and flux characterize systems built with dynamic programming approaches and their associated DevOps stances. Of course, all software changes and evolves over time, but with dynamic software and DevOps, things change and evolve all of the time on purpose. Security testing in an always-changing environment is a problem unless the security testing is itself agile and dynamic.

By its very nature, the dynamic programming paradigm makes code review a challenge. Code review in this situation is not impossible, of course, but it requires laser focus, fast turnaround and an uncomfortable emphasis on what is changing (while ignoring how those changes ripple throughout the code base).

We saw this problem crop up almost twenty years ago in Java when the notion of dynamic class loading had a direct impact on the idea of bytecode verification. The problem was that until a class was actually loaded, part of the verification function could not be completed. This led to a verifier that sometimes had to wait around until runtime to complete its work. Java mostly papered over this issue, but it resulted in some spectacular security failures.

Of course, the relationship between Java and JavaScript is pretty much confined to the sequence of four letters: J, A, V and A. But JavaScript itself has become the de facto programming language for the Web, especially on the client side. JavaScript's dynamism becomes a security problem when an assembly is not really put together until it is ready to run, and many of the parts being assembled are fetched in real time from all over the Web. The problem here is pretty obvious: Until the assembly has been put together, checking it for security problems can't really take place. Put more simply, code review only works when the code you end up running is the same code you checked during security analysis.

That means a new approach to security -- an approach that takes into account massive dynamism -- is required. Modern tooling for secure code review is just starting to take dynamism into account, but it is not yet in widespread use.

Of course, finding potential security problems in dynamic systems is one thing, and actually fixing them is something else entirely. There's quite a bit of content around how security issues can be found in dynamic systems, but not enough about how they get fixed, or about the pitfalls of various fix approaches.

Managing the chaos by becoming chaotic

By turning entropy (and unpredictability) associated with dynamism on its head, we can salvage some aspects of security in dynamic systems. There are several ways this can work.

As it turns out, moving targets are harder to hit than targets that stand completely still. So massive dynamism that constantly churns targets can lead to a security advantage in some situations. This is especially true if you are willing to allow parts of your system to fall prey to attacks as a side effect of saving the group as a whole.

If you've seen the schooling behavior of "bottom of the food chain" fish when faced with a predator, you know what I mean here. By behaving as an emergent system with unpredictable dynamics, most members of a school of fish evade predators, even as some individual fish are eaten. Google uses this moving target approach in its constantly evolving ecosystem. Instead of running the exact same image on every client machine in its massive install base (and laboriously patching them all in lockstep), Google mixes things up, runs experiments and uses dynamism to its advantage. The application image moves, and multiple images run at once. Sure, a few million users may suffer temporary setbacks, but by and large Google users are better protected.
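The moving-target idea can be sketched in a few lines. In this toy example (entirely illustrative, not Google's mechanism), each deployed instance randomly picks one of several functionally equivalent variants of the same routine, so an exploit tuned to one variant's internals does not transfer across the fleet.

```javascript
// Three functionally equivalent "variants" of the same routine. Their
// observable behavior is identical; their internals differ.
const variants = [
  (xs) => xs.reduce((a, b) => a + b, 0),                        // left fold
  (xs) => xs.slice().reverse().reduce((a, b) => a + b, 0),      // reversed traversal
  (xs) => { let s = 0; for (const x of xs) s += x; return s; }, // imperative loop
];

// Each deployment picks a variant at random -- the "moving target".
function deployInstance() {
  const pick = Math.floor(Math.random() * variants.length);
  return { variant: pick, sum: variants[pick] };
}

// All instances agree on behavior even though their internals differ.
const fleet = Array.from({ length: 5 }, deployInstance);
console.log(fleet.every(i => i.sum([1, 2, 3]) === 6)); // true
```

Real diversity schemes work at lower levels (address-space layout, compiler-generated variants, staggered images), but the invariant is the same: identical contract, unpredictable internals.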

Note that this conception of "running faster than the attacker" can be applied at different levels inside of a system. For example, distinct applications, or perhaps even code modules, might leverage this approach as well.

Another approach that leverages dynamism to security advantage involves making tests and test cases as dynamic and automated as possible. For several years, Netflix has used a system called Chaos Monkey based on the notion of fault injection in real running systems. The idea is to fail often by testing all the time on production systems, fix the problems that are discovered and, in this way, create systems that are more resilient to failure.
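The fault-injection loop can be sketched as follows. This is a deliberately tiny model (the names are illustrative, not Netflix's actual API): randomly disable one replica in a running fleet, then verify the service still answers via the survivors.

```javascript
// A fleet of identical replicas, all initially healthy.
function makeFleet(n) {
  return Array.from({ length: n }, (_, id) => ({ id, up: true }));
}

// Inject a fault: pick a random replica and take it down, as if in production.
function chaosMonkey(fleet) {
  const victim = fleet[Math.floor(Math.random() * fleet.length)];
  victim.up = false;
  return victim.id;
}

// Naive failover: route the request to the first healthy replica.
function handleRequest(fleet) {
  const healthy = fleet.find(r => r.up);
  if (!healthy) throw new Error('total outage');
  return `served by replica ${healthy.id}`;
}

const fleet = makeFleet(3);
chaosMonkey(fleet);              // one replica is now down
console.log(handleRequest(fleet)); // still served, by a survivor
```

The point of running this continuously in production rather than in a lab is that the failover path is exercised under real load, so a broken failover is discovered by the monkey rather than by an attacker or an outage.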

Netflix also has a system called Security Monkey that monitors and probes security configurations (which in a DevOps paradigm constantly change). Security Monkey was created expressly because of dynamism. According to Netflix:

Code is deployed thousands of times a day, and cloud configuration parameters are modified just as frequently. To understand and manage the risk associated with this velocity, the security team needs to understand how things are changing and how these changes impact our security posture.
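The core of the monitoring Netflix describes is configuration drift detection. Here is a minimal sketch (illustrative only, not Security Monkey's real code or schema): snapshot a security configuration as a baseline, then report any keys whose values have drifted.

```javascript
// Compare a current configuration against a reviewed baseline and report
// every key that was added, removed or changed.
function diffConfig(baseline, current) {
  const changes = [];
  for (const key of new Set([...Object.keys(baseline), ...Object.keys(current)])) {
    if (baseline[key] !== current[key]) {
      changes.push({ key, was: baseline[key], now: current[key] });
    }
  }
  return changes;
}

// Hypothetical security settings captured at review time...
const baseline = { 'ssh.port': 22, 'firewall.default': 'deny', 'tls.minVersion': '1.2' };
// ...and the same settings as observed later in a fast-moving environment.
const current  = { 'ssh.port': 22, 'firewall.default': 'allow', 'tls.minVersion': '1.2' };

console.log(diffConfig(baseline, current));
// [ { key: 'firewall.default', was: 'deny', now: 'allow' } ]
```

In an environment where configuration changes thousands of times a day, running this diff continuously and alerting on security-relevant keys is what turns velocity from a blind spot into a monitored signal.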

The Azure team at Microsoft also uses automation in the form of malware injection and penetration testing to probe its dynamic cloud services environment.

The bottom line when it comes to security and dynamic systems is this: Automation, constant testing and judicious use of fault injection work, even as "moving target" advantages accrue. There is plenty of room for more exposition here about tooling and testing in dynamic systems. For now, we'll leave you with this thought: Ultimately, design and code review tools need to be refactored for dynamic programming paradigms.

A range of security assurance options

Of course, constantly evolving and changing code does not work in all domains. Take nuclear power plant control code or airplane control code, for example. The best way to secure critical code like that remains to eschew complexity and embrace formalism to whatever extent possible. Asking questions like, "What is the smallest machine I can use to solve my control problem?" is a good idea. Extensibility is the enemy.

Dynamic programming language paradigms are thus to be found at the "loose" end of the security assurance range, where automation and dynamism are the most useful. At the dynamic end, you run like crazy and hope to stay out in front of attackers. On the other end of the range are formalism, provability and simplicity; stodgy and careful engineering rule the day on the high assurance end. The trick in the real world, where your code probably resides, is determining which parts of the whole system should exist in which parts of the assurance range.


This was last published in August 2015


Are you concerned about the programming languages used in your organization?
After an excellent treatment of the vagaries of protecting dynamic code, Gary ends by observing that dynamic approaches are not appropriate in all cases.

That immediately raised the question in my mind: Are the development staff (who make the choices about technical platform) in sync with (or even communicating with) the corporate risk management and security people on this topic?

If our customers' PII or financial information is at stake, is a school-of-fish approach to security acceptable?  Are we OK with *some* of our customers' information potentially being exposed?

I'd bet that these sorts of important discussions are rare indeed!

I agree completely.  Good communication between dev, security and risk management is critical.  Technical choices have a very big impact on security posture.
@Alan - I know that, historically, our development staff has not been in sync or in contact with corporate risk management or security. Once we adopted a Scrum approach, they were pretty much given carte blanche to select and use the tools, including languages, that they wanted to. This led to additional security risks with an explosion of technologies (at last count nearing 600 different technologies in our stack, not counting the dependencies those technologies have) that are not updated or patched as needed. I anticipate that, with the addition of the CISO role here and his focus on identifying existing security risks, development will soon be communicating and in sync with security.
One would hope that happens, mcorum. Many firms today have explicit software security initiatives that enable better understanding of tech stacks, security debt, etc. See the BSIMM.
You'd think formalism and provability would be doing really well in airlines, but turns out they still can't get their processes right. They still make stupid mistakes like having access to computing systems on commercial airliners under passenger seats with a default password. Maybe it's just me, but somehow I'm less worried about code than process.
I wanted to agree with this article, but I failed to see how dynamic languages were actually a problem, given the solutions offered.  

I think it makes sense to scan code for static issues, and this can be done at more than just one point in the lifecycle. I kind of felt this article was dancing around the topic a bit :/
Is that the airlines' fault, or the aircraft makers'? I never got a clear picture of that problem.
hi Veretax, there are many firms who have already adopted static analysis solutions for non-dynamic code that will not work in the new world. The article was mostly aimed at those firms. You are right that the suggested approaches can help solve the problem.