Two major challenges face software security as a discipline that finds itself in its gangly adolescence. The first is making sure that practitioners fix the security defects they find, no matter what the category. The second is scaling to cover enterprise-wide portfolios (completely, and perhaps in an enlightened risk-based fashion). Fortunately, we have been making progress on both of these challenges.
Fix what you find
Application security technologists and industry analysts endlessly debate methods for finding security defects in software. SAST, DAST, IAST and RASP, all acronyms sanctioned by Gartner Inc., are thrown around in this battle. Sadly, technologically speaking, there is no ultimate winner. That's because each approach has strengths and weaknesses, and the methods tend to complement one another.
SAST, or static application security testing, has come a long way since Cigital introduced its ITS4 tool in 1999. Scaling SAST is a challenge that can be met in two ways. The first is by building a factory around an industrial-strength tool like Coverity, IBM AppScan Source or HP/Fortify. The second is to adopt an IDE-based desktop tool like Cigital SecureAssist. Aetna CISO Jim Routh and I discuss SAST and scalability elsewhere on SearchSecurity. (Incidentally, the story of SAST and technology transfer is a very good one.)
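To make the static idea concrete, here is a toy sketch in the spirit of ITS4's original approach: pattern-matching risky C library calls in source text and reporting the offending line numbers. This is an illustration only (the `RISKY_CALLS` table and advice strings are my own); real SAST tools parse code and model data flow rather than grepping for names.

```python
import re

# Toy SAST sketch: flag risky C calls by name. Real tools build parse trees
# and track data flow; this illustrates the core payoff of static analysis --
# findings point at exact lines in the code, which is where fixes happen.
RISKY_CALLS = {
    "strcpy": "unbounded copy; consider a bounded alternative",
    "gets": "no bounds check; use fgets instead",
    "sprintf": "unbounded format; use snprintf instead",
}

def scan_source(source: str) -> list:
    """Return (line_number, call, advice) tuples for risky call sites."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for call, advice in RISKY_CALLS.items():
            if re.search(rf"\b{call}\s*\(", line):
                findings.append((lineno, call, advice))
    return findings

c_snippet = """\
char buf[16];
gets(buf);
strcpy(dst, src);
"""
for lineno, call, advice in scan_source(c_snippet):
    print(f"line {lineno}: {call} -- {advice}")
```

Note what even this crude version gives you: a file and line number, which is exactly the information a developer needs to fix the defect rather than merely know about it.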
DAST, or dynamic application security testing, is black-box testing from the outside looking in. Generally speaking, DAST tools are only available for software that uses simple communication protocols like HTTP. Dynamic testing of Web applications makes good sense, but DAST has a limited target by its very design: it is powerful for Web applications but not very useful for most other types of software.
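The black-box nature of DAST can be sketched in a few lines. In this toy (the handler and probe are my own invention, standing in for an HTTP endpoint and a scanner payload), the tester never sees the code; it only sends input over the interface and inspects the response.

```python
# Toy DAST sketch: probe an opaque "application" from the outside.
# The tester sees only inputs and responses, never the source code.

def vulnerable_search_page(query: str) -> str:
    # Stand-in for an HTTP handler; echoes user input without encoding.
    return f"<html><body>Results for: {query}</body></html>"

XSS_PROBE = "<script>alert(1)</script>"

def probe_for_reflected_xss(handler) -> bool:
    """Report True if the probe payload comes back unencoded in the response."""
    response = handler(XSS_PROBE)
    return XSS_PROBE in response

print(probe_for_reflected_xss(vulnerable_search_page))  # True: payload reflected
```

Notice what the probe does not tell you: which line of code failed to encode the output. That gap matters later when we talk about actually fixing what you find.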
IAST, or interactive application security testing, combines dynamic and static approaches into an interactive solution. As should be fairly obvious, early IAST approaches are, like DAST, limited to Web applications. If you have only Web apps in your portfolio, IAST is an approach worth getting a handle on.
The new kid on the block, RASP, or runtime application self-protection, is about re-writing (instrumenting) software so that it can monitor and protect itself at runtime. This is an idea with a long history and one very major flaw: if you re-write software at the last minute, don't be surprised when you get blamed if the software fails, even if your re-write has nothing to do with the failure. I always say, "Don't be the last one to touch the potato." That advice applies in spades to RASP. Just for the record, RASP also imposes an efficiency tax -- 1% to 10% is the range to expect -- that needs to be accounted for as well. Lastly, if you allow RASP to stop code running in production during an active attack, you have just created a dandy denial-of-service engine. Anyway: shiny new.
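A minimal sketch of the RASP idea, assuming a decorator stands in for the runtime instrumentation (real RASP products rewrite bytecode or hook the language runtime; the injection "signature" here is deliberately crude):

```python
import functools

# Toy RASP sketch: wrap a function so its arguments are inspected at runtime.
def rasp_monitor(block: bool = False):
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(sql: str):
            if "' OR '1'='1" in sql:  # crude injection signature
                if block:
                    # Blocking in production can itself become a denial of
                    # service if attackers can trigger it at will.
                    raise RuntimeError("RASP: blocked suspicious query")
                print("RASP: flagged suspicious query")
            return fn(sql)  # the wrapper itself is the 1%-10% efficiency tax
        return wrapper
    return decorator

@rasp_monitor(block=False)
def run_query(sql: str) -> str:
    return f"executed: {sql}"

print(run_query("SELECT * FROM users WHERE name = 'bob'"))
print(run_query("SELECT * FROM users WHERE name = '' OR '1'='1'"))
```

The `block` flag is the crux of the denial-of-service worry above: flip it to `True` and an attacker who can craft matching input can shut down the function at will.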
As it turns out, debating the alphabet soup of technical approaches is pretty silly, especially when it comes to SAST, DAST and IAST. For years, practitioners in the field have combined all of these techniques in a variety of ways to solve hard problems in software security. One approach alone will almost never be the right answer.
The human element
The dirty little secret of software security reveals why: a tool by itself, especially a simple tool that only looks for a handful of bugs, cannot solve the software security problem. That's because if you don't actually fix the security defects you find, you are not really helping from a security perspective. None of the alphabet soup tools fix defects without a smart human in the loop. They are all geared toward finding bugs in slightly different ways (we won't even bring up design flaws). And yes, this truism applies all the more in the Web application security subdomain.
If we step back and consider the alphabet soup in this light, it is very easy to see that SAST offers a serious advantage over DAST or any other dynamic testing approach. When it comes to fixing software, if you know where in the code a problem exists, it is much easier to fix that problem. If, on the other hand, you only know which glob of runtime behavior appears to be at fault during dynamic testing, fixing the defect is much more of a challenge. As in physics, a white box experiment is always superior to a black box experiment.
Since developers are ultimately responsible for creating software with as few defects as possible, any tool that helps developers as directly as possible will be the most useful. At this point in the development of the field, only a small number of easy bugs can be completely automated away. The rest will require developer involvement to fix.
In summary, make sure you think about actually fixing software as you devise clever new ways to detect defects. If your tools vendor is not talking about how things get fixed, there may be a reason.
Scale to the entire portfolio
All that said, tools of all kinds are essential to software security, and scale is essential to tools. For most companies, covering an entire portfolio (that is, all software applications) means automation is really the only way to go. For what it's worth, that's just as true for design analysis as it is for bug-finding tools, though we have our work cut out for us when it comes to design. (See: McGraw on the IEEE Center for Secure Design.)
For too long, risk management has been (mis)used to justify looking at only a handful of "high risk" applications while discounting and ultimately ignoring the vast majority that remain. Though this may sound like a good idea for reasons of efficiency, it turns out that attackers go after the darnedest things. In today's attack climate, every piece of software in your portfolio should have some level of testing. Attackers are going after any weak link (including HVAC vendors and other minor suppliers). The new weak link is neglected application software.
The time has come for tools and services that sweep an entire portfolio leaving no stone unturned in the hunt for basic defects.
Of course, these kinds of solutions can still be risk based. Have a high-risk, Internet-facing application? Do a hardcore architectural risk analysis. Review its code with a heavy static analysis tool. Train its developers to maintain it in a secure fashion. Perform penetration testing on it and do the whole shebang, but don't turn the dial down to zero for low-risk apps. Use automated testing to find and fix easy bugs. Arm developers with IDE-based static tools (and train them too).
As you plan and execute your software security initiative (and measure it with the BSIMM), make sure that scalability plays a major role.
Learn more about BSIMM as a framework for measuring your software security posture.