This article can also be found in the Premium Editorial Download "Information Security magazine: Meeting cloud computing compliance mandates."
Application security has become information security's "mot du jour," as it should be, since the majority of hacks purportedly occur through the application layer. The rapid rise of interest in application security is evidenced by the explosive growth in membership of groups such as the Open Web Application Security Project (OWASP), and by the appearance of specific certifications, such as the Certified Secure Software Lifecycle Professional offered by (ISC)². And it is apparent from the recent corporate acquisitions of application security testing players Ounce Labs and Fortify, by IBM and HP respectively, that the big players also recognize the importance of application security.
I have long been a strong advocate of ensuring that applications reflect user requirements, are engineered with security in mind, designed with security architectures, and built using secure coding practices. Such coverage goes a long way towards improving the overall security state of applications, which are commonly held to be among the most popular vectors used by those with evil intent to gain access to data and perpetrate fraud, among other crimes.
However, functionality testing and security reviews do not cover what is perhaps the greatest vulnerability area of all, namely, ensuring that applications do not authorize functions that specific users are not supposed to perform. There are good reasons for this gap. First of all, application testers are usually only interested in verifying that an application does what it is supposed to do, not in confirming everything it should not allow.
Here's a personal example to illustrate what I mean. Before Web applications became popular and application security became the serious issue that it is today, client-server systems were all the rage. I was asked to provide "security test scripts," which I interpreted to mean scripts that gave assurance a particular application did not allow unwanted functionality by users not authorized for those functions. The testers, who were very familiar with the application's functionality, had already created some 600 functional test scripts. I came up with some 10,000 first-order security test scripts. That is to say, having been given some training in the functionality of the application, I formulated all the potential paths through the application that a user might try. However, I did not include second, third and higher order scripts, such as when a user performs one function followed by another function and then another; that would have resulted in literally millions of potential scripts.
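The jump from 600 functional scripts to 10,000 first-order scripts, and from there to millions of higher-order ones, is simple combinatorics: if users can invoke some number of distinct functions, the count of ordered k-step sequences grows exponentially in k. A minimal sketch of that arithmetic (the function count below is illustrative, not the actual application's):

```python
def sequence_count(functions: int, order: int) -> int:
    """Number of ordered test sequences of length `order`, assuming any
    function may follow any other. Grows as functions ** order."""
    return functions ** order

# Illustrative only: with 100 invokable functions, second-order testing
# already requires 10,000 sequences, and third-order a million.
for k in range(1, 4):
    print(f"order {k}: {sequence_count(100, k):,} sequences")
```

Even with pruning for sequences that are impossible in practice, the higher-order space quickly outruns any realistic testing budget, which is what motivates the sampling compromise described next.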
Even so, my proposed scripts were greeted with amusement, since running 10,000 test scripts was clearly unrealistic given the cost and the time it would take. We compromised by instituting the statistical approach of sampling and confidence limits commonly used in manufacturing. We ran a certain number of random tests and, depending on the confidence resulting from the tests, decided whether or not to perform further tests. The net result of this approach to software assurance was that the system was extremely stable. After it had been running for about three months, we did see a second-order error resulting from a user performing a sequence of tasks that revealed more information than authorized; the cause was a failure to initialize a particular buffer. This occurrence would likely not have been detected unless a much broader series of tests had been done.
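The sampling-and-confidence-limits compromise can be sketched roughly as follows. This is an illustrative reconstruction, not the procedure we actually used: `run_script` stands in for executing one test script, and the confidence bound is the standard zero-failure calculation (for 95 percent confidence it reduces to the familiar "rule of three," roughly 3/n):

```python
import random

def upper_failure_bound(passed: int, confidence: float = 0.95) -> float:
    """Upper confidence bound on the true failure rate when `passed`
    randomly sampled scripts all succeed. Solves (1 - p) ** n = 1 - c
    for p, the worst failure rate consistent with zero observed failures."""
    return 1.0 - (1.0 - confidence) ** (1.0 / passed)

def sample_and_test(scripts, run_script, sample_size, confidence=0.95):
    """Run a random sample of scripts; return (failures, bound).
    `bound` is None when any failure occurs, signalling further testing."""
    sample = random.sample(scripts, min(sample_size, len(scripts)))
    failures = [s for s in sample if not run_script(s)]
    if failures:
        return failures, None
    return [], upper_failure_bound(len(sample), confidence)
```

Under this scheme, 300 clean runs bound the failure rate at about 1 percent with 95 percent confidence; if that bound is not tight enough, one simply samples more, which is exactly the "decide whether to perform further tests" step above.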
Interestingly a very similar approach was recently advocated for testing hardware in an article in the August 2010 issue of Scientific American, "The Hacker in Your Hardware." The author, John Villasenor, writes: "Because ... rogue hardware requires a specific trigger to become active, chipmakers will have to test their [threat] models against every possible trigger to ensure that the hardware is clean ... Companies [should] test as best they can, even though this necessarily means testing only a very small percentage of possible inputs. If a block [of circuits] behaves as expected, it is [then] assumed to be functioning correctly."
Environment also plays a critical role in the testing of hardware and software. Applications are often installed on a variety of platforms and infrastructures, so test scripts should be created for the many situations in which the software and hardware might be used. This, of course, expands the number of potential test scripts enormously.
While security professionals have promoted testing using misuse and abuse cases and fuzz testing, which involves entering random data, specific guidance on the form of such testing and the skills it requires is rarely provided in sufficient detail for this type of testing to be effective. This is understandable, since security professionals prefer to deal with generic approaches without getting into particular application functionality. It is also to be expected given the enormity of the task, as several experts in nonfunctional security testing have argued. Nevertheless, I believe that some level of functional security testing needs to be done, if only on a sample basis. It is certainly better than begging off from any such testing.
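To make the fuzz testing mentioned above concrete, here is a minimal sketch that feeds random printable strings to a target function and records any inputs that raise exceptions. The `target` is a hypothetical handler of my own invention, and real fuzzers are far more sophisticated (mutation of valid inputs, coverage guidance, and so on); this only illustrates the "random data" idea:

```python
import random
import string

def fuzz_inputs(n: int, max_len: int = 64, seed: int = 0):
    """Yield n random printable strings of length 0..max_len.
    A fixed seed keeps runs reproducible, which aids triage."""
    rng = random.Random(seed)
    for _ in range(n):
        length = rng.randrange(max_len + 1)
        yield "".join(rng.choice(string.printable) for _ in range(length))

def fuzz(target, n: int = 1000):
    """Call `target` on each random input; collect (input, exception)
    pairs for every case that crashes rather than failing gracefully."""
    crashes = []
    for case in fuzz_inputs(n):
        try:
            target(case)
        except Exception as exc:
            crashes.append((case, exc))
    return crashes
```

Even a crude harness like this exercises paths no functional script would, which is precisely why a sampled, imperfect effort beats begging off from such testing altogether.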
C. Warren Axelrod, Ph.D., is a senior consultant with Delta Risk and research director for financial services with the U.S. Cyber Consequences Unit. He recently led a software assurance initiative for the Financial Services Technology Consortium, and was formerly business information security officer and privacy officer for the wealth-management division of Bank of America. His publications include Outsourcing Information Security and Enterprise Information Security and Privacy. Send comments on this column to email@example.com
This was first published in November 2010