How to develop software the secure, Gary McGraw way
A comprehensive collection of articles, videos and more, hand-picked by our editors
Marcus Ranum: Do you think the emphasis on mega-frameworks like Google Toolkit, Ruby or (insert favorite Web2.0 technology here) is going to improve the state of software security, make it worse, or be neutral? I'm really torn: I writhe with discomfort at the idea of these large code-masses being used in lots of important places -- it's just too complicated to get it all right!
Gary McGraw: Both. The gigantic frameworks themselves can make analysis of a system that includes them a lot harder. If you think about automated static analysis for code review, the frameworks lead to a big game of whack-a-mole: the data flow goes in and pops back out in any number of surprising places. On the other hand, if you do the right thing from a static analysis perspective, you can sometimes pre-compute where the mole is going to pop back out and use that to your advantage. Frameworks can help with security, too -- enterprises that create frameworks of their own and apply them consistently across their developers have had good luck. That's because standardization within a business is a good way of getting away from the bespoke (build it by hand, slightly different each time) nature of software within an enterprise.
Marcus: So you think we're getting the main value out of code reuse? That's pretty much what the software engineering guys were saying would happen, back in the 1980s. Is it paying off?
Gary: I think in the case of code and particular bugs, yes. That's because the frameworks, in my mind, have more to do with code than with architecture.
Marcus: Can you elaborate on that a bit? It seems like architecture is always going to be fairly purpose-specific -- so, short of having a "framework for a Web banking app" that's basically COTS plus some tweaking required; programmers are pretty much doomed to have to build their code upward from basic building blocks. The building blocks get bigger and more powerful, so now they're fully rendered graphical interfaces, or a database for storing formatted objects, but the innovation has to always happen at the level of architecture or you're just producing "me too" applications. I don't see how you can make a framework that will prevent you from making business logic mistakes.
Gary: One of the other problems, which is one that Ross Anderson and others have pointed out for years, is the notion of trying to program "Satan's Computer." You can have all sorts of perfectly constructed components and put them together into a disaster of an insecure system. That's why we joke that the best software security in the world would involve taking away somebody's keyboard.
Marcus: You winced when I said "business logic mistakes" -- did I misspeak?
Gary: There are two kinds of defects in software that lead to security problems. One is bugs: someone did something stupid with printf() or made an off-by-one error. Such bugs are localized in code and can be analyzed pretty easily: "Marcus needs to learn how to wield printf()" or even "let's search our entire code-base for uses of printf()." Then there are flaws: architectural problems that are not found in the code -- they're design issues.
Over the years, we've gotten quite good at finding bugs, but we're still not so good at finding flaws. Making the flaw-finding process automatable, or at least cheap enough that it doesn't take experienced guys to find them, is our current challenge. We've gotten so good at finding bugs that we forgot the split is about 50/50 between bugs and flaws. So, when we get excited because we've found and fixed a lot of bugs, what we've really done is gotten a better measure of how bad things really are. Let me give you a good example of a flaw that I've seen in real code, which we couldn't detect using bug-hunting techniques: "forgot to authenticate the user." You can do code reviews all day and you'll never catch that one.
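The "search our entire code-base for uses of printf()" approach Gary mentions can be sketched as a minimal scanner. This is an illustrative sketch, not a real static analyzer: the file extension and the list of risky calls are assumptions, and real tools do far more than pattern matching.

```python
import re
from pathlib import Path

# Calls whose misuse commonly produces localized, findable bugs.
# This list is illustrative, not exhaustive.
RISKY_CALLS = re.compile(r"\b(printf|sprintf|strcpy|lstrcpy|gets)\s*\(")

def find_risky_calls(root: str) -> list[tuple[str, int, str]]:
    """Scan C sources under `root` and report (file, line number, call) hits."""
    hits = []
    for path in Path(root).rglob("*.c"):
        text = path.read_text(errors="replace")
        for lineno, line in enumerate(text.splitlines(), 1):
            for match in RISKY_CALLS.finditer(line):
                hits.append((str(path), lineno, match.group(1)))
    return hits
```

A hit here is exactly the kind of localized bug Gary describes: you can point at the file and line and say "fix that," which is what makes bugs so much easier to hunt than flaws.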
Marcus: I know this is one of those "grey-bearded old programmer" questions, but what about the availability of code quality tools? It seems the newer stuff doesn't have much in the way of CASE (Computer-Aided Software Engineering -- remember that?) tools. Back in the '80s, we had these things called "debuggers" that don't appear to even exist for Web apps. I've written about some of my experiences working with SABER-C, a C language interpreter that used to do fantastic error-checking -- I used it as a checker, bug-squasher, and regression-testing tool. In fact, I still have an old SPARC with a copy of SABER-C that I keep in case I ever need to do any more C coding. These were tools some of us learned we couldn't live without -- but the Web2.0 generation seems comfortable with "hit reload and if it looks like it works, put it into production!" That's got to have an impact on security.
Gary: I agree, but it sort of depends on your environment. Some of the IDEs have some beautiful stuff built in, but usually you have to know the capabilities are there and turn them on. But I agree with you, there's a bunch of stuff we built for understanding software long ago and, ironically, the attackers are using it to greater advantage than the people who should be using it to understand the software they are building! A case in point that you brought up: debuggers. My favorite example, though, is coverage tools. If you talk to most QA people and say, "Hey, do you guys use coverage tools?" they look at you like a cow at a new gate: "Whuut? Huh?" A coverage tool helps you determine which parts of code you're running during a test. So, it gives you some insight into how good your tests are. Coverage also turns out to be very helpful for attackers. Suppose you know that there's a certain potentially vulnerable system call way down there in the code (something like lstrcpy() in win32); your next job is to figure out how to create a control flow that will tickle that bug -- a coverage tool is super for doing that.
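The coverage idea can be sketched in a few lines: a toy tracer (a stand-in for a real tool such as gcov or coverage.py) records which lines of a function actually ran during a test, exposing the paths the tests never exercised. The `classify` function is a made-up example.

```python
import sys

def run_with_coverage(fn, *args):
    """Run fn(*args), recording which lines of fn execute.

    Line numbers are reported relative to the `def` line (def = 0).
    A toy stand-in for a real coverage tool, not one itself.
    """
    executed = set()
    code = fn.__code__

    def tracer(frame, event, arg):
        if frame.f_code is code and event == "line":
            executed.add(frame.f_lineno - code.co_firstlineno)
        return tracer

    sys.settrace(tracer)
    try:
        result = fn(*args)
    finally:
        sys.settrace(None)
    return result, executed

def classify(n):           # relative line 0
    if n < 0:              # relative line 1
        return "negative"  # relative line 2
    return "non-negative"  # relative line 3
```

Running `run_with_coverage(classify, 5)` shows line 2 never executed: a gap in the tests, and, from the attacker's side, exactly the kind of unreached code path worth figuring out how to tickle.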
Marcus: What do you think about "fuzzing?" I was just at the RSA conference a couple weeks ago and there were products there that do Web application testing using that technique. I guess you point the box at a target and it tries to inject stuff into every Web form and see what happens, and so forth. Is this just another 'badness-o-meter' or does it tell you something useful about your security?
Gary: Fuzzing is a very interesting technology. You may not recall I wrote this tome on software engineering back in 1998 called Software Fault Injection -- it was all about providing some inputs and tweaking the input, then having observable conditions in your code, and seeing what happens. Fuzzing is kind of a subset of that. It's easier in some conditions than in others -- for example, it's pretty simple to fuzz the UNIX command line, because of how they're invoked. You can just vary command options and pipe unexpected stuff into the command's input or just send bits. It's also pretty easy to fuzz protocols, especially stateless network protocols of the HTTP variety. What's harder and way, way more interesting is applying fuzzing technology at the APIs of components or the APIs of big classes in your object-oriented code-pile. The thing is it takes some real knowledge to be able to build fuzzing capability at that level because you need to understand what the system will accept and build a sort of grammar to fuzz the API. That is an incredibly powerful technique and it turns out there are many product security organizations in enterprises that use that technique as part of their software security regimen.
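The grammar-aware API fuzzing Gary describes can be sketched as follows. Everything here is an assumption for illustration: `parse_date` is a hypothetical API with a planted bug, and the "grammar" is just a template for mostly-valid dates. The key idea is that a clean rejection (ValueError) is acceptable behavior, while any other exception is a finding.

```python
import random

def parse_date(s: str) -> tuple[int, int, int]:
    """Hypothetical API under test: parses 'YYYY-MM-DD'.
    Contains a planted bug so the fuzzer has something to find."""
    y, m, d = s.split("-")
    if len(m) > 1 and m[0] == "1":
        raise IndexError("planted bug")  # the defect the fuzzer should hit
    return int(y), int(m), int(d)

def fuzz(api, trials=1000, seed=1):
    """Generate mostly-valid inputs from a tiny grammar, occasionally
    mutate one character, and log anything other than a clean rejection."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        s = f"{rng.randint(0, 9999):04d}-{rng.randint(1, 12):02d}-{rng.randint(1, 31):02d}"
        if rng.random() < 0.3:  # mutation step: corrupt one character
            i = rng.randrange(len(s))
            s = s[:i] + rng.choice("0123456789-x") + s[i + 1:]
        try:
            api(s)
        except ValueError:
            pass                     # rejecting bad input cleanly is fine
        except Exception as exc:     # anything else is a finding
            failures.append((s, repr(exc)))
    return failures
```

This illustrates why API-level fuzzing takes real knowledge: without the date "grammar," random bytes would almost always bounce off the first `split`, never reaching the deeper logic where the bug lives.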
Marcus: That reminds me of a wonderful talk at a USENIX back in the 1980s on errors in processors' math libraries. It turned out that the errors predictably come in close to edge cases -- if you're on a 32-bit architecture, you can guess that the mistakes will come in around 2^31 and 2^32. It's just like assuming that if you're collecting data from a network connection, you should probably be prepared to handle more than BUFSIZ worth of data in a single line, etc. Knowing where you make your mistakes and knowing how to avoid them is what separates the programmers you want working on your applications from the ones whose keyboards you want to take away.
Gary: In some sense that's related to fuzzing, but what you're really talking about is boundary condition testing and limit testing. If wielded properly, such testing brings an enlightened tester as close to a "security testing guy" as he or she can get.
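The boundary and limit testing described above can be sketched as probing values clustered around a suspected limit. Here `add32` is an assumption for illustration: it simulates C `int32_t` wrap-around, standing in for code that silently assumes 32-bit arithmetic.

```python
INT32_MAX = 2**31 - 1
INT32_MIN = -2**31

def add32(a: int, b: int) -> int:
    """Addition as a C int32_t would compute it: wraps on overflow."""
    return (a + b + 2**31) % 2**32 - 2**31

def boundary_cases(limit: int, delta: int = 2):
    """Values clustered around a suspected limit -- where the mistakes live."""
    return [limit + d for d in range(-delta, delta + 1)]

# Probe addition right at the signed 32-bit edge. The wrap-around shows up
# only in the last couple of cases, exactly where boundary testing looks.
for x in boundary_cases(INT32_MAX):
    wrapped = add32(x, 1)
    print(f"{x} + 1 -> {wrapped}  overflow={wrapped != x + 1}")
```

A tester who sweeps the entire input range at random will almost never land on these few values; one who knows the architecture's limits hits them on the first try -- which is the point Marcus is making about edge cases.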
Marcus: I think you've talked me around on the point of fuzzing, because I was feeling a little bit dismissive of those products when I first saw them.
Gary: Some kinds of fuzzing I share your skepticism about. If your Web application is falling prey to tests that are stupid, then you've got a bigger problem. If we automate a bunch of security tests and we run them against a piece of software and it finds problems, then we know one thing about that software -- it really sucks. That's a great thing to know, if you discover it in time, before you ship! The problem is if you treat that same set of canned tests as a "security-meter" and, when you find no results of interest, say, "Well, it must be secure" -- then you're crazy. That's why I coined the term "badness-o-meter."
Marcus: What's the current status of your work with The Building Security In Maturity Model (BSIMM)?
Gary: There are a bunch of large corporations in many different verticals that are trying to tackle the software security problem from an institutional perspective. The way they're trying to do that is by creating software security groups that have the authority, responsibility, and budget to solve the software security problem. They're taking a multiyear run at it, and the BSIMM is a study of 33 of those large enterprises' initiatives. We're not trying to make a prescriptive model of software security or a methodology like the Touchpoints, we're just describing what we see -- so there's a big difference between BSIMM and something like Microsoft's Security Development Lifecycle (SDL). The SDL purports to tell you how to do software security -- it's prescriptive. The BSIMM is just a descriptive measurement tool; it says, "Everybody does this -- do you?" It's just about observable facts.
Marcus: The implication, though, is that there's going to be some kind of recommendation. Isn't that what people are going to take away? People will jump from, "Everybody does this" to "Well, these guys are doing this, and their software's pretty good, so maybe that's what we should do, too!"
Gary: Maybe so. A lot of companies, like Microsoft, have learned a lot about doing software security at the enterprise level in the last 10 years, and it's worth seeing who's doing what and providing those data for you to use as you see fit. Sometimes very confused application security "experts" out there say, "Well, you don't really need a software security group, you know," but the BSIMM reveals that though maybe you don't need one, everybody who is doing this seriously has one. It's sort of a pile of facts for you to weigh against your possibly stupid opinions.
Marcus: Let's switch topics to something a bit more consumer-oriented. What about the "app stores" that are proliferating everywhere? I bought an iPad the other day because I like the idea of changing the software installation/purchase lifecycle from "here's a computer with everything preinstalled" to picking and choosing (and paying) for the code I want and having it more or less automatically maintained. It seems like a potentially big win with the "walled garden" model but there's a great looming question about keeping malware out of the walled garden. That seems to be a serious software security issue as well, no?
Gary: The problem with many of the app stores nowadays is that they do relatively little to identify "who wrote that stuff" and whether they were supposed to write it or if it's malicious. There's very little testing going on. In fact, I've heard some stories recently, including this one: There was an app in the Google Android app store that claimed to be a "Bank of America" online banking app, and it was not even written by or distributed by Bank of America. Of course, it still asked for your credentials…!
Marcus: I guess it also raises the issue of your software supply chain. Some of these apps are being contracted out by companies and aren't being developed in-house. So, I suppose you've got the potential that a business could push an app into an app store under its own name, and later discover that they had fed malware to all their customers. It seems to me that app stores are pushing some businesses into being software publishers and they haven't yet realized that. There's a big difference between having a website with possible security problems, and pushing possibly insecure or malicious code to your entire customer base.
Gary: I don't think this notion of "little apps in an app store" is going to miraculously solve the software quality problem or the software security problem.
Marcus: Darn. Isn't there some chance that the app store will be able to disable or revoke software that's determined to be bad? Perhaps there's some 'safety in numbers' we can still take advantage of.
Gary: I think in return for whatever slender advantage you might get that way, you're giving up a great deal of freedom to run what you like. I feel like the iPad is a sort of castrated computer: It's good for displaying content, but it's not full-featured enough if you want to create content. So, what you'll find emerging is that the people these lightweight systems appeal to tend to be consumers and/or PowerPoint-watching executives rather than content creators. That has implications for security as well, since a creator is a more serious target than a consumer.
Marcus: Gary, as always, a pleasure!