Does your system design eliminate the top 10 software security flaws?

Marcus Ranum chats with Gary McGraw about secure system design and the IEEE Computer Society Center for Secure Design’s top 10 list of what to avoid.

Software security vulnerabilities run the gamut from undetected bugs to serious design flaws in authentication, authorization, data integrity and non-repudiation requirements. The IEEE Computer Society Center for Secure Design tapped Gary McGraw and other leading security practitioners to participate in a workshop that identified some of the common mistakes found in system design at their companies and other organizations.

A longtime proponent of secure software development, McGraw is chief technology officer of Cigital, and the co-author of the Building Security In Maturity Model project. The BSIMM framework offers research data and guidance to help organizations develop best practices based on collective data on techniques and policies that have improved development at more than 70 software companies.

A familiar figure in the security industry—and among our readers as the passionate voice behind the Software [In]Security column—McGraw has a Ph.D. in Computer Science and Cognitive Science from Indiana University. He has authored 11 books on software security and sat on the technical advisory boards of several companies including Fortify Software (acquired by HP) and Dasient (acquired by Twitter).

McGraw has a long affiliation with the IEEE. He is the host of the Silver Bullet Security podcast (syndicated by SearchSecurity) and a former member of the IEEE Security and Privacy Task Force and the IEEE Computer Society Board of Governors. We are thankful he was able to take time out of his busy schedule to share his take on secure design with Marcus Ranum (who was also a member of Fortify Software’s board), and discuss the thinking behind the IEEE’s Center for Secure Design initiative.

I’ve been keeping distant tabs on what you’re up to, Gary. Tell me about the launch of the IEEE Computer Society Center for Secure Design.

Gary McGraw: It’s pretty cool—we just turned on the website the other day. This is a group of firms that got together in April and did a workshop. The price of admission was that you had to bring an actual bag of design flaws from your software and dump them on the table. We went through them all, identified 10 of the most common, and then we wrote a report about it. Some of the firms that participated include Twitter, Google, EMC and its RSA division, Intel (McAfee) and Cisco Systems. There were also a couple of academics.

So, these top 10 design flaws, what were they? Let me guess, one of the biggest is code-paths that bypass authentication?

McGraw: Well, to you they’re going to sound incredibly dumb and obvious—and they are—but the thing that was most striking is that they were found in everyone’s software during analysis. If you do threat modeling and risk analysis, they break into common themes.

The authentication bypass example is a little bit too specific. We didn’t write them in order of 1 to 10, we just identified things that are about equally prevalent. Here are some examples: broken authentication mechanism (that could be authentication bypass), failure to authorize after authentication, not explicitly validating all data or understanding how integrated external components change the attack surface.
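One of the flaws McGraw names—failure to authorize after authentication—can be made concrete with a small sketch. The example below is purely illustrative (the names `get_report_flawed`, `get_report_fixed` and the in-memory data are invented for this article, not drawn from any system discussed in the interview): the flawed version verifies *who* the caller is but never checks *what* that caller is allowed to see.

```python
# Hypothetical sketch of "failure to authorize after authentication".
# All names and data here are invented for illustration.

USERS = {"alice": {"role": "admin"}, "bob": {"role": "viewer"}}
REPORTS = {42: {"owner": "alice", "body": "quarterly numbers"}}

def get_report_flawed(username, report_id):
    # Authentication only: we confirm the user exists...
    if username not in USERS:
        raise PermissionError("unknown user")
    # ...but any authenticated user can read any report. That's the flaw.
    return REPORTS[report_id]["body"]

def get_report_fixed(username, report_id):
    if username not in USERS:
        raise PermissionError("unknown user")
    report = REPORTS[report_id]
    # Authorization after authentication: only the owner or an admin may read.
    if report["owner"] != username and USERS[username]["role"] != "admin":
        raise PermissionError("not authorized for this report")
    return report["body"]
```

In the flawed version, `get_report_flawed("bob", 42)` happily returns Alice’s report; the fixed version raises `PermissionError` for the same call. The design-level point is that the authorization check belongs in the architecture as an explicit, centralized decision, not as an afterthought scattered through handlers.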

So you got everything on the table and then what?

McGraw: We didn’t want to just put out the flaws, so we decided that we would write some advice about each one and how to avoid it in the design phase. So what we’re trying to do is elevate the discussion in software security from just code and bugs and static analysis and having vendors pretend, ‘We use static analysis so our code is secure.’ We want to focus some attention on design and help architects do a better job of thinking about security when they’re doing their part of the software development lifecycle.

It is somewhat ambitious, but I’ll tell you the secret story behind it all is IEEE came to me and said, ‘We’re thinking about doing some cybersecurity initiative.’ And I said, ‘Run away! HIDE! Can’t do it…not interested!’ And in the middle of that discussion I said, ‘You know what? There’s this hard problem that we’re not making much progress on and if IEEE is willing to sign up to fund and support the effort, we should do something around security design.’ So I pitched that idea to them, and they were totally behind it.

That’s pretty good! So how do you go from ‘Here’s this particular category of detailed problem’ into practical advice for a systems architect?

McGraw: That’s the hard question. Right now the level of descriptive detail is similar to that used by software architects at a whiteboard level: Give me a magic marker and let me show you what you need to think of here, and here and there. The thing that we want to do is push some of this advice down to design patterns, and eventually into a kind of building code that might be imposed in certain domains, like medical devices. So you’ll have standardized designs coming from the market that help guide you to be more secure.

Our intention is to make these [standardized designs] as concrete as possible, and there’s already evidence that works. Neil Daswani from Twitter participated [in the workshop]. He took the list of the 10 flaws back to Twitter and created a ‘Twitterized’ version that had real examples of the flaws. More importantly, [it had] advice about what framework to use, and how to use that framework inside Twitter, to avoid the flaws. That got pushed out to all of its engineering already.

That gets around the problem that I could see coming, where we have different codebases. If I have a codebase that’s all in C and you’ve got a codebase that’s in a different language, it’s not going to help unless I can tie it to how my overall architecture deals with whatever dependent libraries and APIs I am using in my systems.

McGraw: That’s right, although the advice we’ve published is kind of language and tech-stack independent. We do want to get more specific. This was just our first workshop, and we had an objective that we were going to produce something that we could publish. There’s plenty of work to be done.

The other thing that’s cool is that we got IEEE to publish this under the Creative Commons [license] so it’s free, and it’s available and anyone can use it any way they want.

So you’ve announced this, are you getting any traction with it?

McGraw: Yes, we’re getting a huge amount of traction—the tech press is all over it, and we had some major newspaper [reporters] present at the launch, though it’s not clear how it fits into their coverage. Generally speaking, we’re just trying to reach software architects. Too much of the discussion in software security is about FUD (fear, uncertainty and doubt) and fantastic exploits, and we need to elevate the conversation above that.

I completely agree with you.

McGraw: We’ve talked about one possible way to improve things; there are other possibilities. I’ve alluded to this notion of building code. A November workshop focused on security architectures for medical devices. Maybe they’ll come up with something more specific that’s tailored to the problem domain. When you’re putting up a building you’ve got certain constraints you have to comply with—aspects of blueprints that have to follow code. The idea is to establish domain-specific design constraints that describe the right way to architect certain properties of those systems.

The industry seems to be shifting away from more exhaustive design and toward things like agile and rapid development (going from a broad design to a prototype implementation by repurposing code modules). I see that as a step away from design in favor of a ‘throw stuff over the fence and see if it works’ model. How are we going to deal with that?

McGraw: It’s going to be difficult for security design, in particular, but I think we’re making some very good headway toward agile tools that are useful in a sprint by embedding some of what we’ve learned through static analysis directly into the development environment. So when you’re typing something into the system it might tell you, ‘Hey, dingbat, I told you not to do it that way. Here, do it this way.’ That can head off a large number of bugs very quickly. As we move toward the developer’s keyboard we can make good headway, and that’s about the best we can do for agile.
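The keyboard-level checking McGraw describes can be sketched in miniature. The toy checker below is not any real product—the rule set, function name `check_snippet` and advice strings are invented for illustration—but it shows the shape of the idea: pattern-match a developer’s source as it is written and surface the safer alternative immediately, rather than waiting for a late-cycle scan.

```python
import re

# Toy illustration of an IDE-embedded security check (not a real tool).
# Each rule pairs a known-risky pattern with constructive advice.
RULES = [
    (re.compile(r"\bmd5\s*\("), "MD5 is broken for security uses; prefer SHA-256"),
    (re.compile(r"\beval\s*\("), "eval() on external input invites injection; parse the input instead"),
]

def check_snippet(source):
    """Return (line_number, advice) pairs for every rule hit in the snippet."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, advice in RULES:
            if pattern.search(line):
                findings.append((lineno, advice))
    return findings
```

Running `check_snippet("x = md5(data)\ny = 1\nz = eval(user_input)")` flags lines 1 and 3 with their respective advice. Real static-analysis engines work on parsed syntax trees and data flow rather than regular expressions, but the interaction model—flag it at the keyboard, with a fix attached—is the point here.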

On the design front, if the code is the design—if people really believe that, we’re in a world of hurt. But I think most agile shops do have design phases and that’s where we need to be. Every so often, there needs to be an architectural risk analysis exercise.

I was talking with someone last year and he said, ‘We do agile.’ When I asked him what that meant, he replied, ‘I tell the head of development what I want the product to do, and he makes it happen.’ I said, ‘I hope you remembered to tell him that the stuff should be reliable and secure.’ And he said, ‘Oh, yeah, I did… so, no problem.’

McGraw: It’s good they had that covered! Just sprinkle on some cryptography and it’ll be OK.

I do think it’s important that we put some pressure on the vendors in software security to get them to admit that this is not a simple problem that’s going to be solved easily with the application of a tool. Software security is about 10% of the computer security market right now, which is a real thing. And in domains like banking where they’re really working it, I think they have a pretty good handle on their software pile and process, to a much greater degree than even five years ago. Many of the vendors aren’t perfect, but they have changed the way they produce software for the better.

The place where we have plenty of room for improvement is retail, medical systems, healthcare, gaming and hospitality. There’s also electric grid and critical infrastructure. There’s plenty of work to be done. The good thing is that we know what work needs to be done and if we approach it right, we’ll be OK. If we try to scale software security by saying ‘run the magic wand over it, and it’ll all be alright,’ we could hit a brick wall. But I don’t think we’ve hit it yet. Design is the key.

About the author:
Marcus J. Ranum, chief security officer of Tenable Security Inc., is a world-renowned expert on security system design and implementation. He is the inventor of the first commercial bastion host firewall.

This was last published in November 2014
