
McGraw: Seven myths of software security best practices

According to expert Gary McGraw, you're not helping yourself by believing the things -- all seven of them -- you've heard about secure software development.

Secure software development has made great strides in the last decade or two, and I am optimistic about the future. In the 15 years since my book, Building Secure Software, was published, we've seen a shelf full of new books appear, a number of powerful review technologies developed and deployed worldwide -- including static analysis for code review -- and an industry measurement tool arise (see the Building Security in Maturity Model [BSIMM]). And yet, many firms still are not taking on software security at all, or are not doing it properly. Why not?

The seven software security myths presented here represent common misconceptions about software security best practices. Ultimately, they are about how software security initiatives writ large work, or rather should work -- and are not simply about how to secure one particular application.

I will introduce each myth below and show you what to do about it. Throughout the discussion, I'll demonstrate why a software security initiative (SSI) is about building security into software as it is created throughout the software development lifecycle. The myths are ranked from completely ridiculous at the top of the list to fairly subtle at the bottom.

Myth 1: Perimeter security can secure your applications

In the beginning, there was perimeter security focused primarily on network security. The notion was simple: Protect the broken stuff from the bad people by putting a barrier between the two. That's where the firewall came from. It was designed to protect your internal network from the big bad Internet by selectively stopping network traffic moving from one to the other. Today, perimeter security relies on a combination of firewalls, Web application firewalls (WAF), security information and event management (SIEM) products, and products that somehow monitor the operating environment in real time.

Together, all of these systems are worth investing in, but they barely scratch the surface of the real problem of insecure software. That's because at their very best, they do what they set out to do: Protect the broken stuff from the bad people, with a device placed at the perimeter.

The real question is: "Why is the stuff we're protecting broken in the first place?" Instead of securing broken applications against attack, we should strive to build applications that are not broken.

Software security is about building security into your software as it is being developed. That means arming developers with tools and training, reviewing software architecture for flaws, checking code for bugs, and performing some real security testing before release, among other things.

Don't get the wrong idea here and throw the firewall out with the bathwater. Firewalls are still useful, and you should definitely deploy them. Just don't believe for a minute that they solve the software security problem. They don't.

By the way, the biggest challenge in security over the past two decades has been the dissolution of the perimeter. Massively distributed applications that take advantage of the efficiency of the cloud do all they can to eradicate perimeters. Without a perimeter, firewalls, WAFs and SIEMs are very difficult to deploy effectively.

The best part about doing software security properly is that it makes your network security gear at the -- disappearing -- perimeter easier to use. Protecting nonbroken stuff from the bad people is a much better position to be in as a network security person than protecting broken stuff.

Myth 2: A tool is all you need for software security

Once the myth of the perimeter has been properly debunked, we can concentrate on the notion of looking for software defects. Fortunately, we have made great strides in the last 15 years building technology to find some kinds of security defects in code -- mostly bugs.

The earliest approach was to build black-box testing tools for simple protocols, such as HTTP. Dynamically scanning a Web app for known problems in a black box fashion is both cheap and desirable. Good luck generalizing to other protocols, though.
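
To make the black-box idea concrete, here is a minimal sketch in Python -- my illustration, with a hypothetical target URL and parameter name -- of a dynamic probe that checks whether a web app reflects attacker-controlled input back unencoded, the precursor to cross-site scripting that scanners of this kind hunt for:

    import urllib.parse
    import urllib.request

    TARGET = "http://localhost:8080/search"  # hypothetical endpoint
    MARKER = "zz9'\"<probe>"                 # unlikely-to-occur marker string

    def reflects_unescaped(param: str) -> bool:
        query = urllib.parse.urlencode({param: MARKER})
        with urllib.request.urlopen(TARGET + "?" + query) as resp:
            body = resp.read().decode("utf-8", errors="replace")
        # If the raw marker comes back verbatim, the app did not encode it.
        return MARKER in body

    print("reflected unescaped:", reflects_unescaped("q"))

Notice how little the probe needs to know: one URL, one parameter, one request-response pair. That simplicity is exactly why the approach does not generalize beyond a stateless protocol like HTTP.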

Next came the idea of looking at the code itself with a static analysis tool. Technology for automating code review has improved vastly since the exciting days of ITS4 in 1999. Static analysis capability is now available worldwide through IBM, HP, Cigital and other firms.
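
For a flavor of what these tools catch, consider this Python fragment (my illustration, not any particular tool's output). Dataflow-based static analysis is built to notice tainted input reaching a dangerous sink, such as a shell:

    import subprocess

    def list_dir_bad(path: str) -> str:
        # Tainted data reaches a shell: command injection, the classic
        # source-to-sink pattern static analyzers are built to flag.
        return subprocess.check_output("ls " + path, shell=True, text=True)

    def list_dir_good(path: str) -> str:
        # Argument vector, no shell: input can no longer alter the command.
        return subprocess.check_output(["ls", path], text=True)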

Today, there are hybrid tools that combine dynamic and static analysis in interesting ways -- mostly for HTTP again. That's all good.

What's not so good is the decidedly limited impact these tools can have. Black-box security testing only works for Web applications because the HTTP protocol is stateless and simple. Code review tools only look for bugs in code written in certain programming languages. Combination solutions such as IAST and RASP require experts if they're going to be effective in practice. Bottom line: All tools work by limiting the scope of the problem, often past the point of effectiveness.

Simply put, software security should leverage tools and automation whenever possible, but tools alone do not solve the problem. The old software testing adage applies in security as well as it did in quality assurance: A fool with a tool is still a fool.

Integrate software security testing tools and automation into your SSI for reasons of efficiency and scale. But do not confuse tool use with an SSI. An SSI uses tools to enable efficiency and scale of a good strategy, but does not devolve to only using tools.

Myth 3: Penetration testing solves everything

Security testing is important. Specialized penetration testing at the end of the software development lifecycle is a good thing to do. However, just like a tool can't solve the software security problem by itself, neither can penetration testing.

There are two main reasons for this, both rooted in economics. First and most distressingly, penetration testing is far too often misapplied as follows: Hire some reformed hackers. You know they are reformed because they told you they were reformed (hmm). Give them a set period of time -- say one week -- to perform a pen test. Incidentally, the price for a week like this is under severe commodity pressure these days and is going down. At the end of the testing period, the reformed hackers may have found five bugs. They tell you about four of them. Of the four you hear about, only one is easy to fix, but miraculously, you manage to fix two. The other two -- or is that three? -- must wait. Sound familiar? Don't do penetration testing like that.

The second problem with pen testing is more sophisticated: Problems are more expensive to fix at the end of the lifecycle. Economics dictates finding defects as early as you possibly can. Have a flaw in your idea? Redesign. Have a bug in your code? Find it while it is being typed in.

So, should you pen test? Absolutely. Pen testing is important and necessary. But any kind of "penetrate and patch" mentality is insufficient as a software security approach. It is much more powerful in tandem with training -- partially based on pen testing results -- design review, code review and security testing at the integration level. A well-structured SSI does all of those things and uses pen testing to demonstrate that all those other things generated the expected level of quality.

Myth 4: Software security is a cryptography problem

Preamble: For myth four, we use crypto as an example of a common security feature. But you can really substitute any security feature here: identity management, strong authentication, PCI compliance, among others.

Developers and software architects have been trained for years to piece out their work in terms of features and functions, and they do this by default when estimating work. So it should not be surprising that software people tend to think of security as a feature or a function, too. And the most common security feature in a developer's mind is cryptography.

The idea that you can sprinkle magic crypto fairy dust liberally around your software and it will be secure is wrong on many levels. First of all, security is a system property, not a thing, so adding a thing to your code is unlikely to make it secure. Secondly, cryptography is mind-bogglingly hard to get right: Not only is the math difficult, but applied cryptography is also riddled with massive, sneaky pitfalls that are easy to get wrong.
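
Here is one of those sneaky pitfalls, sketched in Python (my example; the key is illustrative only): comparing a message authentication code with ordinary equality can leak timing information, which is why the standard library ships a constant-time comparison.

    import hashlib
    import hmac

    KEY = b"demo-key"  # illustrative only; real keys belong in a key store

    def sign(message: bytes) -> bytes:
        return hmac.new(KEY, message, hashlib.sha256).digest()

    def verify_bad(message: bytes, tag: bytes) -> bool:
        # == can exit at the first differing byte, leaking timing
        # information that helps an attacker forge tags incrementally.
        return sign(message) == tag

    def verify_good(message: bytes, tag: bytes) -> bool:
        # Constant-time comparison, provided for exactly this reason.
        return hmac.compare_digest(sign(message), tag)

The math inside SHA-256 is fine either way; the pitfall lives entirely in the application code wrapped around it.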

In any case, cryptography is at best a great tool for helping to secure data, communications, code globs and so on, but it is no silver bullet.

Here's why: Crypto can neither find nor eradicate bugs and flaws -- but sometimes it can temporarily obscure them. Crypto can't train your developers. And even superior applied cryptography falls prey to pen testing. As but one example, if I find a SQL injection in your app that talks to an encrypted database, do you think I'll get back encrypted data or plaintext data?
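
To sketch that last point (hypothetical schema and code, assuming the database is encrypted at rest but decrypted transparently for the application):

    def get_account(conn, account_id: str):
        # The injected input rides the application's legitimate channel,
        # so every row comes back already decrypted for the application.
        # account_id = "1 OR 1=1" turns this into "return all accounts".
        query = "SELECT owner, balance FROM accounts WHERE id = " + account_id
        return conn.execute(query).fetchall()

The attacker gets plaintext, of course, because the attack executes with the application's own credentials, on the application's side of the decryption boundary.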

Software security is about integrating security practices into the way you build software, not integrating security features into your code.

Myth 5: Software security is only about finding bugs in your code

Do bugs matter for software security? Heck yes. Implementation bugs in code account for at least half of the software security problem, and finding and fixing bugs is an essential SSI practice. Only half the problem? What does the other half entail then? The other half involves a different kind of software defect at the design level: flaws. For a great set of flaws and how to avoid them, see the work of the IEEE Center for Secure Design.

So you can institute the best code review program on the planet, with the strongest tools known to humanity, but you will be very unlikely to find and fix flaws that way. Flaws are the domain of threat modeling and architecture analysis, and that domain has so far resisted any real practical automation -- it still has to be done by experienced people. And like I said, flaws account for half of the problem.
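
A hypothetical Python fragment of mine makes the distinction concrete. Unlike the injection bugs above, nothing here matches a scanner's patterns; the problem is a decision about where trust lives, which only a human reviewing the design will see:

    def handle_request(headers: dict) -> str:
        # A flaw, not a bug: the authorization decision trusts a
        # client-supplied header. Every individual line is "correct".
        role = headers.get("X-User-Role", "user")
        if role == "admin":
            return "admin console"  # privileged view
        return "user page"

    # Any client can simply claim the role:
    print(handle_request({"X-User-Role": "admin"}))  # -> admin console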

Of course, this myth has multiple parts, and we already alluded to the second. Finding bugs is great, and there are many different methods for finding bugs (see myths two and three), but unless you fix what you find, security does not improve. Sadly, this state of finding without fixing is incredibly common. Part of the problem is that technology that was developed to find bugs is often not very helpful at suggesting how to fix them.

Just what sort of bug list should we use anyway? The notion of focusing on only 10 bugs -- say the OWASP Top Ten -- is ridiculous. Static analysis tools find literally thousands of different bugs -- though they don't fix them. Over-focusing on 10 is great folly.

So find bugs, but don't limit your search to only 10. And then fix them. But make sure you also train your developers not to make new bugs every day -- otherwise, you're setting up a nice hamster wheel of pain -- and focus some attention on flaws at the design level. Finally, don't forget the pen testing.

Myth 6: Software security should be solved by developers

Preamble: Myth 6 can apply equally well to "security people" or "compliance people" as it does to "developers." That is, picking only one population to hang this problem on is a mistake in both directions.

Who should do software security? We know from the BSIMM that an SSI should be led by a group called the software security group (SSG) -- by the way, structuring an SSG is an art about which we have written before. An SSG is not simply made up of developers. In fact, the notion that all developers -- and only developers -- should collectively and magically be responsible for software security sounds great in theory, but never works in practice. 

So, form an SSG. And make sure the SSG includes software people with deep development chops, security people with strong architectural kung fu, and people who can interact with the business.

It should be obvious from what we've said here that the idea of simply training all of the developers is another one of those "Yes, do that, but not only that" software security best practices. Should you train your developers and arm them with software security know-how? Heck yes. But you also need to check code for bugs, check design for flaws and check systems for security holes. Why? Well, aside from the obvious reasons, your developers don't write all the code you deploy. Even if your code is perfect, all that vendor code and other stuff created in groups without an SSI will trip you up every time.

Myth 7: Only high-risk applications need to be secured

Our last myth is about scale, and it's a very widespread myth that needs more active debunking. Today's application portfolios are often quite large -- thousands of apps -- and getting started back in the day meant identifying those apps that carried the most risk and focusing all of the attention on them.

Those days are over. Risk management can fail, and when it does, it fails hard. Today, smart firms know that they need to cover their entire software portfolio. You can still use risk management to drive level of effort, but in all cases -- even with the most lowly of low-risk applications -- you must make sure the level of effort is never zero. Automation saves the day here by providing cost-effective ways to cover low-risk apps. Even architecture analysis can be considered in this light.
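
One way to picture "never zero" is as a portfolio policy in which every risk tier, including the lowest, maps to a non-empty set of activities, with automation carrying the cheap tiers. This sketch is my illustration, not BSIMM guidance:

    # Illustrative policy: every tier gets a non-empty activity set.
    PORTFOLIO_POLICY = {
        "high":   ["static analysis", "architecture review", "pen test"],
        "medium": ["static analysis", "dynamic scan"],
        "low":    ["automated static analysis"],  # cheap, but never zero
    }

    def activities_for(risk_tier: str) -> list:
        activities = PORTFOLIO_POLICY[risk_tier]
        assert activities, "level of effort must never be zero"
        return activities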

To sum up this important myth: High-risk apps must be secured, and so must low-risk apps -- possibly to different extents.

Debunk the seven myths of software security best practices

Clearly, software security is about more than point solutions. No point solution by itself can solve software security. In fact, a successful SSI is about culture change in an organization from top to bottom and back again. For more about the kinds of activities an SSI should undertake, as well as how to go about measuring an SSI, see the BSIMM.

Next Steps

Learn about the BSIMM program.

Learn about building security into the software development lifecycle.

Find lots more software security guidance from expert Gary McGraw.
