McGraw on Heartbleed shock and awe: What are the real lessons?

Secure software development expert Gary McGraw said the main lesson of Heartbleed is to control open source risk.

Co-authored with Aaron Bedra

The OpenSSL bug called Heartbleed (CVE-2014-0160, applicable to OpenSSL versions 1.0.1 through 1.0.1f inclusive) hit the scene just like any other security bug, with "the full monty" of hype. But Heartbleed is different (not because of its technical trickiness, but because of its security implications): this one actually warrants the hype. If you haven't considered Heartbleed's implications for both your firm's and your own Web use, patched your servers, revoked your certificates and conjured new ones, changed your passwords all over the Web, and alerted your friends and colleagues as well, do it now.


In this short article, we want to consider the bug from a "lessons learned" perspective. Instead of revisiting what the bug is about (very short answer: It's a fairly mundane coding error) or how a bug like this could be uncovered and fixed or otherwise avoided during design and development, we will focus on what should be done about the rest of the supremely awful OpenSSL code, and how similar open source projects can take advantage of the software security activities and practices described in the Building Security in Maturity Model (BSIMM).

The Heartbleed bug itself is trivial

We're going to skip talking about the particulars of the bug since there are plenty of great technical sources available. For an overview and introduction, you should start with the basics at the Heartbleed website. Also see Matthew Green's Heartbleed-related blog post. A particularly good treatment of the bug and methods that can be used to find it (by James Kupsch and Bart Miller) is found here.

Suffice it to say that technology does exist to find bugs like Heartbleed (an out-of-bounds memory read). In principle, this bug should be caught at compile time (or perhaps even earlier) by a static analysis tool like Coverity, HP/Fortify, Cigital SecureAssist or IBM/AppScan Source. But when the code is a big mess and suffers from "code by committee"-style problems like OpenSSL does, static analysis becomes much less effective than it otherwise would be. In fact, OpenSSL had already been put through the Coverity wringer several times, as part of Coverity's admirable Coverity Scan initiative, before the Heartbleed bug came to light.

A particularly useful write-up from Coverity's perspective can be found here.
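For readers who want to see the shape of the flaw without wading into OpenSSL itself, here is a minimal sketch in C. This is hypothetical code (the struct, field names and functions are ours, not OpenSSL's): an attacker-controlled length field drives a memcpy without being checked against the bytes actually received, and the fix, mirroring the spirit of the actual patch, is simply to discard requests whose claimed length exceeds the real payload.

```c
#include <stdlib.h>
#include <string.h>

/* A minimal sketch of the bug pattern (hypothetical code, not the actual
 * OpenSSL source): an attacker-controlled length field drives a memcpy
 * without being checked against the number of bytes actually received. */
struct heartbeat {
    unsigned short claimed_len;   /* length field read off the wire */
    const unsigned char *payload; /* bytes actually received */
    size_t actual_len;            /* how many bytes really arrived */
};

unsigned char *echo_vulnerable(const struct heartbeat *hb)
{
    unsigned char *reply = malloc(hb->claimed_len);
    if (reply == NULL)
        return NULL;
    /* BUG: trusts claimed_len. If claimed_len > actual_len, this reads
     * past the payload buffer and leaks adjacent heap memory. */
    memcpy(reply, hb->payload, hb->claimed_len);
    return reply;
}

unsigned char *echo_fixed(const struct heartbeat *hb)
{
    /* FIX: silently discard requests whose claimed length exceeds the
     * payload that actually arrived. */
    if (hb->claimed_len > hb->actual_len)
        return NULL;
    unsigned char *reply = malloc(hb->claimed_len);
    if (reply == NULL)
        return NULL;
    memcpy(reply, hb->payload, hb->claimed_len);
    return reply;
}
```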

OpenSSL, code globs, platforms and security risk

When you step back and think about the OpenSSL codebase from a "fix" perspective, the first thing that stands out is the excessive baggage (simply put, the codebase is a big mess). This mess points firmly in the direction of a complete rewrite. But a rewrite is radical, and the options must be considered carefully. Of course, a rewrite would also bring a chance to right the wrongs of the current codebase and pave the way for better, more robust crypto systems in the future.

None of this comes easily or cheaply, nor does it come instantaneously. In the meantime, we need to fix what we have until a rewrite becomes a reality. To do that, we need to improve our tools and cut some of the dead weight that OpenSSL is currently carrying.

As Kupsch and Miller detail in their article, there are some obstacles to finding bugs like Heartbleed with current approaches. Before we get into tooling and the design process, let's take a minute to explore some of the more obvious points. While it's certainly true that there are a lot of diverse systems available today, OpenSSL's goal of supporting all of them equally will remain a constant source of issues until support is pared back to today's modern platforms. The sheer complexity of supporting so many platforms brings along a laundry list of issues. Creating proper abstractions for memory management while keeping performance in check across too many platforms leads to bugs just like Heartbleed. Although the issue is obvious in hindsight, a simple call to calloc instead of malloc could have made this bug simply disappear. Since malloc and calloc are not the same (or even properly supported) across the 80-some platforms OpenSSL supports, however, solving what would normally be a trivial issue becomes death by 80-some cuts.
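To make the calloc point concrete, here is a minimal sketch (ours, purely illustrative): calloc guarantees zeroed memory, so stale secrets from earlier allocations cannot ride along in a new buffer the way they can with malloc.

```c
#include <assert.h>
#include <stdlib.h>

/* A minimal sketch (hypothetical, not OpenSSL code) of why allocator
 * choice matters for information leaks. calloc guarantees zeroed memory;
 * malloc makes no such promise, so a fresh block may still hold stale
 * bytes from earlier allocations (say, a freed session key). An
 * out-of-bounds read that wanders into uninitialized malloc'd memory can
 * leak those secrets; the same read over calloc'd memory leaks only zeros. */
int main(void)
{
    unsigned char *z = calloc(1, 64);
    assert(z != NULL);
    for (size_t i = 0; i < 64; i++)
        assert(z[i] == 0); /* guaranteed by the C standard */
    free(z);

    unsigned char *m = malloc(64);
    assert(m != NULL);
    /* No guarantee here: the contents of m are indeterminate and may be
     * whatever previously occupied this heap chunk. */
    free(m);
    return 0;
}
```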

As Kupsch and Miller point out, even simple bugs like this one are hard to spot with traditional software assurance tools (especially if the code is a mess). Additional tooling can support the effort, but only if some of the darker corners of the C programming language are avoided in the process. For what it's worth, clever C hacks involving memory management never sit nicely in the minds of security-focused reviewers. A more dedicated approach to engineering and testing needs to follow along with the evolution and improvement of assurance tools. For example, aggressive use of dynamic analysis tools goes a long way. Traditional fuzz testing yields some impressive results and has been adopted by some pretty large and popular projects (see, for example, this Chromium blog entry). In addition to fuzz testing, property-based (or generative) testing is also useful. The QuickCheck model has been used to find complicated and long-lived bugs in complex software and can be adapted to work against C programs.
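As a concrete (and hedged) example of what such dynamic tooling looks like in practice, here is a minimal fuzzing harness using clang's libFuzzer entry point; parse_heartbeat is a hypothetical stand-in for whatever parser you want to exercise:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical function under test -- a stand-in for a real record
 * parser such as a TLS heartbeat handler. */
int parse_heartbeat(const uint8_t *buf, size_t len);

/* libFuzzer entry point: the fuzzer calls this millions of times with
 * mutated inputs. Build with:
 *   clang -g -fsanitize=fuzzer,address harness.c parser.c
 * Paired with AddressSanitizer, an out-of-bounds read like Heartbleed
 * becomes an immediate, reproducible crash instead of a silent leak. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    parse_heartbeat(data, size);
    return 0; /* inputs that do not crash are simply uninteresting */
}
```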

Then there is complete refactoring or rewriting. It turns out that the rewrite path is already being explored by the LibreSSL project. Despite the site's liberal use of Comic Sans and the <blink> tag, this is a new and much-needed effort; there's even a bit of humor to be found in it. As the team kills support for numerous legacy platforms, it is doing a top-notch job of improving the code and making it cleaner and more readable. Of course, efforts like these alone won't prevent the next Heartbleed-type bug from happening.

Ultimately, there is a basic economics problem here. Software security is non-trivial, and people get paid lots of money to do it properly. Expecting open source projects to recruit volunteer security analysts is folly. Open source needs to address the economics of security analysis head-on, paying security professionals for their time and expertise.

Lessons learned, AKA stuff we already knew

Software security has come a long way in the last decade. Though progress has not spread to all eight million (or so) developers on the planet, we have managed to impact the work of 272,358 of them directly through the BSIMM community. (We are 1/29th of the way done.)

There are a number of obvious lessons to be learned from the Heartbleed bug. (Well, obvious to software security people, anyway.) Most of these amount to things we already knew:

  1. C is a terrible language (and just for the record, C++ is worse). Pointers, complex non-type-safe use of memory and data structures, and execution paths that can foil even the smartest developer (or group of developers) are just three specific bad things about C (see the sketch after this list). C is like assembly language on steroids. Sure, C can be used by masters to create super-tight, very efficient code for embedded systems and kernels, but in the hands of the masses, C is a menace. Perhaps there should be some kind of license requirement to use C, but we'll leave that idea for another article. Meanwhile, use a type-safe language; there are plenty of them out there.
  2. Static analysis tools can't find stuff if the code is a big (control flow) mess. Complexity -- which makes up one third of the Trinity of Trouble -- is the friend of the attacker and the enemy of the builder. If you can't understand some code that you are looking over, chances are it is overly complex. FWIW, if you choose to use C, you have to work hard to avoid complexity. Static analysis has come a long way in the last decade, but it can't overcome terrible code. Make your code as simple as possible.
  3. Code that you did not write yourself can put you at serious risk. Oh, and guess what -- most of the software you rely on every day, in both your personal life and at work, was written by somebody else. It has gotten to the point that vendor control (of software providers) is a critical aspect of controlling software risk. Note that the major banks are spending millions of dollars getting a handle on their software vendors. If your developers are doing a great job with software security, that is fantastic, but how are your software vendors doing? Ask your vendors explicitly about software security. (BTW, this goes for hardware providers as well: Guess how many routers have OpenSSL -- and the Heartbleed bug -- baked right into their chips?)
  4. Computer security problems are almost always caused by bad software. We've talked about the firewalls, fairy dust and forensics failure before (and will no doubt talk about it again). It is high time to devote more resources to the software security problem. Your firm and your firm's software vendors should be busy implementing a software security initiative. Implement a software security initiative. Really.
  5. Open source is not more secure. Nuff said. Many firms tightly control the use of open source to attain some control over software risk. Some ban open source entirely. Just know that open source is not magically secure. Manage your open source risk.
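As promised in point one, here is a small sketch (our illustration, not code from any real project) of the kind of silent C footgun that foils smart developers: mixed signed/unsigned arithmetic quietly inverting a bounds check.

```c
#include <stdio.h>
#include <string.h>

/* A classic C pitfall: comparing a signed int against sizeof (a size_t)
 * promotes the signed value to unsigned, so -1 becomes SIZE_MAX and the
 * bounds check silently inverts. The compiler accepts this by default
 * (-Wsign-compare will warn, but only if you ask). */
int main(void)
{
    char buf[16];
    int len = -1; /* e.g., an error code returned by a failed read() */

    if (len < sizeof(buf)) {
        /* Mathematically -1 < 16, so a developer expects this branch... */
        memcpy(buf, "short", 5);
        printf("copied\n");
    } else {
        /* ...but the promotion sends us here: len is compared as a huge
         * unsigned value. Code structured the other way around would
         * happily over-read or over-write. */
        printf("bounds check inverted: len compared as %zu\n", (size_t)len);
    }
    return 0;
}
```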

Is there hope for open source?

Point five above is worth a few more bits. By now, anyone who believes in the fallacious "many eyeballs" argument about security and open source is either a religious open source zealot or a nutcase. FWIW, McGraw wrote about this with Viega long ago in Chapter 4 of Building Secure Software (long ago, that's right -- 2001). The problem is, though, that we now know much more about software security and about integrating numerous security touchpoints into the software development lifecycle than we did back in 2001; open source projects do not appear to be taking advantage of what we've learned.

The data and the ideas in the BSIMM are free; we published them under a Creative Commons license. We welcome contact from open source project leads who want to leverage the lessons of the BSIMM. Though the current scattershot approach to securing open source has failed spectacularly, there is plenty of hope for more secure open source. But it's time to approach software security the same way serious corporations do.

This was first published in April 2014