Feature

Face-Off: Is vulnerability research ethical?


COUNTERPOINT by Marcus Ranum

One of the vulnerabilities exploited by the Morris worm was a buffer overflow in BSD fingerd(8). Bruce argues that seeking out vulnerabilities and exposing them will help improve the quality of software, but it obviously has not: the last 20 years of software development (don't call it "engineering," please!) absolutely refute this position.
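For readers who haven't met that bug class, here is a minimal C sketch of the unbounded-read pattern behind the fingerd hole. This is illustrative only, not the actual fingerd source; the buffer size and output are made up:

<pre>
#include &lt;stdio.h&gt;

int main(void)
{
    char request[512];          /* fixed-size stack buffer */

    /* gets() has no way to know the buffer's size, so any request
     * longer than 511 bytes writes past the end of the stack frame.
     * The API itself is the bug; C11 removed gets() for this reason. */
    gets(request);
    printf("finger request: %s\n", request);
    return 0;
}
</pre>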

Not only do we still have buffer overflows, but I think it's safe to say that not a single category of vulnerability has been definitively eradicated. That's where proponents of vulnerability "research" make a basic mistake: if you want to improve things, you need to search for cures for entire categories of problems, not individual instances. In general, the state of vulnerability "research" has remained stuck at "look for one more bug in an important piece of software so I can collect my 15 seconds of fame, a 'thank you' note extorted from a vendor, and my cash bounty from the vulnerability market." That's not "research"; that's just plain "search."

The economics of the vulnerability game don't include "making software better" as one of the options. They do, however, include "making software more expensive." When I started in the software business, 10 percent annual maintenance was considered egregious; now companies are demanding 20 percent and sometimes 25 percent.

Why?

The vulnerability game has given vendors a fantastic new way to lock in customers--if you stop buying maintenance and get off the upgrade hamster wheel, you're guaranteed to get reamed by some hack-robot within six months of your software getting out of date.

One place where Bruce and I agree is on the theory that you need to think in terms of failure modes in order to build something failure-resistant. Or, as Bruce puts it, "think like an attacker." But, really, it's just a matter of understanding failure modes--whether it's an error from a hacking attempt or just a fumble-fingered user, the software needs to do the correct thing. That's Programming 101: check inputs, fail safely, don't expect the user to read the manual, etc.
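To make that checklist concrete, here is a short C sketch. parse_port is a hypothetical helper, not from any particular codebase; the point is that one validation path handles a hostile input and a fumble-fingered one identically:

<pre>
#include &lt;errno.h&gt;
#include &lt;stdio.h&gt;
#include &lt;stdlib.h&gt;

/* Hypothetical helper: parse a TCP port number, rejecting anything
 * unexpected instead of trusting the caller or the user. */
static int parse_port(const char *s, unsigned short *out)
{
    char *end;
    errno = 0;
    long v = strtol(s, &end, 10);
    if (errno != 0 || end == s || *end != '\0')
        return -1;                 /* not a clean decimal number */
    if (v < 1 || v > 65535)
        return -1;                 /* syntactically fine, semantically not */
    *out = (unsigned short)v;
    return 0;
}

int main(int argc, char **argv)
{
    unsigned short port;
    if (argc != 2 || parse_port(argv[1], &port) != 0) {
        fprintf(stderr, "usage: example <port 1-65535>\n");
        return EXIT_FAILURE;       /* fail safely: refuse, don't guess */
    }
    printf("port: %hu\n", port);
    return EXIT_SUCCESS;
}
</pre>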

But we don't need thousands of people who know how to think like bad guys--we need dozens of them at most. New categories of errors don't come along very often--the last big one I remember was Paul Kocher's paper on CPU/timing attacks against public-key exponents. Once he published that, in 1996, the cryptography community added that category of problem to its list of things to worry about and moved on. Why is it that software development doesn't react similarly? Rather than trying to solve, for example, buffer overflows as a category of problem, we've got software giants like Microsoft spending millions trying to track down individual buffer overflows in code to eradicate them.
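To see why timing leaks are a category and not a one-off, here is a toy C sketch of square-and-multiply modular exponentiation. This is not any real library's code, and it uses 64-bit arithmetic for illustration only (it assumes the modulus fits in 32 bits so the products don't overflow); real implementations use multiprecision integers:

<pre>
#include &lt;stdint.h&gt;

/* Toy modular exponentiation. The branch on each secret exponent
 * bit does extra work for 1-bits, so the running time depends on
 * the key--the data-dependent behavior Kocher's 1996 paper measured. */
uint64_t modexp(uint64_t base, uint64_t exp, uint64_t mod)
{
    uint64_t result = 1 % mod;                /* handles mod == 1 */
    base %= mod;
    while (exp > 0) {
        if (exp & 1)                          /* secret-dependent branch */
            result = (result * base) % mod;   /* extra multiply on 1-bits */
        base = (base * base) % mod;
        exp >>= 1;
    }
    return result;
}
</pre>

And the fix the cryptography community adopted was categorical--constant-time exponentiation techniques and blinding--not a hunt for individual leaky call sites.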

The biggest mistake people make about the vulnerability game is falling for the ideology that "exposing the problem will help." I can prove to you how wrong that is, simply by pointing to Web 2.0 as an example.

Has what we've learned about writing software over the last 20 years been expressed in the design of Web 2.0? Of course not! Web 2.0 can't even be said to have a "design." If showing people what vulnerabilities can do were going to somehow encourage software developers to be more careful about programming, Web 2.0 would not be happening.

Trust model? What's that? The so-called vulnerability "researchers" are already sharpening their knives for the coming feast. If we were really interested in making software more secure, we'd be trying to get software development environments to facilitate the writing of safer code--fixing entire categories of bugs at the point of maximum leverage.
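One sketch of what "fixing the category at the point of leverage" could look like: a copy interface in which the unbounded write is inexpressible. copy_bounded is a hypothetical name with strlcpy-like semantics, not a standard C function:

<pre>
#include &lt;string.h&gt;

/* Copy src into dst, never writing more than dstsize bytes and
 * always NUL-terminating. Returns strlen(src) so a caller can
 * detect truncation (return value >= dstsize). */
size_t copy_bounded(char *dst, size_t dstsize, const char *src)
{
    size_t n = strlen(src);
    if (dstsize == 0)
        return n;                    /* nothing can safely be written */
    size_t tocopy = (n >= dstsize) ? dstsize - 1 : n;
    memcpy(dst, src, tocopy);
    dst[tocopy] = '\0';              /* always NUL-terminated */
    return n;
}
</pre>

If the toolchain offered only calls shaped like this and flagged strcpy() and gets() as errors, every program rebuilt against it would inherit the fix for free--no per-bug hunt required.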

Bruce's argument that vulnerability "research" helps teach us how to make better software would carry some weight if software were getting better rather than more expensive and more complex. In fact, the latter is what's happening--and it scares me.

Coming in July/August: Chinese Cyber-Attacks: Myth or Menace?
Send comments on this column to feedback@infosecuritymag.com.

This was first published in May 2008
