Face-Off: Is vulnerability research ethical?

Bruce Schneier and Marcus Ranum debate the ethics of vulnerability research

Security Experts Bruce Schneier & Marcus Ranum
Offer Their Opposing Points of View

POINT by Bruce Schneier

The standard way to take control of someone else's computer is by exploiting a vulnerability in a software program on it. This was true in the 1960s when buffer overflows were first exploited to attack computers. It was true in 1988 when the Morris worm exploited a Unix vulnerability to attack computers on the Internet, and it's still how most modern malware works.

Vulnerabilities are software mistakes--mistakes in specification and design, but mostly mistakes in programming. Any large software package will have thousands of mistakes. These vulnerabilities lie dormant in our software systems, waiting to be discovered. Once discovered, they can be used to attack systems. This is the point of security patching: eliminating known vulnerabilities. But many systems don't get patched, so the Internet is filled with known, exploitable vulnerabilities.
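
To make the point concrete, here is a minimal, hypothetical C sketch of the kind of programming mistake being described: a fixed-size buffer filled from caller-supplied input with no length check, so a long enough request overwrites adjacent memory. (The function and names are illustrative only, not code from any system mentioned here.)

    #include <string.h>

    /* Hypothetical sketch of a classic programming mistake: the 64-byte
     * buffer is filled from caller-supplied input with no length check,
     * so a longer request overwrites adjacent memory. */
    void handle_request(const char *request)
    {
        char name[64];
        strcpy(name, request);   /* no bounds check: the vulnerability */
        /* ... use name ... */
    }

In normal operation the missing check does nothing visible, which is exactly why such a bug can sit dormant until someone goes looking for it.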

New vulnerabilities are hot commodities. A hacker who discovers one can sell it on the black market, blackmail the vendor with disclosure, or simply publish it without regard to the consequences. Even if he does none of these, the mere fact the vulnerability is known by someone increases the risk to every user of that software. Given that, is it ethical to research new vulnerabilities?

Unequivocally, yes. Despite the risks, vulnerability research is enormously valuable. Security is a mindset, and looking for vulnerabilities nurtures that mindset. Deny practitioners this vital learning tool, and security suffers accordingly.

Security engineers see the world differently than other engineers. Instead of focusing on how systems work, they focus on how systems fail, how they can be made to fail, and how to prevent--or protect against--those failures. Most software vulnerabilities don't ever appear in normal operations, only when an attacker deliberately exploits them. So security engineers need to think like attackers.

People without the mindset sometimes think they can design security products, but they can't. And you see the results all over society--in snake-oil cryptography, software, Internet protocols, voting machines, and fare card and other payment systems. Many of these systems had someone in charge of "security" on their teams, but it wasn't someone who thought like an attacker.

This mindset is difficult to teach, and may be something you're born with or not. But in order to train people possessing the mindset, they need to search for and find security vulnerabilities--again and again and again. And this is true regardless of the domain. Good cryptographers discover vulnerabilities in others' algorithms and protocols. Good software security experts find vulnerabilities in others' code. Good airport security designers figure out new ways to subvert airport security. And so on.

This is so important that when someone shows me a security design by someone I don't know, my first question is, "What has the designer broken?" Anyone can design a security system that he cannot break. So when someone announces, "Here's my security system, and I can't break it," your first reaction should be, "Who are you?" If he's someone who has broken dozens of similar systems, his system is worth looking at. If he's never broken anything, the chance is zero that it will be any good.

Vulnerability research is vital because it trains our next generation of computer security experts. Yes, newly discovered vulnerabilities in software and airports put us at risk, but they also give us more realistic information about how good the security actually is. And yes, there are more and less responsible--and more and less legal--ways to handle a new vulnerability. But the bad guys are constantly searching for new vulnerabilities, and if we have any hope of securing our systems, we need the good guys to be at least as competent. To me, the question isn't whether it's ethical to do vulnerability research. If someone has the skill to analyze and provide better insights into the problem, the question is whether it is ethical for him not to do vulnerability research.

COUNTERPOINT by Marcus Ranum

One of the vulnerabilities that was exploited by the Morris worm was a buffer overflow in BSD fingerd(8). Bruce argues that searching out vulnerabilities and exposing them is going to help improve the quality of software, but it obviously has not--the last 20 years of software development (don't call it "engineering," please!) absolutely refutes this position.

Not only do we still have buffer overflows, I think it's safe to say there has not been a single category of vulnerabilities definitively eradicated. That's where proponents of vulnerability "research" make a basic mistake: if you want to improve things, you need to search for cures against categories of problems, not individual instances. In general, the state of vulnerability "research" has remained stuck at "look for one more bug in an important piece of software so I can collect my 15 seconds of fame, a 'thank you' note extorted from a vendor, and my cash bounty from the vulnerability market." That's not "research," that's just plain "search."

The economics of the vulnerability game don't include "making software better" as one of the options. They do, however, include "making software more expensive." When I started in the software business, 10 percent annual maintenance was considered egregious, but now companies are demanding 20 percent and sometimes 25 percent.

Why?

The vulnerability game has given vendors a fantastic new way to lock in customers--if you stop buying maintenance and get off the upgrade hamster wheel, you're guaranteed to get reamed by some hack-robot within six months of your software getting out of date.

One place where Bruce and I agree is on the theory that you need to think in terms of failure modes in order to build something failure-resistant. Or, as Bruce puts it, "think like an attacker." But, really, it's just a matter of understanding failure modes--whether it's an error from a hacking attempt or just a fumble-fingered user, software needs to be able to do the correct thing. That's Programming 101: check inputs, fail safely, don't expect the user to read the manual, etc.
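
As a hypothetical illustration of that "Programming 101" point, the earlier sketch rewritten defensively checks its input and fails safely instead of trusting the caller (the function name and the 64-byte limit are again invented for illustration):

    #include <stdio.h>
    #include <string.h>

    /* Defensive rewrite of the earlier sketch: validate the input and
     * fail safely rather than trusting the caller to behave. */
    int handle_request_checked(const char *request)
    {
        char name[64];

        if (request == NULL || strlen(request) >= sizeof(name))
            return -1;               /* reject bad input instead of overflowing */

        snprintf(name, sizeof(name), "%s", request);
        /* ... use name ... */
        return 0;
    }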

But we don't need thousands of people who know how to think like bad guys--we need dozens of them at most. New categories of errors don't come along very often--the last big one I remember was Paul Kocher's paper on CPU/timing attacks against public-key exponents. Once he published that, in 1996, the cryptography community added that category of problem to its list of things to worry about and moved on. Why is it that software development doesn't react similarly? Rather than trying to solve, for example, buffer overflows as a category of problem, we've got software giants like Microsoft spending millions trying to track down individual buffer overflows in code to eradicate them.
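
Kocher's result was about leaking private-key bits through the timing of modular exponentiation, but the same category of flaw shows up in code as small as a secret comparison. A minimal, hypothetical C sketch of the pattern and its category-level fix:

    #include <stddef.h>

    /* Naive comparison: it returns as soon as a byte differs, so its
     * running time leaks how many leading bytes of the guess matched. */
    int leaky_equal(const unsigned char *a, const unsigned char *b, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            if (a[i] != b[i])
                return 0;
        return 1;
    }

    /* Category-level fix: touch every byte no matter where the mismatch
     * occurs, so timing no longer depends on the secret. */
    int constant_time_equal(const unsigned char *a, const unsigned char *b, size_t n)
    {
        unsigned char diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= (unsigned char)(a[i] ^ b[i]);
        return diff == 0;
    }

Once the category is understood, the second version can be adopted everywhere; no one has to keep rediscovering the first one bug at a time.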

The biggest mistake people make about the vulnerability game is falling for the ideology that "exposing the problem will help." I can prove to you how wrong that is, simply by pointing to Web 2.0 as an example.

Has what we've learned about writing software the last 20 years been expressed in the design of Web 2.0? Of course not! It can't even be said to have a "design." If showing people what vulnerabilities can do were going to somehow encourage software developers to be more careful about programming, Web 2.0 would not be happening.

Trust model? What's that? The so-called vulnerability "researchers" are already sharpening their knives for the coming feast. If we were really interested in making software more secure, we'd be trying to get the software development environments to facilitate the development of safer code--fix entire categories of bugs at the point of maximum leverage.
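
One small, hypothetical example of that kind of leverage in a C shop: a project-wide header that makes the compiler reject the unbounded string functions outright, so the class of bug in the first sketch cannot be written at all, rather than being hunted one instance at a time. (PROJECT_BANNED_H is an invented name; the pragma is a GCC/Clang feature.)

    /* Project-wide header: ban an entire category of unsafe calls at
     * compile time instead of searching for their individual uses. */
    #ifndef PROJECT_BANNED_H
    #define PROJECT_BANNED_H

    #include <string.h>   /* pull in the standard declarations first */
    #include <stdio.h>

    /* Any use of these identifiers after this point is a compile error. */
    #pragma GCC poison strcpy strcat sprintf gets

    #endif /* PROJECT_BANNED_H */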

If Bruce's argument is that vulnerability "research" helps teach us how to make better software, it would carry some weight if software were getting better rather than more expensive and complex. In fact, the latter is happening--and it scares me.

Coming in July/August: Chinese Cyber-Attacks: Myth or Menace?
Send comments on this column to feedback@infosecuritymag.com.

This was first published in May 2008
