Security and IT managers should use the New Year to change the way vulnerability information is made available to the bad guys.
A set of guidelines for vulnerability reporting released by the Organization for Internet Safety (OIS) recommends observing a 30-day grace period between the release of a patch and the publication of the vulnerability's technical details. By understanding the policies that vendors and security researchers use to disclose vulnerability information, managers can decide whom to do business with based on whether they follow this advice.
If customers insist that software vendors follow the guidelines and refuse to employ researchers who persist in releasing full technical details of vulnerabilities before fixes are available, we can start to turn the tide.
It used to be that when someone found a security flaw in a piece of software, a report to the vendor requesting a fix was often greeted with threats of lawsuits. Happily, those days are largely behind us, due in large part to the public release of vulnerability information. Pressure from vendors' customers, and wariness of bad public relations, has driven software companies to take the security of their software seriously -- not to mention the desire to use good security as a competitive differentiator.
These days, software vendors that intentionally ignore vulnerability reports or threaten researchers with retaliation are few and far between -- though it does still happen. Full disclosure of vulnerability information succeeded in demonstrating that security flaws in software couldn't be ignored. Unfortunately, wide dissemination of the technical details of exploiting vulnerabilities has also succeeded in making their widespread exploitation nearly trivial. Code that demonstrates a vulnerability is typically very easy to transform into a tool a malicious person can use to break into systems.
It's no longer necessary, in most cases, to wield the club of full, immediate disclosure to motivate software vendors to fix flaws in their software. Rather, we suggest that delaying the full disclosure of technical detail doesn't hurt the interests of those with legitimate need for the information, but denies aid to those who seek merely to exploit the problem.
For example, scientific research on software engineering that analyzes large bodies of vulnerability information over time requires full technical information about vulnerabilities, but has no need for that information in the first 30 days it's available. A system cracker looking for new ways to break into systems, by contrast, wants information about vulnerabilities that no one else has, or at least that others haven't had a chance to react to. For the system cracker, so-called zero-day (pre-disclosure) vulnerability information is the most valuable, but because few administrators can install a patch on the day it's released, many systems remain open to attack in the first few days afterward. Technical details about the vulnerability help system crackers develop exploits more quickly.
In 2004, the security community has an opportunity to try a different approach. The OIS guidelines are a starting point for rethinking how we disclose vulnerability information.
About the author
Scott Blake, CISM, CISSP, is VP of information security at BindView. Opinions expressed in this article are those of the Organization for Internet Safety.