vulnerability disclosure

Vulnerability disclosure is the practice of reporting security flaws in computer software or hardware.

Vulnerabilities may be disclosed directly to the parties responsible for the flawed systems by security researchers or by other involved parties, including in-house developers as well as third-party developers who work with the vulnerable systems. Typically, vendors or developers wait until a patch or other mitigation is available before making the vulnerability public.

Vulnerability disclosure issues

Vulnerability disclosure and how it is performed can be a contentious issue because vendors prefer to keep the vulnerability under wraps until they have a patch ready to distribute to users. Conversely, researchers and security professionals, as well as enterprises whose data or systems may be at risk, prefer that disclosures be made public sooner.

When it comes to vulnerability disclosure, there are several different groups of stakeholders, each of which has different priorities. First, there are the vendors, developers or manufacturers of the vulnerable systems or services, who would prefer that vulnerabilities be disclosed only to themselves and made public only after patches are available.

Users of the vulnerable products or services form another group of stakeholders; they prefer that the systems they use be patched as quickly as possible. If attackers begin exploiting a vulnerability before a patch is available, however, users still favor disclosure as long as there are other ways to mitigate or eliminate the threat.

Finally, there are the security researchers who uncover the vulnerabilities. In general, their preference is that vulnerabilities be fixed quickly so they may publish details of the vulnerabilities they discovered.

Types of vulnerability disclosure

Responsible disclosure is one approach that numerous vendors and researchers have used for many years. Under a responsible disclosure protocol, researchers tell the system providers about a vulnerability, give them a reasonable timeline to investigate and fix it, and publicly disclose the vulnerability only once it has been patched. Typical responsible disclosure guidelines allow vendors 60 to 120 days to patch a vulnerability. In many cases, vendors negotiate with researchers to modify the schedule and allow more time to fix difficult flaws.

In 2010, Microsoft attempted to reshape the disclosure landscape by introducing a new concept of coordinated disclosure, also referred to as coordinated vulnerability disclosure (CVD), under which researchers and vendors work together to identify and fix the vulnerabilities and negotiate a mutually agreeable amount of time for patching the product and informing the public.

While high-profile vulnerability disclosures often involve a vendor or developer responsible for the vulnerable product and research teams responsible for discovering the vulnerability, there are other disclosure options for vulnerabilities:

  • Self-disclosure: Occurs when the manufacturers of products with vulnerabilities discover the flaws and make them public, usually simultaneously with publishing patches or other fixes.
  • Third-party disclosure: Occurs when the parties reporting the vulnerabilities are not the owners, authors or rights holders of the hardware, software or systems. Third-party disclosures are usually made by security researchers who inform the manufacturers, but third-party disclosure may also involve a coordinating organization such as the CERT Coordination Center (CERT/CC) at Carnegie Mellon University in Pittsburgh.
  • Vendor disclosure: Occurs when researchers report vulnerabilities only to the application vendors, which then work to develop patches.
  • Full disclosure: Occurs when full details of a vulnerability are released publicly, often as soon as they become known.

Vulnerability disclosure policy and guidelines

A vulnerability disclosure policy (VDP) provides straightforward guidelines for submitting security vulnerabilities to an organization, giving outside parties a sanctioned channel for reporting flaws in the company's products or services.

A VDP should contain the following components, according to the National Telecommunications and Information Administration:

  • Brand Promise: Allows a company to demonstrate its commitment to security by assuring customers, users and the public that safety and security are important. The company describes the work it has done related to vulnerabilities, as well as what it expects to do going forward.
  • Initial Program and Scope: Indicates which systems and capabilities are fair game and which are off limits to the people and groups that find and report new vulnerabilities. For example, a company may encourage submissions for all sites it owns but explicitly exclude any customer websites hosted on its infrastructure.
  • "We Will Take No Legal Action If": Spells out which research activities and actions will, and will not, result in legal action, so researchers know what conduct the company authorizes.
  • Communication Mechanisms and Process: Allows a company to clearly identify how researchers should submit their vulnerability reports (e.g., secure web form or email).
  • Nonbinding Submission Preferences and Prioritizations: Sets expectations for preferences and priorities about how a company will evaluate reports. It also lets researchers know which types of issues are considered important. Typically, an organization's support and engineering team maintains this dynamic document.
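One common way to advertise the communication mechanism a VDP describes is a security.txt file (standardized in RFC 9116), served at /.well-known/security.txt on the organization's website. A minimal sketch, with placeholder addresses for a hypothetical example.com:

```
# Hypothetical security.txt for example.com (all URLs are placeholders)
Contact: mailto:security@example.com
Contact: https://example.com/vulnerability-report
Expires: 2026-12-31T23:59:59Z
Encryption: https://example.com/pgp-key.txt
Policy: https://example.com/vdp
Preferred-Languages: en
```

The Contact and Expires fields are required by the RFC; the Policy field can point directly at the organization's published VDP.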

In their VDPs, companies can also let finders know when they can publicly talk about vulnerabilities. For example, an organization may state that a finder cannot publicly disclose the vulnerability:

  • until it's fixed
  • until a certain length of time has passed since a report was first submitted
  • until the finder has given the organization X days of notice
  • except on a mutually agreed-upon (or negotiated) timeline that may be modified as part of the process with the disclosing party

Vulnerabilities reported to Carnegie Mellon University Software Engineering Institute's CERT are forwarded to the affected vendors "as soon as practical after we receive the report."

Security researchers still don't agree on exactly what constitutes "a reasonable amount of time" for a vendor to patch a vulnerability before full public disclosure, though many vendors consider a 90-day deadline acceptable. In 2010, Google recommended a 60-day deadline to fix a vulnerability before full public disclosure, seven days for critical security vulnerabilities and fewer than seven days for critical vulnerabilities under active exploitation. In 2015, however, Google extended that deadline to 90 days for its Project Zero program.

Disclosure deadlines can vary among vendors, researchers and other organizations. Vulnerabilities reported to the CERT Coordination Center are disclosed to the public 45 days after they're first reported, whether or not the affected vendors have issued patches or workarounds.

Extenuating circumstances such as "active exploitation, threats of an especially serious (or trivial) nature or situations that require changes to an established standard" can affect CERT's deadlines. The coordination center may make an open disclosure of a software vulnerability before or after the 45-day time frame in some cases.

Vulnerability disclosure process

Although there's no formal industry standard when it comes to reporting vulnerabilities, disclosures typically follow the same basic steps:

  • A researcher discovers a security vulnerability and determines its potential impact. The finder then documents the vulnerability's location via pieces of code or screenshots.
  • The researcher develops a vulnerability advisory report detailing the vulnerability and including supporting evidence as well as a full disclosure timeline. The researcher then securely submits this report to the vendor.
  • The researcher usually allows the vendor a reasonable amount of time to investigate and patch the vulnerability, according to the full disclosure timeline set out in the advisory.
  • Once a patch is available or the disclosure timeline -- plus any extensions -- has elapsed, the researcher publishes a full disclosure analysis of the vulnerability, including a detailed explanation of the flaw, its impact and its resolution.
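There is no mandated format for the advisory report described in the steps above, but one covering those elements might be structured like this (the field names, dates and severity figure are illustrative, not a standard):

```
Title:        [Vulnerability class] in [product] [affected versions]
Reported:     2017-10-02 (date report was submitted to the vendor)
Severity:     High (e.g., a CVSS v3 base score estimate such as 8.1)
Summary:      One-paragraph description of the flaw and where it lives.
Reproduction: Step-by-step instructions, plus proof-of-concept code or
              screenshots pinpointing the vulnerable component.
Impact:       What an attacker can achieve (data exposure, code
              execution, denial of service, etc.).
Remediation:  Suggested fix or workaround, if known.
Timeline:     Planned full disclosure date (e.g., 90 days after the
              report) and any agreed extensions.
```

Keeping the timeline in the report itself gives both parties a shared record of when public disclosure is expected.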

Branded vulnerabilities

Recently, security researchers have increasingly begun to brand their vulnerability disclosures, creating catchy vulnerability names, dedicated websites and social media accounts for the flaws they find, often complete with academic papers describing the vulnerabilities and even custom-designed logos.

Some prominent branded vulnerabilities of recent years include "ImageTragick," the name applied to a set of vulnerabilities in the open source ImageMagick image processing library; "Badlock," a flaw that affected Windows and the Samba server software; "HTTPoxy," a set of vulnerabilities in applications that use HTTP proxy headers; and the "KRACK" attack on WPA2 authentication over Wi-Fi.

The information security community is split on whether such efforts are appropriate. Researchers who promote branded vulnerabilities are sometimes seen as hyping their own research, whether or not the flaws are actually serious; others object when a well-funded public relations campaign for one vulnerability distracts the public from equally serious flaws disclosed without extensive publicity.

This was last updated in October 2017
