The past few months have seen a lot of activity around some really serious Internet-level vulnerabilities: the DNS cache-poisoning flaw that Dan Kaminsky found, the clickjacking attacks from Robert Hansen and Jeremiah Grossman, and the recent news of new DoS attacks on the TCP stack by Robert E. Lee and Jack C. Louis. Each of these problems got a lot of attention in the press and in the vendor and research communities, and rightly so. They're all serious problems. But they have one other thing in common: They were all the subjects of so-called partial disclosure efforts. In each case, some details of the vulnerability were released, and then the researchers involved said the problems were too serious to discuss fully until patches, workarounds or other fixes were available.
This disclosure model, whether intentional or accidental, is clearly not the optimal way to handle new vulnerabilities. Ideally, a researcher finds a bug and tells the affected vendors, the vendors produce patches in a timely manner, and the details come out later. Everyone lives happily ever after. But it doesn't always work that way. Sometimes word of the bug leaks out (Kaminsky's DNS flaw), and sometimes the researchers deliberately reveal some details of the bug for one reason or another (clickjacking and the TCP DoS attacks). Either way, the result is often that the small bits of information available drive speculation and doomsaying, which in turn bring out the people who say this bug is: A. Nothing new; B. Not as serious as it sounds; or C. Both. Sometimes that turns out to be the case. Other times, as with the DNS flaw, the problem is not only new but extremely serious. Either way, the partial-disclosure mode of operation exposes everyone involved to charges of fear-mongering and publicity seeking.
Now, Kaminsky is trying to halt this nascent trend by setting up a tribunal of trusted security experts — such as himself — to whom researchers can show details of bugs that they consider to be potential Internet-killers under the cover of a non-disclosure agreement. What they’ll do with the details and how they’ll disclose them if they deem the bug to be an Internet-killer isn’t clear. Here’s what Kaminsky says about his idea:
Members of this council will have to have publicly presented work in the subject area that is under consideration. I’ve spoken to a decent number of people, and everyone is somewhere between very pissed and legitimately afraid of a flood of unjustified partial disclosures.
Faced with an unending stream of “is the Internet dead yet?” Slashdot posts, everyone I’ve spoken to appears fully on board with providing an honest judgement regarding the legitimacy of findings.
Now, I expect we will reject, out of hand, almost all claims. But we will do so, with the full technical argument brought by the finder, rather than presumptions based on old flaws. Attacking the strawmen implied by partial disclosure is a losing scenario for literally everyone involved.
This is an interesting idea, especially given that it comes from Kaminsky himself, who has fallen on his sword repeatedly in the last few months for talking publicly about the DNS bug without having had anyone else in the security community review the details. (He eventually did give the details to Tom Ptacek and Dino Dai Zovi, who vouched for the seriousness of the vulnerability.) Dan’s rationale is mostly sound. He says that unless some kind of independent authority is set up to verify the claims of researchers who say they’ve found killer bugs, inevitably someone will game the system and simply do the following: claim to have a monster flaw, dole out a few juicy details to the press, then sit back while admins panic and rush off to buy security gear from the researcher’s company to fix the imaginary (or semi-real) problem.
Dan is exactly right in saying this scenario is a very real possibility. I’ve been writing about security for about eight years and I know a lot of the researchers and industry executives and other players well. I understand the technology pretty well, but I’m not an engineer or a computer scientist, so I rely on the people I talk to for explanations and context. So it’s certainly not out of the realm of possibility that a researcher could take me or any other reporter for a ride with a description of a fictional bug or attack. That’s why I check these stories with experts I know and trust. That’s the best defense.
But there's another factor in play that I think militates against what Dan is worried about, and that's the fact that any researcher pulling that kind of stunt has far more to lose than to gain. Let's use Dan as an example. He has spent a lot of years building up his reputation in the security community, and people tend to take what he has to say on certain issues seriously. So if he uses that credibility to hype some bug that turns out to be insignificant or even imaginary, any short-term gain he would've gotten from the publicity would be completely wiped out by the resulting backlash. For someone who is always in the news anyway, as Dan is, there's no percentage in that play. And even for an unknown researcher looking to make a name for himself, the negatives far outweigh the positives in that equation.
I agree with Dan's premise that partial disclosure is counterproductive in most cases, but I'm not sold on the idea of a Justice League of the Internet parceling out information as it sees fit. One reason things work relatively well right now is that the specter of public embarrassment for falsely hyping a bug looms large. And that's not likely to change anytime soon.