Let us now praise the efforts of noble men. Dan Kaminsky, Paul Vixie, CERT and nameless dozens of engineers and admins at ISPs and backbone providers around the world did a tremendous job pulling together a massive, coordinated response to the DNS vulnerability Kaminsky discovered.
The right people were notified quietly, the problem was explained, a fix was devised, and the patch was applied in all the critical spots in an astonishingly short amount of time. And while Kaminsky took heat for over-hyping the severity of the problem in the hopes of pumping up the attendance at his Black Hat talk, other researchers who had been briefed on the problem came forward and said, "Look, this is a serious problem. Go patch. Right now." It looked like everything had worked out smoothly and the furor was starting to die down as the community waited for Kaminsky to release the gory details next month.
And then in the space of a few hours on Monday, all hell broke loose. First, noted reverse engineer Halvar Flake posted a description of what he thought the DNS flaw was. He hadn't been briefed on the details, and said in his post that he might have been wrong about how the exploit worked. But it turned out he was right. Then things got really interesting. Tom Ptacek of Matasano Security LLC, who had gotten the skinny on the vulnerability from Kaminsky, posted a confirmation of Flake's analysis, along with some of his own thoughts on the problem. Ptacek quickly retracted the post and later apologized for publishing it, saying the error was "painful both personally and professionally."
The good news in all of this is that the cat didn't claw its way out of the bag until well after Kaminsky et al. had done their work behind the scenes to ensure that the most important DNS servers were protected. The less-than-good news is that word got out, as it nearly always does, underscoring the fact that the chances of keeping a lid on something of this magnitude in today's hyper-connected world are approaching zero. Even the best efforts of a lot of smart, well-meaning researchers failed to prevent it, through no fault of their own.
Which brings us to the real crux of the issue: Is there still some value in the kind of controlled disclosure policy that Kaminsky tried to follow? I've argued both sides of this question with researchers, vendor security officials, enterprise security managers and hackers dozens, if not hundreds, of times in the last decade, and at this point, I think the answer is a qualified no.
Let's assume that a researcher discovers a new vulnerability in a widely deployed piece of software. Let's further assume that the researcher's heart is pure and he wants to keep the specifics private until the affected vendor can push a fix to its customers. But in advance of the patch's availability, some basic information on the vulnerability makes its way out. The researcher posts it in a forum or on his blog, or the vendor publishes some advance notification about the fix. Very quickly, people on both sides of the fence start investigating the problem, and it's a pretty safe assumption that some of them are going to figure it out.
By going this route, the researcher and the vendor essentially are betting that no attacker will discover the vulnerability before the patch is available. That's a really bad bet these days. There are simply too many smart, well-financed attackers out there with more than enough time and resources to throw at the problem. For something as serious and widespread as the DNS vulnerability, the potential payoff is enormous, far outweighing whatever time and money it costs to do the upfront work.
Flake, who is known as much for his integrity as his technical skill, argues the point similarly, and probably more convincingly. "I know that Dan asked the public researchers to 'not speculate publicly' about the vulnerability, in order to buy people time. This is a commendable goal. I respect Dan's viewpoint, but I disagree that this buys anyone time (more on this below)," he wrote in his post about the DNS issue. "I am fully in agreement with the entire way he handled the vulnerability (e.g. getting the vendors on board, getting the patches made and released, and I understand his decision not to disclose extra information) except the proposed 'discussion blackout.' In a strange way, if nobody speculates publicly, we are pulling wool over the eyes of the general public, and ourselves."
In short, just because the good guys don't know the details and aren't talking publicly about what they think the details are doesn't mean that the bad guys don't already know or aren't trying hard to find out. (As there's almost no way to know whether some subset of attackers knew about the DNS flaw before Kaminsky discovered it, we'll avoid going down that rabbit hole.)
So does this mean that researchers should give up the goods as soon as they find a new flaw? No. But what it does mean is that no matter how good their intentions and how hard they try to prevent it, researchers—and admins—have to assume that the details of any new vulnerability either are already known in the hacker community or soon will be. It also means that any effort to prevent others in the legitimate security community from working out the problem is a waste of time. Smart, intellectually curious people are attracted to a difficult problem. It's just a matter of time before one of them solves it.