When Microsoft released an emergency patch last month for a critical vulnerability in the server service in Windows, administrators and security teams in enterprises around the world scrambled to test the fix, schedule downtime and get the patch distributed as quickly as possible. If ever there was an occasion to use all due haste in deploying a patch, this was it. Not only was the vulnerability present in every supported version of Windows, but Microsoft officials had warned that it was a prime candidate for a worm.
But by the time IT staffs and end users got their hands on the MS08-067 patch, it was too late. Attacks against the vulnerability were already happening in the wild, and Microsoft itself had learned about the flaw by observing and receiving reports of those attacks. And within hours of the patch hitting the streets, a worm exploiting the flaw was seen as well. In short, the attackers had a long head start on the rest of us.
This is not exactly a recent development. Anyone who has been involved in the security world for any length of time understands implicitly that defense is by necessity a reactive discipline. An attacker makes a move, you react and make a countermove. And so on and so on. This leads inevitably to the messy, inefficient security model that we have now: A new threat arises, a new product/technology/technique emerges to address that threat. Lather, rinse, repeat.
The faulty assumption in all of this, however, is that our reactive moves are keeping us on an even keel with the attackers. The truth is, not only have the attackers won the game, it was never really a contest to begin with. The game was rigged from the start.
For a security team, securing a given network is a never-ending task. Each time a new vulnerability is revealed, the team must get right to work identifying and patching every vulnerable machine on the network, or risk being compromised. This cycle repeats over and over for flaws in operating systems, applications, hardware and even DNS. But patching new vulnerabilities—or more accurately, newly publicized vulnerabilities—is just window dressing. Attackers will happily go after new flaws, especially if there is reliable exploit code available and plenty of targets from which to choose.
But there are so many old and unpublished vulnerabilities available to attackers that there's little need for skilled, professional hackers to even bother with the new flaws on the block. Consider again the MS08-067 vulnerability. A couple of weeks after the patch for that problem was released, Microsoft shipped its usual monthly batch of fixes for November, which included a patch for a problem in its Server Message Block protocol. That flaw was first identified more than seven years earlier and had been discussed in detail on mailing lists and in security advisories. The problem was well understood, and Microsoft acknowledged the weakness but had been unable to fix it without breaking a number of other things.
So the flaw remained unpatched and millions of corporate systems remained at the mercy of the attackers. And that's just one of an unknowable number of vulnerabilities floating around out there in the ether, at the disposal of whoever has the good fortune of stumbling across them. While that number may be unknowable, it certainly is not insignificant. A number of successful businesses are built specifically on the ability to find and exploit these vulnerabilities in corporate networks before the attackers do. And it is a very lucrative business.
I had lunch recently with several security researchers who, among them, have done hundreds of penetration tests and security assessments, and they said there was no shortage of zero-day vulnerabilities out there. They spoke casually of the number of flaws available for sale and how many unpatched vulnerabilities they had access to. And they agreed on two things: the threats you know about are not the ones you need to worry about, and every network is own-able. Every. Single. One.
The key point in all of this is that in order to be successful, an attacker needs just one unprotected vulnerability. He doesn't need a massive, Cheesecake Factory-size menu of flaws to choose from; all he needs is the one exposed soft spot, and he's off to the races. Security teams, on the other hand, need to protect against every possible attack vector. Make one small mistake and you're in line to be the next TJX. It is not a fair fight.
Does that mean it's time to stop fighting? To some degree, I think the answer is yes. If you accept the premise that it's not possible to protect every asset (or even protect any single asset completely), then the logical action is to identify the most valuable assets and secure them to the best of your ability. Many organizations have been doing this kind of prioritization and triage for years, but a lot of others are still desperately running around, trying to patch every box every time. That strategy is often driven by regulatory compliance these days, and can wind up being counterproductive, taking time and resources away from the truly critical operations. With regulatory pressure likely only to increase in the coming years, it's unlikely we'll see a major shift in this thinking in the near future.
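The prioritization-and-triage idea can be sketched in a few lines: rank assets by a crude risk score (say, business value times exposure) and work the list from the top, rather than trying to patch every box every time. The asset names and weights below are hypothetical illustrations, not a real scoring model; actual programs weigh far more factors.

```python
def prioritize(assets):
    """Rank assets by a simple risk score: business value times exposure.

    Highest-risk assets come first, so limited patching resources go to
    the systems that matter most. Purely illustrative.
    """
    return sorted(assets, key=lambda a: a["value"] * a["exposure"], reverse=True)

# Hypothetical inventory: value and exposure on arbitrary 1-10 scales.
assets = [
    {"name": "intranet wiki", "value": 2, "exposure": 3},
    {"name": "payment database", "value": 10, "exposure": 4},
    {"name": "public web server", "value": 6, "exposure": 9},
]

for asset in prioritize(assets):
    print(asset["name"], asset["value"] * asset["exposure"])
# The public web server (score 54) outranks even the payment database (40):
# exposure can matter as much as raw asset value.
```

The point of even a toy model like this is that triage is explicit and repeatable, instead of the reflexive patch-everything scramble the column describes.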
And so we'll go on fighting a war that was decided before the first shot was fired.