Within minutes of its release, SQL Slammer caused Internet connectivity to drop 15 percent globally and by as much as 60 percent in some locations. It was particularly nasty for large corporations with multiple intranet connections to partners and suppliers. Latency tripled, and many applications timed out. More than 13,000 Bank of America ATMs went down for several hours.
And yet, like hundreds of worms and viruses before it, Slammer need not have happened at all.
While media reports largely focused on the technical reasons why organizations were vulnerable to Slammer, few discussed the root problem: the person who released the worm. This person violated a fundamental ethical rule--Kant's Categorical Imperative, which cautions us not to act in a way that we wouldn't want everyone else to act. If we all behaved in such a manner, the Internet would be unusable. Indeed, it wouldn't exist.
But part of the responsibility also rests with the person who made a conscious decision to move the vulnerability from "known to few, and not a problem" to an attack that crippled the Internet. The worm was based on code written and published by David Litchfield of NGSSoftware. Litchfield eagerly shared his knowledge and his work. The result of his labors made it easy for someone without his knowledge and skill to exploit the vulnerability in disastrous ways. The public release of vulnerability information--regardless of whether it has a corresponding fix--is often performed by self-styled "security researchers" for small, obscure firms. Large, prestigious firms never do it. These small firms have concluded that they will gain more business from the recognition than they will lose from the notoriety. To the extent that we contribute to that belief, we share part of the responsibility.
Those who publish vulnerabilities claim they do so in the name of security. They insist that vendors, Microsoft in particular, wouldn't otherwise be motivated to produce quality code or fix vulnerabilities. They claim that they are bound by professional ethics to do so because professionals share their knowledge.
This problem isn't new. Most professions have to cope with how to share information with the good guys while not leaking it to the bad guys. Most have come down in the same place: the professional shares his knowledge, skills and abilities with his principals and his peers. Not only is he not obligated to share with others, but in most cases he is ethically prohibited from doing so.
Information security is no different in this sense from other professions, yet the "open disclosure" debate rages on.
After Slammer hit, Litchfield reportedly regretted publicizing the vulnerability. "We often forget that our actions online can have very real consequences in real life--the next big worm could take out enough critical machines that people are killed," he wrote. "I don't want to feel that I've contributed to that." Later reports suggested that he changed his mind, prompted in part "by the hundreds of e-mails...encouraging [him] to keep publishing exploits."
Most of us learn what we need to know about ethical behavior in the sandbox and kindergarten. From a very young age, we intuitively know the difference between right and wrong, and we behave well out of habit.
But neither intuition nor habit serves us well when it comes to knowing what's ethical in an environment like the Internet. Here we need analysis, analogy and history. Let us hope that it won't take someone's death for Litchfield and others to understand and apply these lessons in support of the common good.
About the author:
William H. Murray is a management consultant and trainer in information assurance specializing in policy, governance, and applications.