I will never understand the claim that someone only wrote a virus or other computer attack as a "proof of concept."
Authors of a proof of concept virus for cell phones recently claimed they just wanted to demonstrate that it could be done, and said it benefits the computer security field. Another company wanted to prove that you can bypass the security of Windows XP SP2. My response: Please don't do me any favors.
So-called proofs of concept have been around for a while. While it is otherwise completely legitimate to search for security flaws in products, putting together a proof of concept, then distributing and publicizing it, is just plain wrong. Frankly, these people are enablers to criminals and vandals around the world.
Hopefully no one ever assumed that cell phones were immune from viruses. Adding Bluetooth and infrared connectivity to devices containing computer processors and software that were built to freely communicate over public telephone lines is clearly not going to help the situation. Any security professional should be well aware that any computer can eventually be abused.
There is a big difference between responsibly finding and reporting vulnerabilities, and going the extra step to put out proof of concept code. Finding vulnerabilities and getting them fixed is clearly important. When done responsibly, discoverers tend to be acknowledged in the associated vendor alerts and, if significant, tech publications as well.
However, more attention is generated by distributing actual attacks that compromise systems. When someone releases a new attack, especially within the first three months after a patch becomes available, it is done solely for exposure, or to be perceived as elite.
There are rare exceptions, such as when affected vendors or Web sites ignore multiple communications describing the problems. However, distributing an attack before users have had a reasonable time to study the problem and test and implement the fixes is completely irresponsible. This can take upwards of a month after the release of a patch, even in some of the most diligent organizations. There is always a veiled claim that distributing the attack allows administrators to better study the issue, but they need to mitigate the problem, not study it.
Anyone who claims that security professionals need access to the attacks so that they can test their clients for susceptibility to the exploit doesn't understand the true job of a security professional. Security professionals need to test for the presence of the underlying vulnerability, but this can be done with a scanning tool or examining the software version and settings -- it doesn't require the exploit.
Some perform penetration testing and may legitimately need to use the attack, but I would contend that these people should be capable of writing their own attack after reviewing the documentation, of using commercially available tools, or of using other exploits to accomplish their mission. The benefit provided by one legitimate use does not outweigh the large-scale malicious use of an attack by hackers around the world. If the attack is incorporated into worms, as happened with the Blaster worm, the damage runs into the billions of dollars.
I like to equate the work of security practitioners to that of doctors. Most of our work should be preventative. Sadly, like doctors' patients, ours ignore our advice, and we end up treating diseases. This also means that we should strive to "first, do no harm."
Some horror movies work on the premise that a doctor wants to create a miracle cure, but also creates a terrible disease to "study it." These well-meaning researchers inevitably let the disease escape, or more frequently a lunatic or terrorist purposefully releases the disease, creating havoc. Security professionals should not create horror stories just for the sake of getting the credit.
About the author
Ira Winkler, CISSP, CISM, has almost 20 years of experience in the intelligence and security fields and has consulted to many of the largest corporations in the world. He is also author of the forthcoming book, Spies Among Us.