Every day, in computer virus research laboratories around the world, the same scene is repeated by dozens of specialists. While the specific procedures used may differ, essentially the same work is carried out in a redundant, outdated attempt to create some commercial advantage for each company paying the costs of maintaining these teams. It is time to consider a fresh approach that will benefit not only the consumers of antivirus products, but the researchers and their employers.
Department of Duplication Department
Each computer virus research lab uses a fairly standardized routine to handle new viruses. When a sample of a suspicious file is received, it enters the processing queue. The file gets assigned to a virus researcher, who then uses the various tools at his disposal to perform an analysis of the file. If it is found to be a virus or other similar malicious code, further research is conducted to determine its characteristics as well as to develop detection capabilities. The time and effort involved varies greatly, depending on the skills and experience of the researcher, the tools available, the workload and the difficulty of the sample under analysis.
Some virus research labs have introduced automatic pre-analysis tools using neural networks and artificial intelligence, which attempt to determine whether a sample is a virus without human intervention. Based upon my tests and feedback from users, this has not been all that successful.
With every single new malicious code sample being analyzed by at least 30 different teams of researchers around the world, one might think that the competition has resulted in an increase in excellent analysis. Unfortunately, this is not the case. Pick any recent critter that has caused problems as your control and compare the descriptions on each antivirus vendor's Web site. These descriptions will differ, they will contradict one another, they will vary hugely in the quantity and quality of their information and, in many cases, they will be incomplete.
Revisit the descriptions one year later, and you'll find that often nothing has changed. Off-the-record comments from researchers I've spoken with indicate that the pressure to handle the constant inflow of samples makes it very hard to find the time to correct mistakes or add information that has arrived in the meantime.
All together now
I'd like to propose that virus research labs stop duplicating their efforts and find ways to create a new, more efficient model for handling new viruses. It is time for the various antivirus companies to consider the concept of sharing the virus analysis load. (Given the politics and sharp knives common to the industry, I've put on my asbestos underwear and bulletproof clothing for the rest of this article.)
Over the years, there have been several attempts to establish something as simple as a program for sharing virus samples. Each has sadly failed to meet its goals, mainly due to problems surrounding issues of trust, marketing exploitation of new samples and questions of competence. As a byproduct, the amount of information shared amongst virus researchers is determined mainly by personal connections and informal alliances. There are some industry-insider mailing lists which, if one knows the secret handshake and has the right connections, one can get invited to join. Meanwhile, the virus problem gets worse and worse.
The sad thing is that there are more than enough qualified virus experts out there to do a wonderful job of dealing with the malicious code problem. The unfortunate thing is that they are spread across too many companies and have their hands tied with regard to how much they can officially help each other. Wary of the Not-Invented-Here Syndrome effect prevalent amongst smart people like those doing virus research, I have to be careful not to provide an actual answer, lest the powers that be reject it. With that in mind, I'd like to merely suggest that some sort of cooperative research group be established, to which each antivirus company (and others) could second one or more researchers for a year or two. This research group would take on the task of analyzing new malicious code, providing all participating members with whatever information they need to detect it using their product.
This group could also provide in-depth information on each piece of malicious code to the general public via a Web site. They could publish relevant information for the user community to be able to detect and stop these critters using non-antivirus technologies (filtering, for example).
They could also publish detailed information on how the critter affects infected systems (not the details that help other virus authors, but those that help in the clean-up and defense).

There may be those who will reject out of hand the notion of a total centralization of the virus analysis effort. I'd like to suggest that they start up their own consortium to compete with the first one and drive up the quality while lowering costs through efficiency. Even if we ended up with four of these groupings, we'd be better served than by having as much duplication as we have today.
Reducing the number of researchers performing duplicate work would open the door for those remaining to dedicate themselves to more detailed analysis, to improving the existing documentation on malicious code in the wild, to more thorough education efforts on prevention, and to whatever else they might dream up to help users fight virus attacks.
About the author
Robert Vibert is author of "The Enterprise Anti-Virus Book" and more than 180 articles on computer security and management. He currently serves as moderator of the AntiVirus Information Exchange Network (www.avien.org) and occasionally can be spotted performing benchmark audits of malware defenses in large organizations.
This was first published in May 2003