Our antivirus deal is almost up and we're evaluating new products and services. Not surprisingly, we have found that each vendor says it has the best research team. Are there metrics we can request from the vendors so that we can determine which research team is, in fact, more successful at identifying malware? Or is that even necessary? Has anti-malware reached the point where, similar to weather forecasting, they are really all the same and working off the same data?
Ask the expert!
Have questions about enterprise information security threats for expert Nick Lewis? Send them via email today! (All questions are anonymous).
It sounds like your organization is doing its due diligence by critically evaluating anti-malware products to determine the best fit. That's a good thing. Given how frequently anti-malware protections change, every enterprise choosing an antivirus product should conduct a thorough comparison and select the most cost-effective product that provides the necessary protection. Many vendors have extended their traditional antivirus products into endpoint protection suites. A number of organizations evaluate the effectiveness of anti-malware protection, including Virus Bulletin, AV-Comparatives, AV-Test, West Coast Labs and Gartner Inc.
The best AV research team might help drive product innovation and advanced research, but research should be only one part of an antivirus evaluation. In the anti-malware research community, vendors share samples, and each vendor analyzes a sample to determine how to detect, block or remediate the malware, but little about this process is publicly documented. Detecting malware first does give a vendor a head start in producing protections, but that advantage is lost if the update never reaches the endpoint.
Several metrics can be used to evaluate the effectiveness of different anti-malware research teams by looking at features of the vendors' products: the number of false positives, detection vs. blocking, and the average time from customer malware submission to detection deployed on the endpoint. The number of false positives reveals possible issues with the vendor's quality assurance process. Detection vs. blocking indicates the effectiveness of the malware analysis, and whether the vendor can create a detection signature but cannot fully determine what is needed to clean an infected system. The average time from customer malware submission to deployed detection matters most when an enterprise is hit with custom malware: the faster a submitted sample becomes a deployed detection, the sooner that enterprise's endpoints are protected.
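To make these trade-offs concrete, the three metrics can be tabulated side by side for each candidate vendor. The Python sketch below is purely illustrative: the vendor names, field names and all numbers are made-up assumptions, not data from any real product or test lab.

```python
# Hypothetical vendor-reported figures; every number here is an
# illustrative placeholder, not real product data.
vendors = {
    "VendorA": {"false_positives": 12, "blocked": 970,
                "detected_only": 20, "missed": 10,
                "avg_turnaround_hours": 6.0},
    "VendorB": {"false_positives": 4, "blocked": 940,
                "detected_only": 45, "missed": 15,
                "avg_turnaround_hours": 10.0},
}

def summarize(v):
    """Reduce one vendor's raw counts to the three metrics discussed above."""
    total = v["blocked"] + v["detected_only"] + v["missed"]
    return {
        # Detection vs. blocking: detecting a sample is not the same
        # as being able to block or clean it.
        "block_rate": v["blocked"] / total,
        "detect_rate": (v["blocked"] + v["detected_only"]) / total,
        # False positives as a rough quality-assurance signal.
        "false_positives": v["false_positives"],
        # Average time from customer submission to deployed detection.
        "avg_turnaround_hours": v["avg_turnaround_hours"],
    }

for name, raw in vendors.items():
    s = summarize(raw)
    print(f"{name}: block {s['block_rate']:.1%}, "
          f"detect {s['detect_rate']:.1%}, "
          f"FPs {s['false_positives']}, "
          f"turnaround {s['avg_turnaround_hours']:.1f}h")
```

Note that no single column picks a winner here: one hypothetical vendor blocks more but has more false positives, the other detects without always blocking but turns samples around more slowly, which is exactly why the metrics should be weighed against each enterprise's own priorities.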