The vast majority of intrusion-detection systems (IDS) available today allow system administrators some degree of flexibility in configuring the sensitivity of the system's detection and reporting algorithms. How sensitive should your IDS be? There is no easy answer: no single setting applies to every situation or organization, as varying circumstances dictate different security postures. However, with a thorough understanding of the metrics used to evaluate these systems, you'll be better able to determine the appropriate balance for your situation.
There are three main measures of IDS performance:
- The False Positive Rate is the frequency with which the IDS reports malicious activity in error. These errors are the bane of a security administrator's existence: they're the "nuisance reports" that require investigation but lead to a dead end. The true danger of a high false positive rate is that it may cause administrators to ignore the system's output when legitimate alerts are raised. You may also see false positives referred to as "Type I errors," a term borrowed from statistical hypothesis testing and familiar from medical research. Generally speaking, increasing the sensitivity of an intrusion-detection system raises the false positive rate, while decreasing the sensitivity lowers it.
- The False Negative Rate is the frequency with which the IDS fails to raise an alert when malicious activity actually occurs. These are the most dangerous errors, as they represent undetected attacks on a system. The corresponding statistical term is a "Type II error." False negative rates trade off against false positive rates: tuning a system to lower one generally raises the other.
- The Crossover Error Rate (CER) provides a baseline measure for comparing intrusion-detection systems. Because sensitivity settings cause the false positive and false negative rates to vary, it's critical to have a common measure that can be applied across the board. The CER for a system is determined by adjusting the system's sensitivity until the false positive rate and the false negative rate are equal, as shown in the figure below. You may then evaluate several different IDSs by running them on the same network and measuring the CER for each. If you're interested in achieving a balance between false positives and false negatives, simply select the system with the lowest CER. On the other hand, if detecting every single attack is of the utmost priority, you may still wish to select the system with the lowest false negative rate, recognizing that this choice may increase the administrative overhead associated with false positive reports.
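The trade-off these three metrics describe can be sketched numerically. The Python below is a minimal illustration, not part of the original column: the anomaly scores and threshold values are invented for demonstration, and it assumes a score-based IDS that alerts whenever an event's score meets the sensitivity threshold. It sweeps the threshold and estimates the CER as the setting where the false positive and false negative rates come closest to equal.

```python
# Illustrative sketch of estimating a crossover error rate (CER) for a
# hypothetical score-based IDS. An event alerts when score >= threshold.

def rates(benign, malicious, threshold):
    """Return (false_positive_rate, false_negative_rate) at a threshold."""
    fp = sum(1 for s in benign if s >= threshold)    # benign events flagged in error
    fn = sum(1 for s in malicious if s < threshold)  # attacks that go unreported
    return fp / len(benign), fn / len(malicious)

def crossover_error_rate(benign, malicious, thresholds):
    """Sweep sensitivity settings; return (threshold, rate) where the
    false positive and false negative rates are closest to equal."""
    def gap(t):
        fpr, fnr = rates(benign, malicious, t)
        return abs(fpr - fnr)
    best = min(thresholds, key=gap)
    fpr, fnr = rates(benign, malicious, best)
    return best, (fpr + fnr) / 2

# Synthetic anomaly scores, purely for demonstration.
benign_scores = [0.1, 0.2, 0.25, 0.3, 0.35, 0.4, 0.55, 0.6]
malicious_scores = [0.45, 0.5, 0.65, 0.7, 0.75, 0.8, 0.9, 0.95]

threshold, cer = crossover_error_rate(
    benign_scores, malicious_scores, [i / 20 for i in range(21)])
```

Raising the threshold here plays the role of lowering sensitivity: fewer nuisance alerts, but more missed attacks. The CER is simply the error rate at the setting where those two curves cross.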
Hopefully, these metrics have given you a good idea of how to properly evaluate an IDS. Keep in mind that there is no single solution suitable for every situation. You must carefully analyze your security requirements and determine the appropriate balance of false positive and false negative potential for your environment.
About the author
Mike Chapple, CISSP, currently serves as Chief Information Officer of the Brand Institute, a Miami-based marketing consultancy. He previously worked as an information security researcher for the U.S. National Security Agency. His publishing credits include the TICSA Training Guide from Que Publishing, the CISSP Study Guide from Sybex and the upcoming SANS GSEC Prep Guide from John Wiley. He's also the About.com Guide to Databases.
This was first published in August 2003