
Technology Forecast

Many organizations will find network anomaly detection systems (NADS) useful as complementary controls to firewalls and IDS/IPS implementations. However, convergence in both the technology and its vendors continues. Eventually, NAD technology will become a feature set of comprehensive network security suites.

NADS providers have begun adding the ability to accept other data sources, such as firewalls, creating an overlap with security event management (SEM) products. SEMs, meanwhile, are adding statistical analysis and are starting to overlap with NADS. Finally, IDS/IPS vendors such as Sourcefire and Enterasys Networks (through a partnership with Q1 Labs) are adding network flow analysis to their products' capabilities.

The convergence of anomaly detection with conventional security technologies is best illustrated by several emerging and established vendors. ConSentry Networks is blurring the line between signature-based IPS and NADS with a hybrid inline system that analyzes and controls traffic. Startup Intrusic operates out of band, but directly analyzes packets for high-value patterns of interest like reverse tunneling.

While these capabilities may not survive as standalone systems, they have proven their worth in helping organizations meet monitoring requirements. A decade of experience has taught us that monitoring is fundamentally difficult, and that separating good from bad behavior is a significant challenge. Network anomaly detection systems offer a different view of network activity, one that focuses on abnormal behaviors without necessarily designating them good or bad.

-PAUL PROCTOR

NADS Is Limited
Security incident detection typically falls into two categories: signature- and anomaly-based. These terms are so overused that they have lost all common understanding. Several technology implementations and techniques fall within these two categories, including protocol analysis, expert systems and artificial-intelligence techniques (neural nets, Markov chaining, etc.), but each category has some fundamental properties.

Signature mechanisms detect predefined patterns in data: event A followed by event B followed by event C. If implemented and tuned properly, signature mechanisms produce relatively few false positives. Of course, they will only detect attacks and incidents for which they have signatures, so they must be kept up to date.
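
As a toy illustration of this kind of pattern matching, the sketch below scans an event stream for a predefined ordered sequence. The event names and the signature itself are hypothetical; real engines compile large rule sets and match on packet or log content, but the underlying idea is the same.

```python
# Minimal signature-matching sketch: flag when the signature's events
# appear in order (not necessarily adjacent) in an event stream.
# The signature and event names are hypothetical examples.

SIGNATURE = ["port_scan", "login_failure", "privilege_escalation"]

def matches_signature(events, signature=SIGNATURE):
    """Return True if the signature occurs as an ordered subsequence."""
    position = 0  # index of the next signature element we need to see
    for event in events:
        if event == signature[position]:
            position += 1
            if position == len(signature):
                return True
    return False

stream = ["dns_query", "port_scan", "http_get",
          "login_failure", "privilege_escalation"]
print(matches_signature(stream))  # True: all three events occur in order
```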

Anomaly mechanisms model good behavior and look for deviations from the baseline. For example, if observation of network traffic shows that event B has always followed event A, then the reverse order (A following B) would be considered anomalous. This approach can be rife with false positives because it is difficult to model all possible good behavior. It requires stable systems that maintain a consistent model of what constitutes good behavior; the trick is figuring out which events should be measured and which deviations are significant.
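
To make the ordering example concrete, here is a minimal sketch that learns which consecutive event pairs appeared during a baseline period and flags pairs never seen before. It is an illustration of the idea only, not any vendor's algorithm, and the event stream is made up.

```python
from collections import Counter

# Baseline observations: B always follows A, and C always follows B.
baseline = ["A", "B", "C", "A", "B", "C"]
seen_pairs = Counter(zip(baseline, baseline[1:]))

def is_anomalous(prev_event, event):
    """A pair never observed during the baseline period is flagged."""
    return (prev_event, event) not in seen_pairs

print(is_anomalous("A", "B"))  # False: B has always followed A
print(is_anomalous("B", "A"))  # True: A has never followed B
```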

True anomaly detection means that observable (normal) behavior is modeled, and departures from that behavior are flagged. For example, certain identifiable protocols (TCP, IP, HTTP, ARP) are present at certain times (hour of the day, day of the week) and at certain throughputs (100, 200, 500 Mbps). Once a baseline of behavior has been established, an anomaly is anything that departs from this pattern, such as the FTP protocol appearing or traffic volume changing during the day. These anomalies ostensibly indicate occurrences on the network that may be of interest to the security team.
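
A minimal sketch of this kind of baselining, using made-up flow summaries: record which protocols appear in each hour and their typical throughput, then flag observations outside the learned ranges. The record fields, sample data and tolerance are illustrative assumptions.

```python
from collections import defaultdict

# Baseline observations: (protocol, hour_of_day, throughput_mbps).
baseline = [("HTTP", 9, 200), ("HTTP", 10, 220), ("ARP", 9, 5),
            ("HTTP", 9, 210), ("ARP", 10, 6)]

# Learn the throughput range seen for each (protocol, hour) pair.
profile = defaultdict(lambda: [float("inf"), float("-inf")])
for proto, hour, mbps in baseline:
    low, high = profile[(proto, hour)]
    profile[(proto, hour)] = [min(low, mbps), max(high, mbps)]

def check(proto, hour, mbps, tolerance=1.5):
    """Flag unseen protocol/hour pairs and throughput far outside range."""
    if (proto, hour) not in profile:
        return "anomaly: protocol never seen at this hour"
    low, high = profile[(proto, hour)]
    if mbps > high * tolerance or mbps < low / tolerance:
        return "anomaly: throughput outside learned range"
    return "normal"

print(check("FTP", 9, 50))    # FTP never appeared at 09:00
print(check("HTTP", 9, 600))  # traffic volume spike during the day
print(check("HTTP", 9, 205))  # within the learned baseline
```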

NADS use network flow data to track predefined measures, establishing a baseline of normal behavior against which to perform anomaly analysis. Most of these systems compute an index that expresses both how anomalous an observation is and how confident the system is that the activity is relevant and security-related (a simple scoring sketch follows the list below). NAD's implied advantages include reduced tuning complexity, because there are no signatures for humans to mis-tune, and zero-day detection, because it doesn't rely on predefined signatures. However, the value of these advantages is significantly diminished by the volatility of enterprise networks: NADS suffer from high false-positive rates when "good" behavior can't be effectively modeled. Factors that affect network behavior modeling include the following:

  • The number of possible behaviors and event types. Larger numbers of behaviors and event types yield more combinations of good and bad, which adds to complexity.
  • The stability and consistency of the environment. A network with poor change control (common in many large enterprises) or a highly dynamic environment may have applications, partners or devices regularly changing traffic patterns, which reduces the model's reliability.
  • The stability and consistency of network activity. A network subject to bursts of activity of varying types will make modeling both good and bad behaviors difficult.
  • The reliability of bad behaviors. When unusual behavior isn't always bad, the gray areas can weaken the modeling and increase false positives. For example: Is it always bad when a computer that has never used a particular protocol starts to use it?
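
As mentioned above, most NADS reduce deviations to an index. The sketch below shows one hypothetical way such a score could be computed: the magnitude of the deviation, scaled by a confidence weight reflecting how well-established the baseline is and by the security relevance of the measure. The weighting scheme is an assumption for illustration, not any product's actual formula.

```python
def anomaly_index(observed, baseline_mean, baseline_samples,
                  security_weight=1.0):
    """Hypothetical anomaly index: deviation magnitude scaled by
    confidence in the baseline and by the security relevance of the
    measure. Higher scores mean more anomalous and more trusted."""
    if baseline_mean == 0:
        return 0.0
    deviation = abs(observed - baseline_mean) / baseline_mean
    # Confidence grows with the number of baseline observations,
    # capping at 1.0 once the model is well established.
    confidence = min(baseline_samples / 100.0, 1.0)
    return deviation * confidence * security_weight

# A 4x traffic spike against a well-established baseline scores high;
# the same spike against a sparse baseline scores much lower.
print(anomaly_index(observed=800, baseline_mean=200, baseline_samples=500))
print(anomaly_index(observed=800, baseline_mean=200, baseline_samples=10))
```

A volatile network, as described in the list above, keeps the baseline from ever becoming well established, which is exactly why false-positive rates climb in such environments.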

Anomaly detection is really only useful in stable environments where "normal" behavior can be effectively modeled. NADS vendors will admit that their products are only useful in controlled environments, such as a LAN, and unsuitable in unregulated settings, such as at the gateway. A good example is network protocol anomaly detection (provided by most contemporary IDS, IPS and NADS products), where the roughly 300 RFCs provide a model of normal behavior in which departures are significant and useful for detecting attack traffic. Of course, many application developers regularly violate the RFCs, further illustrating the limitations of modeling even within well-defined rules.

While one could argue that tracking a sufficiently specific set of criteria would yield more useful results, experience has shown that narrower criteria increase both the number of events that must be investigated and the number that aren't truly of interest. For example, it may be suspicious if the application development LAN initiates FTP transfers from the finance LAN, but treating new FTP connections as suspicious simply because they are anomalous means you will be alerted every time an FTP connection exceeds defined thresholds. Some of those connections will be genuinely interesting, suspicious and useful to know about; the majority, however, will be acceptable business activity.

NADS users report that narrowly defined activities, like a new service starting, a new protocol appearing or a new port opening, are interesting in theory, but that such granular anomaly detection isn't useful in a large enterprise where those activities are common. The usefulness of NADS in any particular environment is therefore a function of the network's stability, which behaviors are modeled, and the security manager's knowledge of the organization's network and of which deviations are meaningful. Enterprises that have had success with NADS are the ones that have done the most effective job of balancing and tuning these parameters.

This was first published in July 2005
