Filtering log data: Looking for the needle in the haystack

In this illustrated tip, network security expert David Strom demonstrates how to use a log-filtering tool to quickly make sense of voluminous log files.

Where there are logs, there is usually an overwhelming amount of log data. This makes it hard for an organization to spot security problems. How do you find the one packet among millions that indicates someone is sending proprietary information out of the enterprise?

Let's illustrate how it is possible to drill down and find that single suspect packet through a series of screenshots. As an example, we'll use NetIQ's Security Manager v6.0 to demonstrate the filtering process, but other vendors in this market offer similar interfaces and capabilities. Regardless of the product your organization uses, this tip will provide a blueprint for how to drill down and obtain the log information you need.

Let's start with the main dashboard screen. Here you can get a quick summary of all current security alerts, whether the various agents are operating properly, and what alerts have yet to be resolved.

If we drill into the open alerts, we see the next screen, which shows all unresolved alerts, as well as links to a corporate knowledge base.

In the illustration, an alert has been flagged because a user made changes to a sensitive file. As we move further into analyzing what happened with this file, the next screen shows what the file was, who renamed it, and which program (in this case, Windows Explorer) did the deed. In this screen, we also see that the event was "unmanaged," meaning that it wasn't authorized.

Many log-management tools also can collect reports from other network-protective devices, such as firewalls and IDSes. In the next screen, we see the collection of log data from a sample Cisco PIX firewall during the last two hours. The particular event that is highlighted in the screenshot is an access denied request.
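The kind of time-window filter shown in that screen can be sketched in a few lines. This is a minimal illustration, not NetIQ's implementation: the record format and field names here are hypothetical stand-ins for parsed firewall log entries.

```python
from datetime import datetime, timedelta

# Hypothetical, already-parsed firewall records; real PIX syslog lines
# would need proper parsing of the timestamp and message fields first.
events = [
    {"time": datetime(2007, 10, 1, 8, 15), "action": "deny",
     "src": "10.1.1.5", "dst": "192.168.0.9"},
    {"time": datetime(2007, 10, 1, 10, 40), "action": "permit",
     "src": "10.1.1.7", "dst": "192.168.0.2"},
    {"time": datetime(2007, 10, 1, 10, 55), "action": "deny",
     "src": "10.1.1.5", "dst": "192.168.0.9"},
]

def recent_denials(events, now, window=timedelta(hours=2)):
    """Return denied requests logged within the last `window`."""
    cutoff = now - window
    return [e for e in events
            if e["action"] == "deny" and e["time"] >= cutoff]

now = datetime(2007, 10, 1, 11, 0)
for e in recent_denials(events, now):
    print(e["time"], e["src"], "->", e["dst"])
```

With the two-hour window, only the 10:55 denial survives the cutoff; the 8:15 denial is too old and the 10:40 entry is a permit, not a deny.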

One part of any good security management tool is the ability to make some sense about disparate events that happen on the network that may have a single underlying cause. A number of log-management tools have correlation and analysis tools that can be used to determine what happened after the fact. In the next screen, we can construct custom queries, which in this case we would use to determine if three particular events occurred within 30 seconds from the same source IP address.
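The correlation query described above — did three events occur within 30 seconds from the same source IP address? — amounts to a sliding-window check over events grouped by source. Here is a minimal sketch under that assumption; the event tuples and names are invented for illustration.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical normalized events: (timestamp, source IP, event name).
events = [
    (datetime(2007, 10, 1, 12, 0, 0), "10.1.1.5", "login_failure"),
    (datetime(2007, 10, 1, 12, 0, 10), "10.1.1.5", "login_failure"),
    (datetime(2007, 10, 1, 12, 0, 25), "10.1.1.5", "login_failure"),
    (datetime(2007, 10, 1, 12, 5, 0), "10.1.1.9", "login_failure"),
]

def correlate(events, count=3, window=timedelta(seconds=30)):
    """Return source IPs that produced `count` events within `window`."""
    by_src = defaultdict(list)
    for ts, src, _name in sorted(events):
        by_src[src].append(ts)
    hits = set()
    for src, times in by_src.items():
        # Slide a window of `count` consecutive events and compare the
        # span of each window against the time limit.
        for i in range(len(times) - count + 1):
            if times[i + count - 1] - times[i] <= window:
                hits.add(src)
                break
    return hits

print(correlate(events))
```

Only 10.1.1.5 is flagged: its three events span 25 seconds, while 10.1.1.9 generated a single event.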

So far we have explored the logging data and conducted some lightweight analysis. Next we'll start to dig deeper to find out what kind of activity is really going on. The next screen is a good place to start comparing our logs and normalizing them for further analysis. You can rotate the graphical display, add and delete columns and rows, and perform other manipulations to expose patterns and potential network issues.

Many of these log management tools have reporting capabilities as well. For instance, when an employee may be suspected of wrongdoing, it's possible to use a forensics wizard to filter the logs for all traffic originating from the suspect user's ID.
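At its core, that kind of forensics report is a filter keyed on the user ID. A minimal sketch, assuming the product has already normalized its events into records with a user field (the record layout and names below are hypothetical):

```python
# Hypothetical, already-parsed log records; a real forensics wizard
# would pull these from the product's normalized event store.
records = [
    {"user": "jsmith", "action": "file_copy", "target": "plans.doc"},
    {"user": "adoe", "action": "login", "target": "workstation-12"},
    {"user": "jsmith", "action": "email_send", "target": "outside-host"},
]

def traffic_for_user(records, user_id):
    """Filter the log down to events originating from one user ID."""
    return [r for r in records if r["user"] == user_id]

for r in traffic_for_user(records, "jsmith"):
    print(r["action"], r["target"])
```

The resulting subset is what a report viewer would then let you expand, sort and drill into.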

When this report is run, we get the results in the next screen. From there, it's possible to expand and sort the various lines of the report and drill down further to determine exactly what the mysterious J. Smith has been up to.

These types of reports are most useful when you know ahead of time what to look for, such as when gathering evidence for an electronic discovery request or some other external demand.

All log analyzers are actually mini development environments and come with report builders, rule creation engines, alert managers and other forensic add-ons. With some products, separate pieces of software are needed to run these tools, while others have some of them built-in. For example, you have rule sets that can control what information is filtered, what alerts are activated and how often they are acted upon, and how information is aggregated for auditing or compliance purposes. In addition, there are other tools to automate or script some of the more common processes for collecting data or parsing the logs from various servers and network devices.
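To make the rule-set idea concrete, here is one way such rules might look when reduced to code: each rule decides whether a matching event is filtered out, stored, or escalated to an alert. This is an invented sketch of the concept, not any product's actual rule syntax.

```python
# Hypothetical rule definitions: a match predicate plus a disposition.
# The max_per_hour field suggests a rate limit on alert firing; that
# throttling is noted but not enforced in this short sketch.
rules = [
    {"match": lambda e: e["severity"] >= 4,
     "alert": True, "max_per_hour": 10},
    {"match": lambda e: e["type"] == "audit",
     "alert": False, "max_per_hour": 0},
]

def apply_rules(event, rules):
    """Return the disposition of an event under the first matching rule."""
    for rule in rules:
        if rule["match"](event):
            return "alert" if rule["alert"] else "store"
    return "discard"

print(apply_rules({"severity": 5, "type": "auth"}, rules))
print(apply_rules({"severity": 1, "type": "audit"}, rules))
```

A high-severity event triggers an alert, an audit event is quietly stored for compliance, and anything unmatched is discarded.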

About the author:
David Strom is one of the leading experts on network and Internet technologies and has written extensively on the topic for nearly 20 years. He has held several editorial management positions for both print and online properties, most recently as Editor-in-Chief of Tom's Hardware. In 1990, Strom created Network Computing magazine and was its first Editor-in-Chief, establishing the magazine's networked laboratories. He is the author of two books: Internet Messaging (Prentice Hall, 1998), which he co-authored with Marshall T. Rose, and Home Networking Survival Guide (McGraw-Hill/Osborne, 2001). Strom is a frequent speaker, panel moderator and instructor and has appeared on Fox TV News Network, NPR's Science Friday radio program, ABC TV's World News Tonight and CBS-TV's Up to the Minute.

This was first published in October 2007