This tip is part of SearchSecurity.com's Integration of Networking and Security School lesson, Application and network log management program planning. For more learning resources, visit either the lesson page or the Integration of Networking and Security School main page.
The right log management tool can go a long way toward reducing the burden of managing enterprise system log data. However, the right tool can quickly become the wrong tool unless an organization invests the time and effort required to make the most of it. Diana Kelley offers six log management best practices to ensure a successful implementation.
A fool with a tool is still a fool -- Don't spend millions on a log management system if you're not prepared to invest the time in installing and managing it properly. Log management systems must be configured to parse the events and data that matter to the organization so that reports have both business and technical value. Another "fool" mistake is failing to review the alert console, thereby missing critical security events. Don't make the mistake of committing to log management technology without committing the time necessary to use it well.
Pre-define requirements to streamline RFPs -- Creating RFPs is a time-consuming process, but some requirements, once defined, can be re-used in subsequent RFPs. This is often the case with logging requirements because the baseline of what's needed (format of the log file, data written to the log file, etc.) remains the same. Another benefit of pre-defined requirements is that they stay consistent from one RFP to the next while streamlining the RFP cycle.
Make sure you have the information you need -- To write effective correlation rules, the log management system must have enough contextual data to analyze. For example, where specifically did the traffic or activity come from? Answering that requires the source IP address, which means the log management system must be collecting that field in order for the engine to parse it. What happened on the target device or application? If an organization wants to write log analysis rules and alerts for an activity, the log data must actually record that activity.
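To make the point concrete, here is a minimal sketch of a correlation rule that only works because the source IP was logged. The log format, field names (`src=`, `result=`), and three-failure threshold are all hypothetical, chosen for illustration; real device log formats and rule engines vary.

```python
import re
from collections import Counter

# Hypothetical log lines; real formats differ by device and configuration.
log_lines = [
    "2024-05-01T10:00:01 src=203.0.113.5 action=login result=failure",
    "2024-05-01T10:00:03 src=203.0.113.5 action=login result=failure",
    "2024-05-01T10:00:05 src=198.51.100.7 action=login result=success",
    "2024-05-01T10:00:07 src=203.0.113.5 action=login result=failure",
]

# The rule can correlate by source only because src= was written to the log.
pattern = re.compile(r"src=(?P<src>\S+) action=login result=failure")

failures = Counter(
    m.group("src") for line in log_lines if (m := pattern.search(line))
)

# Rule: alert when a single source accounts for 3 or more login failures.
alerts = [src for src, count in failures.items() if count >= 3]
print(alerts)  # ['203.0.113.5']
```

If the source IP field were missing from the log data, no amount of rule-writing could recover it -- which is exactly the point of this tip.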
Think beyond static reporting -- The last thing most organizations need is another list or spreadsheet filled with rows and rows of data and no overarching analysis model to make sense of it all. Alerting should be done not just on "the characteristics of individual rows but also on sets" and baselines of expected or acceptable activity. Consider logins to a critical database. The normal baseline may be two failed logins, but if the password requirements for that system change from a simple dictionary word to an 8+ character non-dictionary string, login failures can be expected to rise while users get accustomed to the new rules. An intelligent log management system can be tuned to monitor such trends and provide feedback to administrators, who may decide to use the trending information to temporarily raise the alerting threshold.
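As a rough sketch of the idea, the threshold below is derived from a moving baseline of recent activity rather than hard-coded per row. The daily failure counts, window size, and multiplier are all invented for illustration, not taken from any particular product.

```python
# Hypothetical daily failed-login counts for a critical database.
# The first seven days reflect the normal baseline; the later jump
# follows a password-policy change, not an attack.
daily_failures = [2, 1, 2, 3, 2, 2, 1, 9, 8, 10, 7]

def trend_threshold(history, window=7, multiplier=3):
    """Derive an alert threshold from a moving baseline of recent counts."""
    recent = history[-window:]
    baseline = sum(recent) / len(recent)
    return baseline * multiplier

# Threshold computed from the pre-change baseline (about 1.9 failures/day).
threshold = trend_threshold(daily_failures[:7])
print(round(threshold, 1))
```

After the policy change, an administrator reviewing the trend could widen the window or raise the multiplier so the expected spike in failures doesn't flood the console with false alerts.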
Use log data to figure out what is happening or what just happened -- "Logs are wonderful for outages," because, very often, all of the information necessary to determine what is causing (or caused) an outage can be found in the log files themselves. During a crisis, staff often goes into reactive mode, sometimes relying on intuition, speculation, and isolated, unrelated pieces of information to piece together what is going on. But logs are a record of what actually happened. Systems that allow staff to write and run reports in real time based on outage information deliver the facts that response teams need to understand what's happening on the network.
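The kind of ad hoc outage report described above can be sketched as a time-windowed query over log records. The records, timestamps, and five-minute window here are hypothetical; in practice the query would run against the log management platform's own search interface rather than an in-memory list.

```python
from datetime import datetime, timedelta

# Hypothetical (timestamp, message) log records around an outage.
records = [
    ("2024-05-01T09:58:00", "INFO  health check ok"),
    ("2024-05-01T10:01:12", "ERROR db connection pool exhausted"),
    ("2024-05-01T10:01:30", "ERROR upstream timeout on /checkout"),
    ("2024-05-01T10:20:00", "INFO  health check ok"),
]

def outage_report(records, outage_start, window_minutes=5):
    """Return error-level entries logged near the start of an outage."""
    start = datetime.fromisoformat(outage_start)
    lo = start - timedelta(minutes=window_minutes)
    hi = start + timedelta(minutes=window_minutes)
    return [
        msg for ts, msg in records
        if lo <= datetime.fromisoformat(ts) <= hi and "ERROR" in msg
    ]

for line in outage_report(records, "2024-05-01T10:01:00"):
    print(line)
```

Rather than speculating, the response team gets the error events that actually surround the outage window.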
Think outside the security box -- Log management systems are excellent for aggregating and analyzing information from security devices for security awareness, but the information being gathered can serve other purposes as well. For example, an organization "can analyze the customer experience for [your] top ten business relationships." Many trend- and click-tracking Web application reporting systems don't provide a granular view of the actual customer experience. "Well-designed application logging would take the customer experience into account," extending the utility of log management well beyond the security box.
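As one possible illustration of such application logging, the structured entries below record a customer identifier, latency, and status code alongside the usual fields, so the same log stream can answer experience questions per business relationship. The JSON schema, customer names, and field names are all invented for this sketch.

```python
import json
from collections import defaultdict
from statistics import mean

# Hypothetical structured application log entries (one JSON object per line).
entries = [
    '{"customer": "acme", "path": "/orders", "latency_ms": 120, "status": 200}',
    '{"customer": "acme", "path": "/orders", "latency_ms": 480, "status": 500}',
    '{"customer": "globex", "path": "/search", "latency_ms": 90, "status": 200}',
]

latencies = defaultdict(list)
server_errors = defaultdict(int)
for raw in entries:
    event = json.loads(raw)
    latencies[event["customer"]].append(event["latency_ms"])
    if event["status"] >= 500:
        server_errors[event["customer"]] += 1

# Per-customer experience summary: average latency and error count.
for customer in latencies:
    print(customer, round(mean(latencies[customer])), server_errors[customer])
```

Because the customer is recorded in every entry, the same data that feeds security reports can also show which top accounts are seeing slow responses or errors.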
About the author:
Diana Kelley is a partner with Amherst, N.H.-based consulting firm SecurityCurve. She formerly served as vice president and service director with research firm Burton Group. She has extensive experience creating secure network architectures and business solutions for large corporations and delivering strategic, competitive knowledge to security software vendors.