Monitoring missteps can affect accuracy of audits
One often overlooked shortcoming of DAM software products is their reliance on network monitoring. For non-critical database infrastructures, collecting SQL activity from the network remains viable. For compliance initiatives, however, an agent installed on the database platform that monitors all connections, including administrative activity, is the preferred choice for a DAM software product.
These collection methods gather raw SQL statements, including the variables embedded in the query, which native auditing and most other data collection options do not store. However, with both network traffic monitoring and agent-based deployments, the accuracy and completeness of the collected data vary considerably. Under heavy load, some implementations miss packets. Since a missed query is seldom noticed, no one complains. But when auditing transactions for Sarbanes-Oxley compliance, for example, missing transactions render reports invalid.
It is for these reasons that most compliance-driven deployments combine a network agent or memory scanner with native audit data collection. Combining the accuracy of the native audit trail with the full detail of the original query provides the best of both worlds.
Policy overload and performance overhead
Performance is still a concern for database activity monitoring. As with any security product, as the number of deployed policies increases, so does the computational overhead required to analyze activity. Every collected query or transaction is compared against every policy, so going from 20 to 40 policies has the same effect as going from 2 million to 4 million transactions per day. DAM software performance ceilings are therefore just as likely to be caused by deploying too many policies as by collecting too many transactions.
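To see why policy count and transaction volume scale cost the same way, consider a minimal sketch of a rule engine (hypothetical code, not any vendor's implementation): every collected event is checked against every policy, so total work is proportional to events times policies.

```python
# Hypothetical DAM rule engine sketch: each collected event is compared
# against each policy, so cost grows with (events x policies).

def analyze(events, policies):
    """Return matched (event, policy-name) pairs and the comparison count."""
    matches = []
    comparisons = 0
    for event in events:
        for policy in policies:
            comparisons += 1
            if policy(event):
                matches.append((event["sql"], policy.__name__))
    return matches, comparisons

# Two toy policies: flag DROP statements and off-hours admin activity.
def drop_statement(event):
    return "DROP" in event["sql"].upper()

def off_hours_admin(event):
    return event["user"] == "admin" and not (8 <= event["hour"] < 18)

events = [
    {"sql": "SELECT * FROM orders", "user": "app", "hour": 10},
    {"sql": "DROP TABLE audit_log", "user": "admin", "hour": 2},
]

_, cost_two = analyze(events, [drop_statement, off_hours_admin])
_, cost_four = analyze(events, [drop_statement, off_hours_admin] * 2)
print(cost_two, cost_four)  # doubling the policy list doubles the work: 4 8
```

Doubling either the event list or the policy list doubles the comparison count, which is the arithmetic behind the 20-to-40-policies comparison above.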
Here are a couple of guidelines to follow to ensure acceptable DAM software performance:
- Behavioral policies require that a behavioral profile be accumulated and analyzed alongside current activity. Keep behavioral profiles to a minimum, as analyzing them is more complex.
- Determine when the DAM software analyzes policies: at the time records are collected, or after they are stored within the DAM product? Storing collected data and then re-querying it for analysis adds latency and is likely a poor design choice on the vendor's part.
- Optimize policies so the fastest and easiest part of the comparison runs first. Just as with query optimization, the way a rule is written has a dramatic impact on performance. Review and optimize policies and, if necessary, have the vendor rewrite inefficient rules.
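The last guideline can be illustrated with a small sketch (hypothetical rules, not a vendor's policy language): two logically identical rules, one of which runs a cheap equality check before an expensive pattern match, so short-circuit evaluation skips the costly comparison for most traffic.

```python
# Two logically identical policies; the "fast" version puts the cheap
# user check first so the expensive regex scan rarely runs.
import re
import timeit

SENSITIVE = re.compile(r"\b(ssn|credit_card|salary)\b", re.IGNORECASE)

def rule_slow(event):
    # Expensive full-text regex scan runs on every event.
    return SENSITIVE.search(event["sql"]) is not None and event["user"] == "admin"

def rule_fast(event):
    # Cheap equality check first; `and` short-circuits, so the regex
    # never runs for the bulk of non-admin application traffic.
    return event["user"] == "admin" and SENSITIVE.search(event["sql"]) is not None

events = [{"sql": "SELECT salary FROM payroll WHERE id = %d" % i, "user": "app"}
          for i in range(10_000)]

results_slow = [rule_slow(e) for e in events]
results_fast = [rule_fast(e) for e in events]

slow = timeit.timeit(lambda: [rule_slow(e) for e in events], number=10)
fast = timeit.timeit(lambda: [rule_fast(e) for e in events], number=10)
print(f"slow {slow:.3f}s  fast {fast:.3f}s")
```

Both orderings match exactly the same events; only the evaluation cost differs, which is the sense in which rule wording affects performance.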
About the author:
Adrian Lane is CTO of consultancy Securosis.
This was first published in May 2010