Many organizations depend upon content filtering and usage monitoring packages to enforce their acceptable use
policies. These packages play two important roles:
- They deter unacceptable activity by making users aware that their activity is monitored. After all, few people will intentionally violate a policy if they perceive a significant risk of being caught.
- They detect unacceptable activity when it does occur. These systems allow you to take appropriate administrative and technical action when a policy violation occurs.
Take a moment to reflect upon the systems you have in place in your current environment. Here's the critical question for today: who analyzes the results, and using what criteria? The interpretation of system output is a critical (and possibly weak) link in the chain of acceptable use policy enforcement. If you don't have appropriate controls in place, you have a clear vulnerability that is ripe for exploitation.
Consider a scenario where you have a single system administrator reviewing the content monitoring system output and providing feedback to upper management when he or she detects unacceptable use (based upon a subjective "gut call"). There are two potential problems in this case. First, the administrator may selectively enforce the policy, allowing his or her buddies to slide while subjecting political enemies to extreme scrutiny. Second, the administrator is forced to make a value judgment potentially without all of the necessary background information.
There are a few simple steps that you can take to minimize the impact of this vulnerability on your organization:
- Divide analysis responsibilities between two individuals (who, preferably, do not work with each other). The duties should be separated so that each individual periodically works on each segment of the analysis. In the simplest scenario, the two individuals take turns conducting the analysis each week. Alternatively, you could divide the analysis into two discrete halves (perhaps by user base) and have them rotate halves each week. This minimizes the "protect your buddy, hurt your enemy" syndrome.
- Use objective criteria to flag potential misuse. Administrators should have a clear set of guidelines to follow when determining whether to flag a potential misuse event. These criteria may vary depending upon the type of activity. For example, you might decide that even a single hit on a pornography site warrants investigation, while browsing news or sports sites is investigated only if it consumes more than 5% of a user's time during working hours.
- Convene a separate committee responsible for confidentially reviewing potential misuse cases flagged by the administrators. This committee should include individuals with specific knowledge about the job responsibilities of different users and a firm understanding of the corporate acceptable use policy. They do not necessarily need to be technical specialists.
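The weekly rotation described above can be made mechanical so that neither analyst controls which half of the user base they review. Here's a minimal sketch; the analyst and segment names are purely illustrative, and it assumes you split the workload by user base:

```python
# Minimal sketch of a weekly rotation schedule for two analysts
# reviewing two halves of the monitoring output.
# Analyst and segment names below are hypothetical placeholders.
import datetime

ANALYSTS = ("analyst_a", "analyst_b")
SEGMENTS = ("users A-M", "users N-Z")

def assignment(day):
    """Return {analyst: segment} for the ISO week containing `day`,
    swapping halves on alternating weeks so each analyst periodically
    reviews every segment of the user base."""
    week = day.isocalendar()[1]
    if week % 2 == 0:
        pairing = zip(ANALYSTS, SEGMENTS)
    else:
        pairing = zip(ANALYSTS, reversed(SEGMENTS))
    return dict(pairing)
```

Because the schedule is a pure function of the calendar, anyone can verify after the fact who was responsible for which segment in a given week.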
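The objective criteria above can be written down as a simple rule table rather than left to gut feel. The following is a hedged sketch, not any product's real API: the category names, thresholds, and record format are assumptions chosen to mirror the pornography/news-and-sports example:

```python
# Hypothetical rule-based flagging sketch for content-monitoring output.
# Rules map a category to (max_hits, max_share_of_working_hours);
# a None hit limit means only the time-share rule applies.
RULES = {
    "pornography": (0, 0.0),      # even a single hit is flagged
    "news_sports": (None, 0.05),  # flagged only above 5% of work time
}

def flag_events(usage):
    """usage: list of dicts with keys 'user', 'category', 'hits', 'share'
    (share = fraction of working hours spent in that category).
    Returns (user, category, reason) tuples for committee review."""
    flags = []
    for record in usage:
        rule = RULES.get(record["category"])
        if rule is None:
            continue  # categories without a rule are not auto-flagged
        max_hits, max_share = rule
        if max_hits is not None and record["hits"] > max_hits:
            flags.append((record["user"], record["category"],
                          "hit threshold exceeded"))
        elif record["share"] > max_share:
            flags.append((record["user"], record["category"],
                          "time threshold exceeded"))
    return flags
```

The point of the sketch is that the administrator's decision reduces to applying a published table, and the flagged tuples become the input to the review committee rather than a final verdict.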
As with any security policy or procedure, it's critical that you implement controls to ensure consistent, effective enforcement of your acceptable use policy.
If you have questions, comments or suggestions about this article, please feel free to write me at firstname.lastname@example.org.
About the author
Mike Chapple, CISSP, currently serves as Chief Information Officer of the Brand Institute, a Miami-based marketing consultancy. He previously worked as an information security researcher for the U.S. National Security Agency. His publishing credits include the TICSA Training Guide from Que Publishing, the CISSP Study Guide from Sybex and the upcoming SANS GSEC Prep Guide from John Wiley. He's also the About.com Guide to Databases.