DLP monitoring: Defining policies to monitor data

Accurately fingerprinting the data is the first major component of a data loss prevention (DLP) implementation, and it leads to the next consideration buyers should evaluate: monitoring capabilities. The first area of functionality to assess is the tool's policy-creation engine. These policies define what data to evaluate, how monitoring should occur (by scheduled scan or, for host-based agents, continuous monitoring), what enforcement and alerting actions to take, the types of application and user access allowed for data interaction, and more.

The granularity of policy creation is one of the most important features to consider when evaluating data loss prevention tools, because these policies ultimately determine how ongoing DLP monitoring and enforcement occur throughout the organization.
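To make those policy dimensions concrete, the sketch below shows how such a policy might be expressed programmatically. This is a minimal illustration only; the DlpPolicy class, its field names and the example values are assumptions, not any vendor's actual schema.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical policy object illustrating the dimensions discussed above:
# what data to match, how monitoring occurs, what actions to take, and
# which applications and users may legitimately touch the data.
@dataclass
class DlpPolicy:
    name: str
    data_types: List[str]             # fingerprint sets or pattern IDs to evaluate
    monitor_mode: str                 # "scheduled_scan" or "realtime_agent"
    scan_schedule: str = ""           # cron-style schedule when scans are used
    actions: List[str] = field(default_factory=list)       # "alert", "block", "quarantine"
    allowed_apps: List[str] = field(default_factory=list)  # applications permitted to access the data
    allowed_users: List[str] = field(default_factory=list) # accounts permitted to access the data

# Example: watch cardholder data on file shares with a nightly scan,
# alert and quarantine on a match, and allow only the billing service to touch it.
pci_policy = DlpPolicy(
    name="pci-cardholder-data",
    data_types=["credit_card_fingerprints"],
    monitor_mode="scheduled_scan",
    scan_schedule="0 2 * * *",
    actions=["alert", "quarantine"],
    allowed_apps=["billing_app.exe"],
    allowed_users=["svc_billing"],
)
```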

Data at rest is usually monitored through scheduled scans that compare sensitive data types found in the scanned location against the existing fingerprint database and flag any changes. Another option, for host-based real-time assessment and alerting, operates much like file integrity monitoring tools by looking for changes to file attributes. The key things to look for with data-at-rest scans are the performance impact on data storage locations and systems (particularly databases), as well as the ability to find sensitive, previously undiscovered data without scanning the entire data storage platform.
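As a rough illustration of the scheduled-scan approach, the sketch below walks a directory, hashes each file and compares the result against a previously stored fingerprint set, so only new or changed files are queued for deeper content inspection. The fingerprints.json store and its format are assumptions for illustration, not how any particular product persists its fingerprint database.

```python
import hashlib
import json
from pathlib import Path

FINGERPRINT_DB = Path("fingerprints.json")  # assumed store of known file hashes

def file_hash(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def scan_for_changes(root: Path) -> list:
    """Compare current file hashes against the stored fingerprint set.

    Files that are new or whose hash has changed are returned for content
    inspection, so the entire data store does not have to be re-scanned.
    """
    known = json.loads(FINGERPRINT_DB.read_text()) if FINGERPRINT_DB.exists() else {}
    changed = []
    for path in root.rglob("*"):
        if not path.is_file():
            continue
        current = file_hash(path)
        if known.get(str(path)) != current:
            changed.append(path)
            known[str(path)] = current
    FINGERPRINT_DB.write_text(json.dumps(known, indent=2))
    return changed

if __name__ == "__main__":
    for suspect in scan_for_changes(Path("/data/shares")):
        print(f"Changed or new file queued for inspection: {suspect}")
```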

More on data loss prevention tools

Data loss prevention software to secure endpoints

Deploying DLP technology requires hands-on approach

Four DLP best practices for success

For data in use, monitoring rules should be linked to application and user access for specific locations and systems. When data is accessed, DLP tools need to identify not only the content of the data or file, but also the context. For example, DLP agents or scans should be able to distinguish between an application running under one user context and the same application running under another when accessing and processing data, because one may be perfectly legitimate while the other indicates a compromise or other attack.

Another key function to investigate, particularly for host-based data loss prevention tools, is the movement of data on a system. This differs from data in motion, which typically takes place in network traffic; host-based data movement is instead associated with attempts to print sensitive data or copy it to a USB drive on a particular system. DLP agents should be able to detect this while the data is spooled in the print buffer or moving into the clipboard from a copy-and-paste operation.
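A minimal sketch of that context check might look like the following: the decision depends not just on the data's classification but on which process and user account is touching it. The allow-list, classification labels and action names here are hypothetical illustrations.

```python
# Context-aware data-in-use check: the same file access can be legitimate or
# suspicious depending on which application and user account performs it.
# The allow-list below is an assumption for illustration, not a vendor schema.
ALLOWED_CONTEXTS = {
    # classification -> set of (process_name, user) pairs permitted to read it
    "cardholder_data": {("billing_app.exe", "svc_billing")},
    "source_code": {("git.exe", "dev_team"), ("ide.exe", "dev_team")},
}

def evaluate_access(classification: str, process_name: str, user: str) -> str:
    """Return an action ("allow" or "alert") for a data-in-use event."""
    permitted = ALLOWED_CONTEXTS.get(classification, set())
    if (process_name, user) in permitted:
        return "allow"
    # An unexpected process or account touching sensitive data may indicate a
    # compromise: alert, and a real agent could escalate to blocking if the
    # data then moves into the clipboard or a print spool.
    return "alert"

print(evaluate_access("cardholder_data", "billing_app.exe", "svc_billing"))  # allow
print(evaluate_access("cardholder_data", "powershell.exe", "jdoe"))          # alert
```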

For monitoring sensitive data in motion, data loss prevention products should be evaluated using criteria similar to those for other network security monitoring tools, such as intrusion detection systems and sniffers. The first criterion is speed and traffic analysis capacity: Can the device keep up with the volume and complexity of traffic in the required network segments? The next consideration is how the DLP product receives traffic: Some products connect to a SPAN port, or mirror port, on enterprise switches to receive copies of packets from various ports, while others sit inline, much like a network intrusion prevention system would.
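For a sense of what the content-inspection step looks like once a sensor has a copy of the traffic, whether from a SPAN port or an inline deployment, the sketch below scans a reassembled payload for candidate card numbers and validates them with the Luhn check. Real products parse protocols and reassemble streams first; this only shows the matching step, and the regular expression and sample payload are illustrative assumptions.

```python
import re

CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Standard Luhn checksum used to weed out false-positive digit strings."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def inspect_payload(payload: bytes) -> list:
    """Return candidate card numbers found in a reassembled payload."""
    text = payload.decode("utf-8", errors="ignore")
    hits = []
    for match in CARD_CANDIDATE.finditer(text):
        raw = re.sub(r"[ -]", "", match.group())
        if luhn_valid(raw):
            hits.append(raw)
    return hits

# Example: a payload captured off a mirror port or an inline sensor.
sample = b"POST /upload HTTP/1.1\r\n\r\ncard=4111 1111 1111 1111&exp=12/26"
print(inspect_payload(sample))  # ['4111111111111111']
```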

In general, inline systems are better able to react and block traffic than passive sensors relying on TCP resets or other techniques. Another monitoring consideration will be the variety of traffic types and protocols the system natively understands -- most will process well-known protocols and traffic by default, but organizations with numerous custom or legacy systems and applications, or specific traffic types, will want to query vendors and test traffic-parsing capabilities explicitly. Some DLP products can also perform basic network behavioral baselining, allowing anomalous traffic types or volumes to be flagged as well.
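The behavioral baselining mentioned above can be approximated with simple statistics: keep a rolling history of per-host outbound volume and flag samples that sit far outside the norm. The window size and sigma threshold below are arbitrary illustration values, not settings from any specific product.

```python
from collections import defaultdict, deque
from statistics import mean, stdev

WINDOW = 24          # number of past observations kept per host (assumption)
THRESHOLD_SIGMA = 3  # standard deviations considered anomalous (assumption)

history = defaultdict(lambda: deque(maxlen=WINDOW))

def observe_outbound_bytes(host: str, byte_count: int) -> bool:
    """Record an outbound-volume sample and return True if it looks anomalous."""
    samples = history[host]
    anomalous = False
    if len(samples) >= 2:
        mu, sigma = mean(samples), stdev(samples)
        if sigma > 0 and byte_count > mu + THRESHOLD_SIGMA * sigma:
            anomalous = True
    samples.append(byte_count)
    return anomalous

# Example: steady traffic followed by a sudden large transfer.
for volume in [10_000, 12_000, 11_500, 9_800, 10_500, 450_000]:
    if observe_outbound_bytes("10.0.0.42", volume):
        print(f"Anomalous outbound volume from 10.0.0.42: {volume} bytes")
```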

Read more on choosing data loss prevention products in our guide.


About the author
Dave Shackleford is founder and principal consultant with Voodoo Security; a SANS analyst, instructor and course author; as well as a GIAC technical director. He has consulted with hundreds of organizations in the areas of security, regulatory compliance, and network architecture and engineering. He is a VMware vExpert and has extensive experience designing and configuring secure virtualized infrastructures, and is the lead author of the SANS Virtualization Security Fundamentals course. He has previously worked as chief security officer for Configuresoft; chief technology officer for the Center for Internet Security; and as a security architect, analyst and manager for several Fortune 500 companies. Additionally, Dave is the co-author of Hands-On Information Security from Course Technology.

This was first published in April 2013
