Aerial View

Vulnerability management tools provide a realistic picture of the enterprise, with vulnerabilities viewed in the context of the IT landscape.

Imagine jumping from a plane. Your view from the air is quite different from your view on the ground. And your perception of your surroundings changes as you parachute down. That's because perception is subjective, individual and often fluctuates as new factors arise. The same holds true when it comes to securing enterprise networks.

By bringing perceptions of the network in line with reality, security practitioners can reduce the likelihood of mistakes. That's where vulnerability management (VM) comes in. Vulnerability management is an effective way for enterprises to understand their networks--without any preconceived notions.

Case in point: If the perception is that the patch process covers all critical systems, yet the reality is that the corporate e-mail server farm is unpatched, there's a good chance of serious trouble when the next big worm comes around.

Through the four essential tools of VM--asset identification, correlation, validation and remediation--VM solutions provide a big-picture view of vulnerabilities and determine their potential impact on your network.

No single VM solution is a silver bullet. You'll need to assess your enterprise readiness and determine how VM will be used within your organization. Once that evaluation is complete and you're ready to take the plunge, here's what to look for--and what to avoid--when it comes to VM tools.

Asset Identification
You can't manage what you don't know about. Chances are good there are devices on your network that are unmanaged, unmaintained and untracked. These could be machines in quality assurance or development labs, nomadic home machines, machines deliberately hidden behind a NAT device, vendor-maintained devices, or any other rogue or unexpected device. This is where asset identification tools help. They scan the network and report details about all the devices they find--both the expected and the unexpected. These scanning tools can be either host-based, running as an agent, or network-based, using an array of sensors. They can attempt to scan without logging in (an uncredentialed scan), either by using general reconnaissance techniques (e.g., OS fingerprinting, banner enumeration) or by launching "lite," non-detrimental vulnerability exploit scans against the machines.
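
To make the idea concrete, here is a minimal Python sketch of what an uncredentialed, network-based sweep looks like under the hood: it probes a list of hosts on a handful of well-known ports and records any service banner it can read. The addresses, ports and timeout are hypothetical placeholders; a real asset-identification tool adds OS fingerprinting, scheduling and far more care about impact.

```python
# Minimal sketch of an uncredentialed network sweep: connect to a few
# well-known ports and capture whatever banner the service volunteers.
# Hosts, ports and timeouts here are illustrative placeholders.
import socket

HOSTS = ["192.0.2.10", "192.0.2.11"]   # hypothetical address range
PORTS = [21, 22, 25, 80, 443]          # common services to probe
TIMEOUT = 2.0                          # seconds per connection attempt

def grab_banner(host, port):
    """Return the first data a service sends, or None if unreachable."""
    try:
        with socket.create_connection((host, port), timeout=TIMEOUT) as sock:
            sock.settimeout(TIMEOUT)
            try:
                data = sock.recv(256)  # many services (FTP, SSH, SMTP) speak first
            except socket.timeout:
                return ""              # port open, but the service stayed silent
            return data.decode(errors="replace").strip()
    except OSError:
        return None                    # closed, filtered or unreachable

def sweep():
    inventory = []
    for host in HOSTS:
        for port in PORTS:
            banner = grab_banner(host, port)
            if banner is not None:     # the port answered in some fashion
                inventory.append({"host": host, "port": port, "banner": banner})
    return inventory

if __name__ == "__main__":
    for record in sweep():
        print(record)
```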

Because vulnerabilities can occur in any of the software installed on a device, the more granular the information about that device that can be obtained, the better. Specifics on the OS version, patch level, installed applications, configuration settings and assigned roles are all useful data to collect.

Be mindful of the network landscape when placing sensors or scanning equipment. Note the location of switches, routers and firewalls to make sure there aren't any dead zones. And don't forget about unusual network-aware devices such as fax machines and printers.

Correlation
Now comes the tricky stuff: understanding the relationship and connection points between the devices you find. Without this understanding, you simply have a laundry list of gadgets. But knowing how they work together can give valuable insight: for example, during an incident response exercise, this type of data can help explain how a worm is propagating, which machines are spreading it, and how it gained entry in the first place.

This is where correlation is key. By aggregating data from a variety of sources, including application logs, system logs, traps and alerts, correlation tools help administrators track relationships between devices on the network. To ensure a correct comparison, the information is then normalized, or parsed, and put into a standardized format. From there, correlation rules are applied to identify relationships and causality, thus providing a more intelligent view of the network's vulnerability.
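
As a rough illustration of the normalize-then-correlate pattern, the Python sketch below parses two invented log formats into a common event schema and applies one simple correlation rule. The formats, field names and the "possible worm propagation" rule are assumptions for illustration only; real correlation engines handle many more sources and far richer rule sets.

```python
# Sketch: normalize events from two hypothetical log formats into one
# schema, then apply a correlation rule across the combined stream.
from collections import defaultdict

def parse_firewall(line):
    # hypothetical format: "2005-10-01T12:00:00 DENY 192.0.2.7 -> 10.0.0.5:445"
    ts, action, src, _, dst = line.split()
    host, port = dst.rsplit(":", 1)
    return {"time": ts, "source": src, "target": host, "port": int(port),
            "type": "firewall_deny" if action == "DENY" else "firewall_allow"}

def parse_ids(line):
    # hypothetical format: "2005-10-01T12:00:05|ALERT|MS03-026 exploit|192.0.2.7|10.0.0.5"
    ts, _, signature, src, dst = line.split("|")
    return {"time": ts, "source": src, "target": dst, "port": None,
            "type": "ids_alert", "signature": signature}

def correlate(events):
    """Flag sources that trip both a firewall deny and an IDS alert --
    a crude 'possible worm propagation' rule."""
    seen = defaultdict(set)
    for ev in events:
        seen[ev["source"]].add(ev["type"])
    return [src for src, kinds in seen.items()
            if {"firewall_deny", "ids_alert"} <= kinds]

fw_lines  = ["2005-10-01T12:00:00 DENY 192.0.2.7 -> 10.0.0.5:445"]
ids_lines = ["2005-10-01T12:00:05|ALERT|MS03-026 exploit|192.0.2.7|10.0.0.5"]

events = [parse_firewall(l) for l in fw_lines] + [parse_ids(l) for l in ids_lines]
print(correlate(events))   # ['192.0.2.7']
```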

Regulatory Compliance Dashboards--What's Realistic?
Let's face it, compliance is a hassle. Nobody likes the time required to audit or the expense of documenting controls. As such, nothing sounds more appealing than a vendor offering a "point, click and comply" solution. Public companies could go with a SOX suite, banks a GLBA solution, and federal agencies could pick one or more add-ons from the FIPS series. Sounds great. But how realistic is that in practice?

Unfortunately, there are no drag-and-drop compliance solutions. The regulations don't define specific success criteria to which a vendor can write. For example, SOX requires, "an assessment...of the effectiveness of the internal control structure and procedures of the issuer for financial reporting." Not only does this requirement encompass all the systems used in the financial reporting process--everything from legacy mainframe systems to the Excel spreadsheets used in the accounting department--but it even extends outside IT. One vendor cannot realistically offer a solution that ensures the effectiveness of non-automated processes.

That's not to say that no vendor can provide compliance value, however. Those that offer systems dashboards that report on workflow, monitor system activity, or flag policy violations deliver tremendous compliance value. That's because once an enterprise determines what constitutes compliance in its environment and has a handle on where its ineffective processes are, streamlining and refining automation efforts are a huge win. Not only does an improvement in these areas help meet current regulations, but it positions the enterprise for future regulation. As such, systems dashboards expedite audits and boost administrator confidence.

-- By Diana Kelley & Ed Moyle

A word of advice, though: Keep information about devices current, organized and centralized. A SIM/SEM tool for centralized information or alert management is a good choice for providing this functionality. These tools streamline the collection, storage and indexing of data from a variety of sources, such as host monitoring tools, log aggregation tools, time synchronization tools, IDS/IPS reports and policy/configuration repositories.

Validation
Not every post on Bugtraq is a reason to panic. Eight vulnerabilities, on average, are discovered daily. Trying to respond to each and every newly discovered vulnerability is a waste of time and resources, since only a small fraction of new vulnerabilities will actually apply to a given enterprise. AIX vulnerabilities aren't a concern to an AIX-free enterprise; IIS vulnerabilities aren't a problem if you're an all-Apache/Tomcat shop. Even if the vulnerability is in a deployed application or operating system, it may apply only to unpatched machines or machines with a particular service enabled, rather than to the entire population.

How do you know which vulnerability reports apply to your environment and which do not? Validation. Validation tools confirm which devices in the network are truly vulnerable and distill the vulnerability data into a focused list to help determine which vulnerabilities merit action. Validation compares information about the vulnerability against information about the environment. If the vulnerability matches what the enterprise has deployed, the vulnerability is flagged as requiring administrator attention. If not, the vulnerability is disregarded.
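
A hedged sketch of that matching step in Python: compare each advisory's affected product and version range against the asset inventory, and flag only the hits. The advisory records, inventory entries and the naive version comparison are invented for illustration; real tools draw on vulnerability feeds and the asset data gathered earlier.

```python
# Sketch: validate which advisories actually apply to the environment by
# matching affected software against an asset inventory. Data is invented.
advisories = [
    {"id": "ADV-001", "product": "IIS",    "max_vulnerable_version": "5.0"},
    {"id": "ADV-002", "product": "Apache", "max_vulnerable_version": "1.3.29"},
]

inventory = [
    {"host": "web01",  "product": "Apache",   "version": "1.3.27"},
    {"host": "web02",  "product": "Apache",   "version": "2.0.52"},
    {"host": "mail01", "product": "Sendmail", "version": "8.13"},
]

def version_tuple(v):
    """Very naive version comparison -- fine for a sketch, not for production."""
    return tuple(int(part) for part in v.split("."))

def validate(advisories, inventory):
    flagged = []
    for adv in advisories:
        for asset in inventory:
            if (asset["product"] == adv["product"]
                    and version_tuple(asset["version"])
                        <= version_tuple(adv["max_vulnerable_version"])):
                flagged.append((adv["id"], asset["host"]))
    return flagged

print(validate(advisories, inventory))   # [('ADV-002', 'web01')]
```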

Remediation
The next step is taking remedial action to keep vulnerable machines safe from threats. Specific steps will vary from vulnerability to vulnerability, but remediation typically includes applying patches, changing application or device-configuration settings, and applying filtering techniques such as firewalls, VLANs or other segmentation techniques to restrict traffic to the machine. When using remediation tools, think carefully about the level of automation that is appropriate. For example, do you perform regression testing on critical applications before deploying a potentially conflicting patch? Does the patch workflow require buy-in from teams that currently maintain the process? And what about auditing? Ensuring that automated actions are audited is extremely useful during application debugging.
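
The auditing point is easy to illustrate: whatever remediation action is taken, record what was done, to which host, on whose approval and when. The Python sketch below wraps a stubbed patch-deployment step with an audit-log entry; the log path, field names and patch IDs are hypothetical.

```python
# Sketch: wrap remediation actions with an audit record so that automated
# changes can be traced later. The remediation step itself is stubbed out.
import json
import time

AUDIT_LOG = "vm_audit.log"    # hypothetical path

def audit(entry):
    entry["timestamp"] = time.strftime("%Y-%m-%dT%H:%M:%S")
    with open(AUDIT_LOG, "a") as fh:
        fh.write(json.dumps(entry) + "\n")

def apply_patch(host, patch_id, approved_by):
    # Real code would call out to a patch-deployment system here.
    result = "success"        # stubbed outcome
    audit({"action": "apply_patch", "host": host, "patch": patch_id,
           "approved_by": approved_by, "result": result})
    return result

apply_patch("web01", "MS05-039", approved_by="change-board")
```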

Look Before You Leap
Ready to buy VM tools? Before you commit, make sure you have your gear and your plan of action in order. Just like a skydiver, an enterprise deploying a VM solution doesn't want to find out halfway down that something doesn't work.

Get buy-in from the owners of the scanned systems if you're considering automated scanning tools. There's always the chance that scanning might impact a system in an unanticipated way and lead to downtime. And the goal of VM is to minimize--not increase--downtime.

Set up governance and assign accountable owners of the data and processes in the system. Most systems within the enterprise are interdependent peers, but control of those systems is likely to be stove-piped and hierarchical. Decide ahead of time how the system will account for centralization, distribution and delegation of control. Individual business units, for example, might be resistant to a centralized system if they have limited visibility into the system or say about how decisions are made. Remember: the time to get buy-in is before any products are purchased. Without governance, clearly defined duties and buy-in, vulnerability management frameworks are just extra overhead.

Plan what VM data you'll collect and how it will be used from the very early stages. Decide, for example, how to handle triage in the event of an incident. A well-defined prioritization strategy is the difference between panic and strategic action when thousands of machines are at risk. Prioritize based on the dollar value of the device, the intellectual property on the device, PR value, business costs associated with downtime, or the amount of time required to remediate.
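
One way to turn those criteria into something actionable is a simple weighted score per at-risk asset, as in the sketch below. The factor names, weights and 0-10 scaling are assumptions; every enterprise will choose its own.

```python
# Sketch: rank at-risk assets with a weighted score. Factors and weights
# are illustrative -- substitute whatever your prioritization strategy uses.
WEIGHTS = {
    "asset_value": 0.35,       # dollar value of the device
    "data_sensitivity": 0.25,  # intellectual property / PR exposure
    "downtime_cost": 0.25,     # business cost per hour of outage
    "remediation_ease": 0.15,  # 10 = fast, cheap fix; quick wins bubble up
}

def priority(asset):
    """Each factor is assumed to be pre-scaled to 0-10 by whoever feeds the data."""
    return sum(asset[factor] * weight for factor, weight in WEIGHTS.items())

at_risk = [
    {"host": "mail01", "asset_value": 9, "data_sensitivity": 7,
     "downtime_cost": 9, "remediation_ease": 6},
    {"host": "lab-17", "asset_value": 2, "data_sensitivity": 1,
     "downtime_cost": 1, "remediation_ease": 9},
]

for asset in sorted(at_risk, key=priority, reverse=True):
    print(asset["host"], round(priority(asset), 2))
```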

Assess your environmental, architectural, technical and operational requirements. You need to determine, for example, whether high availability is a requirement, what can be reused, how metrics will be used, what constitutes success, and who owns which portions of the system.

Remember that any type of scanning will introduce some performance overhead. In general, the more detailed the data collected, the higher the overhead. For example, a ping sweep has a very minimal impact on the network and the scanned hosts, but it provides very limited data. On the other hand, numerous agents that send back highly detailed data provide more granular information but have a larger impact on the network and the hosts themselves. Some techniques, such as credentialed scanning and active vulnerability exploitation, can even cause systems to crash under certain conditions.

Pick a product flexible enough to accommodate the unique workflow of the enterprise. If the tools are not flexible enough to support your workflow, accountability might be assigned to the wrong group or black holes of untracked activity might appear in the audit trail. For example, if there's a process in place where OS patches are tested in the QA lab before being deployed across the enterprise, a tool sending "100 percent vulnerable" alerts while the patch is in QA will skew metrics and create useless noise.
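
A tool flexible enough to model that QA step might expose something like the state check sketched below, which suppresses "missing patch" alerts for patches already moving through an internal workflow. The workflow states, patch IDs and record layout are invented for illustration.

```python
# Sketch: suppress "vulnerable" alerts for patches already moving through
# an internal QA workflow. States and records are invented for illustration.
patch_workflow = {
    "MS05-039": "in_qa",       # being tested in the lab
    "MS05-027": "deployed",
}

SUPPRESSED_STATES = {"in_qa", "scheduled"}

def should_alert(finding):
    """Alert only if the missing patch is not already being handled."""
    state = patch_workflow.get(finding["patch"], "not_started")
    return state not in SUPPRESSED_STATES and state != "deployed"

findings = [
    {"host": "web01", "patch": "MS05-039"},   # in QA -> no alert
    {"host": "db02",  "patch": "MS05-043"},   # untracked -> alert
]

for f in findings:
    if should_alert(f):
        print("ALERT:", f["host"], "missing", f["patch"])
```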

Consider the metrics VM tools might give you. Metrics help track the security posture of the enterprise over time. You can watch, for example, the degree to which hosts maintain compliance with configuration policy, the number of vulnerable systems on your network, and the amount of time required to remediate threats. Metrics are also extremely useful as a proof point in budget negotiations, allowing you to quantify the ROI of the system itself and provide hard data to back up resource allocation. Remember, though, that there's nothing magical about metrics. They provide specific information about what's happening in the environment--what it looks like and how it changes over time. To be useful, though, what you track has to be relevant to the questions you need answered.

Leverage existing technology whenever possible. This saves time and can increase the efficiency of the overall solution. Existing inventory-tracking systems can feed the initial asset map, procurement systems can keep inventory updated, and software distribution tools can provide up-to-the-minute data on what software hosts have installed. Incorporating existing workflows ensures VM stays current and maximizes data coming out of the system. For example, VM can initiate patch deployment workflows when needed, and can kick off incident response procedures at the first sign of trouble.

Set up mechanisms internally to gauge the effectiveness of the solution over time. Use realistic metrics to track whether or not the system is performing as expected. Concentrate on metrics that provide information about what is happening in the environment, such as the percentage of devices in compliance with policy or the percentage of devices at optimal patch level. To help refine the system, also include metrics that reflect process efficiency, such as how quickly machines are remediated. Evaluate them on a periodic basis to help drive improvements. Distill metrics down to high-level dashboard numbers to allow executive reporting.
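
Rolling raw VM records up into those numbers is mostly bookkeeping, as the sketch below shows for two common metrics: the percentage of devices in policy compliance and the mean time to remediate. The device and remediation records are invented sample data.

```python
# Sketch: roll raw VM records up into two dashboard metrics --
# percent of devices in policy compliance and mean days to remediate.
from datetime import date

devices = [
    {"host": "web01",  "compliant": True},
    {"host": "web02",  "compliant": False},
    {"host": "mail01", "compliant": True},
]

remediations = [
    {"found": date(2005, 9, 1),  "fixed": date(2005, 9, 4)},
    {"found": date(2005, 9, 10), "fixed": date(2005, 9, 17)},
]

compliance_pct = 100.0 * sum(d["compliant"] for d in devices) / len(devices)
mean_days_to_fix = (sum((r["fixed"] - r["found"]).days for r in remediations)
                    / len(remediations))

print(f"Policy compliance: {compliance_pct:.0f}%")             # 67%
print(f"Mean time to remediate: {mean_days_to_fix:.1f} days")  # 5.0 days
```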

There is no magic bullet solution that does it all. Rather, vendors favor technologies that solve individual components of the vulnerability management equation--asset identification, correlation, validation and remediation. Each portion is crucial, though. An enterprise that doesn't know what's deployed (through asset identification) and can't track what state devices are in (through correlation) will almost certainly make mistakes in validation. Mistakes cost your business money and put it at risk. Combining multiple technologies intelligently to satisfy individual VM goals helps meet complex enterprise requirements.

The key to success with vulnerability management is starting from a strong foundation and building upward. Policies, procedures, prioritization and governance are all foundational components that VM should build on rather than replace.

This was first published in October 2005