Tip

Three techniques for measuring information systems risk

"Are we secure" is a critical question that top executives and security professionals need to answer. Securing information systems goes beyond process, policy or regulatory compliance. It means understanding our increasing

    Requires Free Membership to View

or decreasing propensity to manage information systems risk. Measuring this risk requires us to focus on the literal meaning of risk -- the probability that an unwanted "event" will occur.

We can classify security risk in three ways: manifest risk, inherent risk and contributory risk. This classification allows us to measure the probability of a compromise based on associated processes and performance, and provides a method for tracking our efforts at risk reduction.

Manifest risk is associated with an event or discrete activity that occurs in the computing environment. The common events that we can measure are flows, sessions, commands and transactions. From common events we get unwanted compromises: a breach of confidentiality, integrity, availability or liability. We know when a compromise occurs because it is self-defining; there's no way for a compromise to occur online without the occurrence of one or more corresponding events. For example, a propagating worm creates flows and commands to compromise systems. A spam message compromises the integrity of e-mail transactions. A hack attack creates flows, sessions and commands during the perpetration of the attack.

We determine manifest risk by calculating the likelihood that a common event is bad (i.e., will result in a compromise). We start by counting the total number of events, and then count the bad events. There are two ways to identify and count a bad event: first, through a known compromise, like the Slammer worm, and second, through a successful control event, like a blocked buffer-overflow attempt (after accounting for false positives). The calculation is bad events over total events normalized over time. So if 100 million daily events are common for your enterprise and you find 100 bad events, then the raw probability is one in a million. From that we can determine how long it will take to reach a million events -- it might be hours for some or years for others.
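The arithmetic above is simple enough to sketch directly. This is an illustrative calculation only; the function names and event counts are hypothetical, not part of the article's method.

```python
# Sketch of the manifest-risk calculation: bad events over total events,
# then the time needed to accumulate a given number of events.

def manifest_risk(bad_events, total_events):
    """Raw probability that a common event is bad (results in a compromise)."""
    return bad_events / total_events

def days_to_reach(target_events, events_per_day):
    """Days until the environment accumulates `target_events` events."""
    return target_events / events_per_day

# Example from the text: 100 bad events out of 100 million daily events.
probability = manifest_risk(100, 100_000_000)        # one in a million
days = days_to_reach(1_000_000, 100_000_000)         # 0.01 day at this volume

print(probability)  # 1e-06
print(days)         # 0.01 -- about 14 minutes; a quieter shop might take years
```

The point of normalizing over time is the last line: the same one-in-a-million probability implies very different exposure depending on event volume.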

Manifest risk is the most important type of risk because it deals with actual activity. But we also incur risk simply through deploying our computing infrastructure and exposing it to the possibility of use. This is inherent risk. This risk is different because we are dealing with the unknown. Every organization has a different volume and distribution of activity throughout the environment. In the case of inherent risk, our total population set is the total number of possible event combinations. We calculate possible event combinations by determining the total number of potential sources of activity and multiplying by the total number of possible targets. For example, a possible flow (one of the four events we are counting) is the total number of source IP addresses that are aware of the environment times the total number of open ports (the targets). Transaction events are self-defined by the user population, like e-mail in a general sense or mortgage applications in a specific one. For sessions we count the interactive user processes on a system, and for command operations we count instructions to our programs.
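The sources-times-targets calculation for possible flows can be sketched as follows. The counts used here are made-up inputs for illustration, not figures from the article.

```python
# Sketch of the inherent-risk population for flow events:
# sources that are aware of the environment times open target ports.

def possible_flows(source_ips, open_ports):
    """Possible flow-event combinations: aware source IPs x open ports."""
    return source_ips * open_ports

# e.g. 50,000 source IPs aware of the environment and 10,000 open ports
combinations = possible_flows(50_000, 10_000)
print(combinations)  # 500000000 possible flow combinations
```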

The unique aspect of inherent risk is that it is a relative number that gets the absolute risk measurement from the corresponding manifest risk measurement. Because of this, we evaluate changes to the number relatively, in terms of percentages. For instance, reducing the number of open ports from 10,000 to 8,000 results in a reduction of 20% of the inherent risk to the environment.
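Because inherent risk is evaluated relatively, the open-ports example above reduces to a percentage-change calculation, sketched here:

```python
# Minimal sketch: relative reduction in inherent risk when the number of
# possible event combinations (here, open ports) shrinks.

def relative_reduction(before, after):
    """Percentage reduction in the possible-event population."""
    return (before - after) / before * 100

# Reducing open ports from 10,000 to 8,000, as in the text:
print(relative_reduction(10_000, 8_000))  # 20.0 percent
```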

Finally, there is contributory risk -- the risk associated with process. Contributory risk is measured by identifying errors and omissions in the control infrastructure. Errors are simple process flaws -- a process did not perform as anticipated or designed, such as a user account that gets created without the appropriate approval or a required patch that doesn't get applied. Omissions are more difficult to address, but point to changes made in the environment or to configuration settings that exist without being under the control of a governing process.
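The error/omission distinction lends itself to a simple tally. The finding records below are hypothetical; the article does not prescribe a data format, only the two categories being counted.

```python
# Sketch of measuring contributory risk: count errors (a governed process
# that did not perform as designed) and omissions (changes made outside
# any governing process). Findings here are invented examples.

findings = [
    {"change": "user account created", "approved": False, "governed": True},
    {"change": "patch applied",        "approved": True,  "governed": True},
    {"change": "config setting edited", "approved": True, "governed": False},
]

# Errors: the change was under a governing process, but approval was missed.
errors = sum(1 for f in findings if f["governed"] and not f["approved"])

# Omissions: the change happened with no governing process at all.
omissions = sum(1 for f in findings if not f["governed"])

print(errors, omissions)  # 1 1
```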

It is common in our industry to assert that "you can't quantify risk" in computing. Yet it's possible, because every aspect of computing is discrete and countable. Security is about reducing risk, and an effective process for calculating risk will help us achieve that goal.



About the author
Pete Lindstrom, CISSP, is research director at Spire Security and a contributing editor for our sister publication Information Security magazine.


This was first published in February 2005
