The following is an excerpt from Measuring and Managing Information Risk by authors Jack Freund and Jack Jones and published by Syngress. This section from chapter 13 explores information security metrics.
Would you believe us if we told you there was one metric, and only one, that would tell you everything you needed to know about an organization's information security risk posture? No, probably not, and you'd be right. That said, the number of metrics required to gain a meaningful understanding of an organization's risk posture is not hundreds, or even dozens. Not, at least, if you understand the key elements that drive risk into an organization.
A comprehensive coverage of metrics isn't possible in a single chapter. Therefore, our focus here is on understanding where and how metrics fit into an information security risk management program and how to leverage them effectively. To provide this understanding, we'll share a framework you can use for identifying meaningful metrics and figuring out their value proposition. Of course, we'll provide examples, both in this chapter and the next one. Our examples will not be exhaustive, though. If you're looking for exhaustive examples, there are a lot of books on the market dedicated to information security metrics.
Speaking of other books, if you're looking for a very good book dedicated to the subject of information security metrics, we really like IT Security Metrics by Lance Hayden. Hayden goes into significant detail on the nature of data, statistics, and analysis. For the data geeks in the crowd, we also really like another book entitled Data-Driven Security: Analysis, Visualization, and Dashboards by Jay Jacobs and Bob Rudis. We would like to think that the concepts and frameworks presented in this book, and especially this chapter, will allow you to better leverage the wealth of information in those books.
Talking About Risk
The word "data" can take a plural or singular form (e.g., "data are" or "data is"). Scientists and other quants often prefer the plural form, while much of the rest of humanity seems to prefer the singular form. Fortunately, the word's meaning does not change based on the form that is used, nor is there confusion about what it means. For those reasons, we didn't sweat it. We hope you wouldn't either, at least while reading this book.
Current State of Affairs
The use of metrics promises to take our profession from art to science (or at least to something less superficial and more science-like). In order to realize that promise, however, our profession has to solve a few fundamental problems first -- problems we have beaten a drum about throughout this book. For example, without consistent and logical nomenclature, it becomes wickedly hard to normalize data or communicate effectively. After all, if one person's "threat" is another person's "risk" is another person's "vulnerability," it is extremely difficult to find common ground. How do you know what data you need in the first place, and how do you apply data to derive meaningful results, if your "models" look anything like "Threat x Vulnerability / Controls," or are simply checklists? Finally, the only way your metrics become meaningful is if they support explicitly defined objectives that matter. In this chapter, we continue the process of tying together what has been covered in the earlier chapters -- nomenclature, models, and objectives -- so that you can leverage metrics more effectively.
Talking About Risk
It's been our experience that information security organizations can often get away with having relatively useless (or worse, misleading) metrics. On numerous occasions, we have seen auditors, regulators, executives, and third-party assessors apparently attribute program maturity and effectiveness to a bunch of colorful charts and graphs, even when the metrics are either misleading or go entirely unused in decision-making.
Talking About Risk
We have heard the statement on more than one occasion that an important criterion for a "good" metric is that the data should be easy to acquire. Yes, it's great when data acquisition is easy, but if you rely on that to drive which metrics you use, you may miss out on really important information. All we're saying is don't just rely on the easy stuff. Understand the decision you are trying to support and get the best information you can, given your time and resources.
Metric Value Proposition
Remember what we said in the Risk Management chapter: that risk management boils down to a series of decisions and the execution of those decisions? Well, this entire chapter could perhaps be entitled "Decision Support," because the only reason for generating metrics is to inform decisions. In fact, if you're publishing metrics that aren't being actively used in decision-making, then you are wasting time and resources. Because of this, we're going to come at metrics with a clear eye toward their role in decision support. Behind every decision there are one or more goals that an organization is driving toward. Before we go on, ask yourself what overarching goal might form the foundation for decisions within a risk management program. We'll answer the question shortly, but here is a hint -- we discussed it in the Risk Management chapter.
Within the metrics world, you may have heard people talk about the "Goal, Question, Metric" (GQM) method for developing good metrics. We really like this approach because it helps people focus on and understand a metric's value proposition. For example, the GQM approach might go about defining a metric in the following way:
- Goal: Reduce the number of network shares containing sensitive information
- Question: How much sensitive information resides on network shares?
- Metric: Volume of sensitive information on network shares
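As a toy illustration (not from the book), a GQM triple can be captured as a simple record, which makes it easy to keep the goal and question attached to every metric you track. The class and field names here are our own:

```python
from dataclasses import dataclass

@dataclass
class GQMEntry:
    """One Goal/Question/Metric triple. Names are illustrative, not a standard API."""
    goal: str
    question: str
    metric: str

# The network-shares example from the text, expressed as a GQM record.
shares_metric = GQMEntry(
    goal="Reduce the number of network shares containing sensitive information",
    question="How much sensitive information resides on network shares?",
    metric="Volume of sensitive information on network shares",
)
```

Keeping all three fields together forces the question "what decision does this metric inform?" to be answered up front, rather than after the dashboard is built.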
This is a clear and concise way to define that kind of metric. However, you have to be a bit careful not to put the cart before the horse. The above example suggests that a decision had already been made regarding a different question. Perhaps that different question was, "Do we need to reduce the volume of sensitive information on our network shares?" (Apparently, the answer was "yes.") There may have been a question before that; something like, "Do we have significant concentrations of risk associated with sensitive information?" (Again, apparently the answer was "yes.") Absent the context of those questions and their subsequent decisions, chasing a metric like the volume of sensitive information on network shares might not be a good use of time, even given a great metric definition method like GQM. We have to define the big picture -- those "macro goals" -- first.
In keeping with our decision-based focus, we would like to make a fairly subtle but important observation about the question component of GQM. In the above example related to network shares, the original question was phrased as "how much," yet the implied questions that might have come before were phrased differently. The "Do we need…?" and "Do we have…?" phrasing is more explicitly aligned with decision-making because, depending on the answers, different actions may be required. The question of "how much" doesn't explicitly relate to a decision or goal. Implicitly, perhaps, but it's important that we understand the decision context for the metric as explicitly as possible.
Before we dive into the section on how to leverage GQM to make metrics meaningful, there is one more thing to point out -- comparison. Specifically, metrics are fundamentally a means of making comparisons between, for example:
- Current conditions and desired future conditions
- Risk scenarios (prioritization)
- Mitigation options (selection)
- Past conditions and current conditions (efficacy of past decisions and actions)
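To make the comparison idea concrete, here is a minimal sketch of one of the comparisons above -- prioritizing risk scenarios. The scenario names and annualized loss exposure figures are invented purely for illustration:

```python
# Hypothetical risk scenarios with made-up annualized loss exposure estimates (USD).
scenarios = {
    "unencrypted network shares": 2_400_000,
    "phishing-driven credential theft": 1_100_000,
    "lost laptops": 300_000,
}

# A metric only becomes useful when it enables comparison: here, ranking
# scenarios by loss exposure so that mitigation effort goes where it matters most.
ranked = sorted(scenarios.items(), key=lambda kv: kv[1], reverse=True)

for name, exposure in ranked:
    print(f"{name}: ${exposure:,}")
```

The same pattern applies to the other comparisons: current versus desired conditions, candidate mitigations, or past versus present exposure.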
You may have noticed that this also aligns with the risk management stack -- meaningful measurements enable effective comparisons, which enable well-informed decisions. We love it when things come together like this.
Beginning with the end in mind
So, did you come up with any ideas regarding our question about an overarching GQM-type goal for metrics? As you'll recall from the Risk Management chapter, our definition for risk management includes the phrase, "…cost-effectively achieve and maintain an acceptable level of loss exposure." That sounds suspiciously like a goal to us, so you get a diamond-encrusted platinum star if that's what you came up with. With that goal as our starting point, let's continue to break this down and apply the GQM approach for our metrics. We can begin by breaking our overarching goal into four subgoals:
- Being cost-effective
- Achieving alignment with the organization's risk appetite
- Maintaining alignment with the organization's risk appetite
- Defining the organization's risk appetite (that "acceptable level of loss exposure")
The next step is to break these down into more granular subgoals.
Breaking it down
In this section, we'll begin to break down our overarching goal into layers of subgoals, questions, and metrics. We'll wait to discuss these subgoals until a little further on, though, because some of the discussion will be lengthy. For now, let's just cover the outline.
In the mind-map shown in Figure 13.1, we have broken out our four main subgoals into another layer of granularity. Once you have a handle on this layer of abstraction, we think you will find it is pretty easy to figure out additional layers of subgoals, questions for these goals, and then the metrics that inform those questions and their associated decisions.
Before we move on, there are a few things we need to point out about the framework above:
- This framework may not be the only way to decompose the risk management goal. You may find that a different set of subgoals, questions, and metrics works better for you, or you might find that this framework provides most of what you need, only requiring a handful of your own tweaks.
- Efforts for achieving and maintaining will (and should) likely run in parallel. If you aren't tackling the root causes of variance and unacceptable loss exposure (primarily a part of the "maintain" function) even as you are mitigating current exposure, then your progress in achieving an acceptable level of risk is likely to be much slower. This is because even as you fix things, bad risk management practices will be introducing risk elsewhere.
- Even if your organization achieves its desired level of risk, the dynamic nature of the risk landscape is undoubtedly going to throw occasional curveballs that take the organization out of that comfort zone. Also, keep in mind that in some cases, management's comfort zone may shift. Either way, you should view this as a never-ending process. However, that has always been a mantra of information security and risk management, right?
The bottom line is that the value of any metric should be defined within the context of a goal. When it comes to information security, that goal is to manage risk cost-effectively over time through better-informed decisions. Starting with that as the focus, the rest is easy -- or at least easier.
About the author:
Dr. Jack Freund is an expert in IT risk management, specializing in analyzing and communicating complex IT risk scenarios in plain language to business executives. Jack has been conducting quantitative information risk modeling since 2007. He currently leads a team of risk analysts at TIAA-CREF. Jack has more than 15 years of experience in IT and technology consulting for organizations such as Nationwide Insurance, CVS/Caremark, Lucent Technologies, Sony Ericsson, AEP, Wendy's International, and the State of Ohio.
Jack Jones, CISM, CISA, CRISC, CISSP, is co-founder and president of CXOWARE, Inc. He has been employed in technology for the past thirty years, and has specialized in information security and risk management for twenty-four years. During this time, Jack has worked in the United States military, government intelligence, and consulting, as well as the financial and insurance industries. He has more than nine years of experience as a CISO with three different companies, with five of those years at a Fortune 100 financial services company. Jack's work there was recognized in 2006 when he received the ISSA Excellence in the Field of Security Practices award at that year's RSA Conference.