Why do many risk management programs fail? How do security and risk managers know they're providing value to their organization? For answers we've turned to Alex Hutton, currently a faculty member at IANS and the director of operations risk and governance at a major financial institution. Previously, Hutton was a principal in research and risk intelligence with Verizon Business. While there he was co-author of the Verizon Data Breach Investigations Report.
Hutton is also a co-founder of The Society of Information Risk Analysts, and an author at the New School of Information Security blog. Hutton also contributes, or has contributed in the past, to the Cloud Security Alliance (CSA), the Open Information Security Management Maturity Model (O-ISM3), the CIS metrics project and the Open Group Security Forum.
What do you see as one of the primary reasons why risk management programs fail?
Alex Hutton: The number one way to set yourself up for failure is to copy what your audit department does. You could say that audit is concerned with where failures can occur; risk management should be concerned with the frequency and impact of failures. Audit's role is to be consultative and help the organization understand how they can implement or adjust controls. Risk management is an economic function: it is consultative about getting the most bang for your buck in mitigating risk.
So that's why I believe most risk management programs end up failing: They end up just being yet another audit function. They end up merely enforcing policy rather than being consultative about what risk management moves make sense.
You can see how this lack of differentiation between audit and risk management affects the entire industry. There is a large movement to converge the two functions, especially among the big four consulting companies. They're all talking about how they can come in and make you more efficient by converging audit and risk. When you hear [from executive leadership] that this convergence starts to make a lot of sense to them, it's because you are probably just duplicating audit and your program is fundamentally flawed.
How can those in risk management tell if they have become -- or have always been -- merely an extension of the audit department?
Hutton: There are inherent similarities. Both organizations need to understand controls. Both organizations are interested in impact. But audit doesn't necessarily concern itself with the threat community. Audit doesn't necessarily care about reporting an aggregate picture of the organization's risk. They say they are very interested in aggregate risk, but if you look at how people run audit programs, and at what the industry standards say you should do, rarely do you get the level of reporting that a good, functional risk management program will give you.
Look at the charts in the Verizon Data Breach Investigations Report; when you look at the population of threats and their actions, the assets that they are attacking, and the impacts in terms of security attributes, you are digging into language that is completely foreign to most audit departments.
If you want to know how your program is viewed internally, ask your internal business customers for a very straightforward discussion about the differences between your program and what audit is providing. The most frank of your intra-business customers will say, "We already did this for audit. We're already doing this and this." Have a very frank conversation with a member of the business you can trust, and ask, "How much value am I providing you over what happens when you are audited?" If they say, "Not so much," that's a huge indicator that you are doing it wrong.
And I think the full convergence movement of risk and audit is just a recognition that this problem is endemic in risk management programs.
You're not a fan of risk catalogues. Could you explain why?
Hutton: You want to transition from risk cataloging to exposure cataloging. What most organizations do is build a giant register of bad things that can happen. The risk register becomes the worry list of all the possible things that could go bad. The problem with a risk register is that you never quite know when to stop.
I used to work for a company that was on the flight path at Dulles. What about a jet engine dropping on the data center? That's certainly something you could put into a risk register, but it is not a high-probability event, and you're not going to spend a "bajillion" dollars reinforcing your roof so it can withstand a jet engine dropping on it.
Organizations end up going out and doing this big kabuki dance about all the problems that could go wrong. But what if you start moving from the risk register population of all the possible bad things to asking, "What's the impact?" For example, go talk to your Exchange Server admin and ask some probing questions: talk about an event where the Exchange Server is compromised; talk about the sort of cost exposure the organization would incur; talk about how to make sure that you don't incur the worst case scenario there in terms of the distribution of losses; talk about how you may reduce the size of that loss distribution.
Cataloging that type of loss impact for your assets will make a whole lot more difference in the value you provide to the organization. You also don't care whether the Exchange Server is out because it was attacked, or because it was shot by a laser beam from an ancient alien astronaut who came back to Earth after seeing a Star Trek episode in deep outer space.
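The cause-agnostic, exposure-centric view Hutton describes can be sketched numerically: rather than enumerating ways the Exchange Server might fail, estimate the distribution of losses if it does. Here is a minimal Monte Carlo sketch in Python. Every figure, name, and distributional choice is an invented assumption for illustration, not a method the interview prescribes:

```python
import random
import statistics

def simulate_exposure(events_per_year, loss_mu, loss_sigma, trials=10_000):
    """Monte Carlo sketch of annual loss exposure for one asset.

    Each trial draws a count of outage events (a crude Poisson stand-in:
    twelve monthly Bernoulli draws) and a lognormal loss per event.
    All parameters are illustrative assumptions, not real data.
    """
    annual_losses = []
    for _ in range(trials):
        # monthly chance of an event, summed over the year
        events = sum(random.random() < events_per_year / 12 for _ in range(12))
        loss = sum(random.lognormvariate(loss_mu, loss_sigma) for _ in range(events))
        annual_losses.append(loss)
    annual_losses.sort()
    return {
        "median": statistics.median(annual_losses),
        # the tail the business should plan around, not the worry list
        "p95": annual_losses[int(0.95 * trials)],
    }

# Hypothetical Exchange Server outage exposure: ~2 events/year,
# per-event losses centered around e^11, roughly $60,000, with wide spread.
summary = simulate_exposure(2, 11, 1.0)
```

The point of the sketch is the shape of the conversation it enables: a median and a tail figure per asset, regardless of whether the outage comes from malware or a falling jet engine.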
What about your internal intel function? What does that tell you about the health of your risk management program?
Hutton: That is one very quick way to tell whether you're duplicating an audit function or you have a real risk management program. How's your intel function? Risk is really the collision of four sets of information -- threat, controls, asset and impact -- and a change to any of that information is something your program needs to concern itself with: a new threat, new controls, or a loss of efficiency in certain controls because somebody left the organization. It could be new assets you weren't aware of that aren't deployed according to security policy or are exposed, or a change in impact, such as a new regulatory exposure. It can be anything that changes the status quo of your threats, controls, assets and impact.
If you don't have an intel function built into your risk management program, then you are more like an audit function than a modern risk management program. Think about it: How many current risk management standards really spend time describing what comprises a good intel function? How many tell you how to source intelligence? How to deal with the potential impact of that intelligence? A typical risk scenario to worry about, based on new intelligence, would be when new malware strikes OS X and you have a population of 1,000 Macs. Now what?
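Hutton's framing of risk as the collision of four information sets suggests treating intel as a change feed over those factors. The following sketch models that idea in Python; the type names, fields, and trigger logic are hypothetical illustrations, not anything specified in the interview:

```python
from dataclasses import dataclass

# The four factors Hutton names: any change to one of them is intel
# the program should act on. Structure and names are illustrative.
FACTORS = {"threat", "controls", "asset", "impact"}

@dataclass
class IntelEvent:
    factor: str            # which of the four factors changed
    detail: str            # e.g. "new OS X malware in the wild"
    affected_assets: int   # how many live assets the change touches

def needs_reassessment(event: IntelEvent) -> bool:
    """A change to any risk factor that touches live assets should
    trigger a fresh look at the exposure catalog."""
    return event.factor in FACTORS and event.affected_assets > 0

# The article's scenario: new malware strikes OS X, 1,000 Macs deployed.
event = IntelEvent("threat", "new OS X malware", affected_assets=1000)
```

The design choice worth noting is that the trigger is factor-agnostic: a departed control owner or a new regulation flows through the same pipeline as a new malware family.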
Anything else you'd like to add about security and risk and how to tell if they're providing more value to the business than being an extension, or duplication, of the audit department?
Hutton: Yes. I love this exercise. It's about changing your perspective. I consider the point when you can remove the word "risk" from your vocabulary for a month to be the point when you've actually achieved the Zen of good risk management.
Let's use that Exchange Server example. Someone sends malware that targets Exchange Servers. That's a risk. Most people would go on and talk about the risks: "We believe the risk is high and therefore we think that these controls mitigate that risk and that they should be put in place."
A different conversation, a more modern risk management conversation that doesn't use the word "risk," would be: "The potential impact we see to the operation of the Exchange Server from this malware is between $10,000 and $10 million. Those losses stem from productivity losses and replacement losses if we can't meet certain objectives. There will also be response costs, because we might have to pull in an incident response team. There may be privacy concerns, and we may face fines and judgments from various regulatory bodies."
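One way to see how a range like $10,000 to $10 million could be composed is to sum per-category bounds for the loss forms Hutton lists (productivity, replacement, response, fines and judgments). A tiny Python sketch, with every dollar figure invented purely to illustrate the decomposition:

```python
# Hypothetical per-category loss ranges (low, high) in dollars for an
# Exchange Server compromise; all figures are invented for illustration.
loss_categories = {
    "productivity": (5_000, 2_000_000),
    "replacement": (2_000, 500_000),
    "response": (3_000, 1_500_000),
    "fines_and_judgments": (0, 6_000_000),
}

low = sum(lo for lo, _ in loss_categories.values())
high = sum(hi for _, hi in loss_categories.values())
print(f"Estimated exposure: ${low:,} to ${high:,}")
# prints: Estimated exposure: $10,000 to $10,000,000
```

The value of the decomposition is that each bound can be challenged by the business owner of that loss form, which is exactly the "risk"-free conversation the interview describes.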
If you can get through those sorts of conversations -- and if you can do that repeatedly for a month and never use the word "risk" -- you've won.
About the author:
George V. Hulme writes about security and technology from his home in Minneapolis. You can also find him tweeting about those topics on Twitter at @georgevhulme.