Security expert and Information Security magazine columnist Marcus Ranum goes one-on-one with Alex Hutton, director of operations and technology risk for a Fortune 250 financial institution and formerly with the Verizon Business RISK Team.
Marcus: I find that 99 percent of the people who want to talk about metrics seem to be looking for metrics as a way of understanding what's going on around them, when it seems to me that information security metrics are a way of tracking something that you already understand. What's your experience with that? Do you get people coming to you for some catch-all metric? Why is this problem so hard?
Alex: My experience is that most metric programs suck. The reason I find that most people are disappointed in their metric programs is, as you say, that most of us are measuring what we already understand. What unsuccessful programs seem to be missing is a model that gives those measurements meaning.
In this manner, we (the industry) have a chicken and egg problem. Where does one start? Inductive pursuits based on identification and collection of natural frequencies for various risk determinants, or deductive pursuits where we construct logical models that tell us where to look for meaningful information? Personally, having had experience with both (the Verizon DBIR, the FAIR model for risk analysis), I now think we need to do what Dan Geer has been suggesting - just do it all, and lots of it.
Marcus: To me, the "throw lots of it at the wall and see what sticks" approach is an admission of defeat.
Alex: I hope so. Because, IMHO, it's time to stop dorking around pretending like we're doing just fine and get serious. While we've got a couple of good resources on metrics (books, namely), they too are short on meaning. That meaning lies in the domain of the models, where we are relatively short on wisdom. That's why I continue to try to work information sharing efforts (see VERIS) even when it's politically tough to do. My thought is that someday something's going to have to give – one way or another.
Marcus: I think that "risk = threat X opportunity X ..." kind of formulas did more to hold us back than propel us forward, don't you?
Alex: I would argue that worse than silly models is the concept of premature standardization (I think that's a Geer-ism). We're running around making ISOs and NIST things and FISMA things that, though their very intelligent authors (I hope) would never state that they are perfect, are mistakenly taken as gospel for the "click and drool" generation. More bat-guano craziness has been reinforced by para-authoritative documentation than I care to think about.
We really are still listening (and many times prefer listening) to our own shamans vs. science.
Marcus: Lately I've been thinking that a big piece of this is simply that security is a vague problem (in the sense used by philosophers); we are trying to paint sharp lines around something that cannot have sharp lines painted around it. Can we actually learn anything really useful from keeping metrics and doing risk assessments? If so, what?
Alex: I would like to think so. Even with obvious sample bias to overcome, things like the Verizon DBIR have been enlightening; the work done around DataLossDB, too. And while I agree that vagueness is an accurate assessment (in that context), the modern approach to understanding complex systems might also be really useful.
What can we learn? I've been fortunate enough to be involved in efforts designed to watch for various patterns, on both a "macro-information risk" scale (what is happening largely to the broader population) and a "micro-information risk" scale (what is happening in the smaller context of a specific network). But again, we have to move past what ISACA and others tell us risk is to be able to acknowledge that things like micro/macro perspectives exist. Our industry's roots in engineering and audit are very much "closed system" and Newtonian cause-effect - the "sharp lines" as you suggest. That dog don't hunt here, not in complex adaptive systems. It is a fuzzy, probabilistic world in which we operate.
Marcus: Tell me more about what you mean by macro/micro, please? It almost sounds to me like you're saying cross-industry (or cross-agency) information sharing could really pay off.
Alex: Verizon has seen patterns that are broadly applicable and patterns that are industry-specific. No doubt, this has to do with a number of factors (threat motivation, prescriptive control patterns set by standards, lack of care by small business, etc.). But that's not necessarily the whole micro/macro thing I was talking about. I'm a big fan of Myron Tribus in a man-crush hero sort of way. He dabbled in quantum physics and probability theory and is like this missing cog between all the extra-security disciplines I'd like to steal from. And he thought of economics and thermodynamics in a very similar way.
I wish I had more opinion other than "this is what I think about in the shower and after several glasses of sangria," but I think that both the attack and defend sides of the security equation have these sorts of micro/macro states to study. So OWASP, MITRE, the penetration test of the system – these are all micro-state discussions around "secure." The Verizon data, Gene Kim's Visible Ops stuff - there are macro-state observations that also help manage security and risk. Deductive lambasting of PCI would be an even higher-level macro-state discussion.
This is all a very nice way of saying that we are concerned not only with how awesome specific controls are for any given attack but also, big picture, with how capable we are at managing and reacting as an industry; all of this contributes to the risk we face.
Marcus: What do you think of the various ideas we're hearing out of DoD/NSA about sharing event traces and signatures at a macro level? Do you think we might find that there is useful trend data at the very large scale?
Alex: Depends on entropy (see? Myron Tribus!) for the signatures.
Marcus: OK, you lost me there! Can you spoon feed me a bit of it?
Alex: Well, sharing signatures is one thing. It seems obvious that this information may have immediate value ("hey, look, by adding this to my corpus of sigs, I can stop more bad guys"). But there is context around these sigs; additional information that is probably really useful: demographic of the victim, controls in place, successful prevention from source immediately previous to attack, etc. To really have a successful information sharing program, we're going to need to share and analyze this other stuff too.
Allow me to (ab)use some specific information theory terms for our conceptual use: There's this notion in probability/information theory called Shannon entropy, which is simply a means to quantify the expected value of information in a message. Now pretend with me that we could quantify the value of what the U.S. government (or any other information sharing enterprise for that matter) is collecting and distributing. The idea here is that signature alone, while valuable, may not provide maximum possible entropy for us - the Shannon entropy value just isn't as high as it could be. I believe that we need that additional context to actually reach that ideal.
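The intuition Alex is borrowing can be made concrete with a few lines of Python. This is a minimal sketch, not anything from the interview: the function computes Shannon entropy over a set of shared records, and the hypothetical data (signature names like "sig_a" plus made-up context fields for victim demographic and controls) illustrates how sharing signatures *with* context yields messages with higher entropy than signatures alone.

```python
import math
from collections import Counter

def shannon_entropy(observations):
    """Shannon entropy H(X) = -sum p(x) * log2 p(x), in bits,
    estimated from the empirical distribution of the observations."""
    counts = Counter(observations)
    total = len(observations)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical shared records: signature alone...
sigs = ["sig_a", "sig_a", "sig_a", "sig_b"]

# ...versus signature plus context (victim demographic, controls in place)
sigs_with_context = [
    ("sig_a", "retail", "no_ids"),
    ("sig_a", "finance", "ids"),
    ("sig_a", "retail", "ids"),
    ("sig_b", "finance", "no_ids"),
]

print(shannon_entropy(sigs))               # ~0.81 bits: few distinct messages
print(shannon_entropy(sigs_with_context))  # 2.0 bits: richer, more informative messages
```

In this toy example the same four incidents carry more than twice the information once context rides along with the signature, which is the point about signature-only sharing falling short of the maximum possible entropy.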
Now we're back to that chicken and egg problem: How do we understand what gets us to this maximum entropy? I'd offer rapidly evolving models as the answer. But how do we get rapidly evolving models? Strong data. The sucktastic thing is that the industry trend is actually away from rapidly evolving models, thanks to the premature standardization I mentioned earlier. That said, this optimist thinks we will get there; it'll just be a question of how soon. That's because I see science as an inevitable means in any human pursuit. Why should security be any different?
Marcus: Well, science is a formalization of trial and error, so I can see what you mean. Thank you so much for taking the time to talk!
Alex: A pleasure.
This was first published in September 2012