Does risk management make sense?
We engage in risk management all the time, but it only makes sense if we do it right.
"Risk management" is just a fancy term for the cost-benefit tradeoff associated with any security decision. It's what we do when we react to fear, or try to make ourselves feel secure. It's the fight-or-flight reflex that evolved in primitive fish and remains in all vertebrates. It's instinctual, intuitive and fundamental to life, and one of the brain's primary functions.
Some have hypothesized that humans have a "risk thermostat" that tries to maintain some optimal risk level. It explains why we drive our motorcycles faster when we wear a helmet, or are more likely to take up smoking during wartime. It's our natural risk management in action.
The problem is our brains are intuitively suited to the sorts of risk management decisions endemic to living in small family groups in the East African highlands in 100,000 BC, and not to living in the New York City of 2008. We make systematic risk management mistakes--miscalculating the probability of rare events, reacting more to stories than data, responding to the feeling of security rather than the reality, and making decisions based on irrelevant context. And that risk thermostat of ours? It's not nearly as finely tuned as we might like it to be.
Like a rabbit that responds to an oncoming car with its default predator avoidance behavior--dart left, dart right, dart left, and at the last moment jump--instead of just getting out of the way, our Stone Age intuition doesn't serve us well in a modern technological society. So when we in the security industry use the term "risk management," we don't want you to do it by trusting your gut. We want you to do risk management consciously and intelligently, to analyze the tradeoff and make the best decision.
This means balancing the costs and benefits of any security decision--buying and installing a new technology, implementing a new procedure or forgoing a common precaution. It means allocating a security budget to mitigate different risks by different amounts. It means buying insurance to transfer some risks to others. It's what businesses do, all the time, about everything. IT security has its own risk management decisions, based on the threats and the technologies.
There's never just one risk, of course, and bad risk management decisions often trade one risk for another. Terrorism policy in the U.S. is based more on politics than actual security risk, but the politicians who make these decisions are concerned about the risks of not being re-elected.
Many corporate security decisions are made to mitigate the risk of lawsuits rather than address the risk of any actual security breach. And individuals make risk management decisions that consider not only the risks to the corporation, but the risks to their departments' budgets, and to their careers.
You can't completely remove emotion from risk management decisions, but the best way to keep risk management focused on the data is to formalize the methodology. That's what companies that manage risk for a living--insurance companies, financial trading firms and arbitrageurs--try to do. They try to replace intuition with models, and hunches with mathematics.
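A formalized model can be as simple as comparing expected losses with and without a control. Here is a minimal sketch in that spirit; the figures, rates and function names are all hypothetical, invented purely to illustrate the arithmetic:

```python
# A toy expected-loss model: replacing hunches with (simple) mathematics.
# All dollar figures and probabilities below are invented for illustration.

def annualized_loss_expectancy(single_loss, annual_rate):
    """Expected yearly loss: cost of one incident x incidents per year."""
    return single_loss * annual_rate

def net_benefit(single_loss, baseline_rate, mitigated_rate, control_cost):
    """Yearly savings a control produces, minus its yearly cost."""
    before = annualized_loss_expectancy(single_loss, baseline_rate)
    after = annualized_loss_expectancy(single_loss, mitigated_rate)
    return (before - after) - control_cost

# Hypothetical breach: $500,000 per incident, expected once every
# five years (rate 0.2/yr). Control A costs $30,000/yr and cuts the
# rate to 0.05; Control B costs $80,000/yr and cuts it to 0.01.
a = net_benefit(500_000, 0.2, 0.05, 30_000)
b = net_benefit(500_000, 0.2, 0.01, 80_000)
print(f"Control A net benefit: ${a:,.0f}")  # $45,000
print(f"Control B net benefit: ${b:,.0f}")  # $15,000
```

Note what the model makes visible: Control B reduces risk further, but Control A is the better buy. That is the sort of counterintuitive result a gut decision tends to miss, and it is only as good as the input estimates.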
The problem in the security world is we often lack the data to do risk management well. Technological risks are complicated and subtle. We don't know how well our network security will keep the bad guys out, and we don't know the cost to the company if we don't keep them out. And the risks change all the time, making the calculations even harder. But this doesn't mean we shouldn't try.
You can't avoid risk management; it's fundamental to business, just as it is to life. The question is whether you're going to try to use data, or whether you're going to just react based on emotions, hunches and anecdotes.
COUNTERPOINT by Marcus Ranum
Bruce, you're taking a very naturalistic--even evolutionary--view of risk management, and it's hard to disagree with something that has obviously worked for hundreds of thousands of years. The problem with any evolutionary viewpoint, however, is that we tend to sweep under the table the grim slaughter of the failures. The reason we got to where we are today (other than just plain dumb luck) is a pretty strong flight/fight reaction--in that order. As you say, our reflexes don't work in today's networks because there's no place to run--and the bad guys cheat.
It's fine to say we need to balance the costs and benefits of our decisions, but life has gotten a lot more abstract and our decisions are less visceral. If you let the guys in marketing have their way and open that port in the firewall, you might lose your job, but it's not as if the barbarians are going to force their way in and put everyone in the cubicle farm to the sword. Whenever someone says something like "a firewall is like a castle wall," I remind them that the stakes used to be different, and that's why the number of openings in a castle wall tended to be autocratically and rigidly controlled.
But that's the problem, isn't it? The stakes are moving and attitudes are not. It was one thing when a company's poor decision about a firewall rule affected its stock price; it's something completely different when you contemplate sovereignty-ending events like losing a war because too many secrets were leaked or a command/control network was compromised. I think a lot of decisions are being made based on wishful thinking rather than a clear-eyed assessment of costs and benefits.
I don't think we do a very good job of estimating costs, benefits or risk. Simple example: a company hooks SCADA systems to a wide-area network to save money, then spends many times the savings when it has to go back years later and secure it. The fact is, we're good at estimating risks right in front of us, but tend to leave long-range problems for later, when someone else who cares can deal with them. I've sat in on "risk assessment" exercises, and they generally seem to be a process whereby security practitioners try to manipulate senior management's perception by cooking up a bunch of wild guesses that multiply out to just the pretty number they think it should be. You say we shouldn't "trust our gut," but that's exactly what's going on.
Once, as part of a group building command-and-control networks for war fighters, I made myself amazingly unpopular by pointing out, as a potential consequence of a network breach, that the U.S. might no longer be a world power. Everyone remembers Imperial Rome for having been eventually toppled by the outsourcers it had relied on to secure its northern borders--not for its advances in engineering or indoor plumbing.
Risk assessment numbers are cooked to make them look complete, plausible and organizationally acceptable to upper management. It's as if a bunch of medieval castellans based their wall design on the worst-case scenario of being attacked by ducks rather than barbarians.
You're right: We lack the data to do risk management well. Unlike Las Vegas, which is built on straightforward statistics, computer security is infinitely squishy because the attack vectors change every day, the target surface changes every week, and the value of what's at stake changes every second. The insurance industry tracks a lot of discrete parameters to formulate its point spreads, but in technology we're adding new parameters every day, and they're fiendishly interdependent. We'll never have the data to do risk management well unless the rate of innovation (also known as "the rate at which security gets worse") slows down. And that brings its own risks, too.
In short, I don't think "risk management" is the correct term. We should call it something more accurate. The cargo cultists and voodoo practitioners would probably be insulted if we tried to insinuate we used their methods, so maybe we should just settle on "hand waving."