Feature

Information Security Decisions: From Dogma to Data


As a student of information technology as well as security and management science, I often find myself looking at security issues through a “decision-oriented” lens. For the most part, these two disciplines make good bedfellows, especially when one considers that engineers dominate the information security field. Please don’t misinterpret this; I have a healthy respect for, and advocate our need for, engineers (I’ve even helped teach and graduate some of them). However, not all of our problems are engineering problems, and I do believe our ability to truly manage information risk is hindered by a shortage of input from other disciplines. Although I’ve seen some improvement in this area in recent years, it hasn’t been enough.

One area where the engineering and management mindsets clash is decision making. The engineer asks, “What do I need to know to precisely formulate all factors in this decision?” Meanwhile, the management scientist asks, “What do I need to know to make a good (or at least better) decision?” In such matters, I side heavily with the management scientist.

The obvious application of this is in evaluating potential security initiatives and information security decisions: “Should we do X, Y or Z?” In most cases, it’s impossible to precisely formulate all factors in the decision, so we tend to abandon the “scientific” route and revert to other, less rigorous methods of making it. This is typically some form or mixture of compliance mandate (“We have no choice but to do X”), fear response (“We can’t allow some horrible thing, so let’s do Y”), or peer following (“Everyone else is doing Z, so we should too”). And this is where our predominantly engineering mindset hurts us. Instead, we should realize that organizations have always made decisions using varying amounts of information of varying quality. Our dilemma is not new. Valid and vetted approaches—or models—exist for structured decisions with an abundance of precise data and also for unstructured problems with sparse amounts of “fuzzy” data. These models are out there and are eagerly waiting for us to apply them to problems in our domain.
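
To make that concrete, here is a deliberately simple sketch, in Python, of the management scientist’s approach: compare two hypothetical security initiatives using nothing more than low/most-likely/high guesses for incident frequency and loss. Every figure below is invented for illustration, as is the assumed percentage of loss each control avoids; the point is only that even fuzzy inputs support a better-than-gut-feel comparison.

    import random

    def tri(low, mode, high):
        """Sample a triangular distribution from (low, most-likely, high) guesses."""
        return random.triangular(low, high, mode)

    def expected_net_benefit(freq, loss, annual_cost, reduction, trials=50_000):
        """Monte Carlo estimate of avoided annual loss minus cost for one initiative."""
        total = 0.0
        for _ in range(trials):
            incidents_per_year = tri(*freq)   # fuzzy guess: incidents per year
            loss_per_incident = tri(*loss)    # fuzzy guess: loss per incident
            total += incidents_per_year * loss_per_incident * reduction - annual_cost
        return total / trials

    # Hypothetical initiatives: (frequency guesses, loss guesses, annual cost, assumed share of loss avoided)
    initiatives = {
        "X": ((0.5, 2, 6), (20_000, 80_000, 400_000), 60_000, 0.6),
        "Y": ((1, 4, 10), (5_000, 30_000, 150_000), 120_000, 0.5),
    }

    for name, params in initiatives.items():
        print(f"Initiative {name}: expected net benefit ~ ${expected_net_benefit(*params):,.0f}/year")

The output is not a precise answer; it is a defensible ordering of options, which is usually what the decision actually needs.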

Wade Baker

Director of Security Research and Intelligence

Verizon

Kudos
Primary analyst and author of Verizon’s annual Data Breach Investigations Report, which is widely cited in the information security industry.

Develops and publishes Verizon’s VERIS framework, part of an international effort to standardize security incident tracking and categorization for improved data collection, reporting, analysis and decision making.

Baker’s research for the President’s Information Technology Advisory Committee was featured in the group’s 2005 report, Cyber Security: A Crisis of Prioritization.

But I’m not going to spend the rest of my allotted space discussing models. I started my career in information security as a modeler, but I’ve become much more of a muddler over time. Rather than trying to impose existing models or beliefs on a security problem, I’ve become much more interested in exploring data to see what it might have to say about those models and beliefs. On the downside, the message is often not as loud and clear as I’d like it to be—at least in the short term—but the upside is that the bits I do manage to discern are truer than my preconceptions. With enough data to muddle through, our measurements, models, beliefs and, ultimately, our decisions will be greatly improved.
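
As a small, contrived example of that muddling in Python: tally a handful of incident records against the belief that most breaches are caused by insiders. The records below are fabricated purely to show the mechanics; real analysis would run over thousands of shared incidents.

    from collections import Counter

    # Invented records, for illustration only; each one tags who caused the incident.
    incidents = [
        {"actor": "external"}, {"actor": "external"}, {"actor": "internal"},
        {"actor": "external"}, {"actor": "partner"},  {"actor": "external"},
    ]

    tally = Counter(record["actor"] for record in incidents)
    total = sum(tally.values())
    for actor, count in tally.most_common():
        print(f"{actor}: {count}/{total} ({count / total:.0%})")
    # If "internal" does not top the list, the data is pushing back on the belief.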

That is why I like data and pursue every little bit (pardon the pun) I can get my hands on. I think it holds more promise for the future of this industry than dogma. A lack of appropriate (or appropriately used) models is certainly a challenge to security decision making, but lack of data is a more fundamental and critical one. Whether we’re a nation, organization or individual, we do not have data of sufficient quality or quantity to create and test models, make informed decisions or take justified action to manage information risk.

Many in our field recognize this and increasingly espouse information sharing as the remedy for our data disease. I won’t disagree with that prescription: We have precious little information on our own, but together we could construct a much more complete and accurate picture of the risk landscape. The problem with data sharing, however, is that it does not happen automatically. You hear a lot more people talking about it than actually doing it. Thus, while we may have the right prescription, it doesn’t appear that we’re consistently taking our meds.

One of the advantages to being a researcher is that you get to highlight and study the problems without having to immediately advocate definitive solutions. And I’m going to hide behind that here, because the challenges of information sharing won’t be solved after you read this. But I do hope to provide some sense of direction for tomorrow. In that spirit, I have found that the three primary roadblocks to successful security information sharing are language, trust and incentives.

By language, I mean we have no commonly agreed upon taxonomy for the information we need to share. If I’m speaking apples and you’re speaking oranges, we’re not going to have a mutual understanding. Thus, a common vocabulary is essential to building a data set of sufficient size and quality to meet our decision-making needs.
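
Here is a minimal sketch, in Python, of what such a vocabulary buys us: a single incident record expressed with a handful of agreed-upon fields. The field names below echo the actor/action/asset/attribute view used by frameworks such as VERIS, but they are simplified stand-ins, not the actual schema.

    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class IncidentRecord:
        """One incident described in a shared, comparable vocabulary (illustrative fields)."""
        actor: str        # who caused it, e.g. "external.organized_crime"
        action: str       # what they did, e.g. "hacking.sql_injection"
        asset: str        # what was affected, e.g. "server.database"
        attribute: str    # how it was affected, e.g. "confidentiality.data_disclosure"
        records_lost: int = 0

    incident = IncidentRecord(
        actor="external.organized_crime",
        action="hacking.sql_injection",
        asset="server.database",
        attribute="confidentiality.data_disclosure",
        records_lost=12_000,
    )

    # Two organizations using the same fields can pool, compare and aggregate their records.
    print(json.dumps(asdict(incident), indent=2))

Once everyone is speaking apples, aggregation and comparison across organizations become possible at all.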

Assuming we overcome the language barrier in order to create apples-to-apples data, we must still trust each other enough to share it. In an industry dedicated to secrecy and protection, this runs counter to our mindset. Finding trustworthy partners and establishing trustworthy methods of sharing with them will be key to unlocking the potential that exists.

And that potential might be the most critical piece. At the end of the day, if there is no incentive to share, we’ll never put forth the effort to develop a common language or necessary trust. We must make it clear to those responsible for day-to-day security operations what they stand to gain in terms of effectiveness and efficiency with better information. We must help executives see how they can make better decisions and justify those decisions to others.

I’m not saying it’ll be easy, but I think we can do it. One of my favorite aspects of working on the Verizon Data Breach Investigations Reports is that I get to—hopefully—demonstrate that sharing sensitive information can actually work, even across public-private and international boundaries, and that the product of sharing is beneficial to many recipients at many different levels. I view it as a small contribution toward dispelling the dogma of today and driving better decisions for tomorrow.

Information Security's 2012 Security 7 winners:

Wade Baker: Information Security Decisions: From Dogma to Data

Krishnan Chellakari: Developing a BYOD Strategy: Weigh the Risks, Challenges and Benefits

Ron Knode: Security Warrior for Cloud Transparency

Doug Powell: GRC Management and Critical Infrastructure Protection

David Seidl: Security Risk Assessment Process a Team Effort at Notre Dame

John Streufert: FISMA Compliance and the Evolution to Continuous Monitoring

Preston Wood: The New Era of Big Data Security Analytics


This was first published in October 2012
