For decades, the process of scouting and evaluating prospective Major League Baseball players has been the domain of scouts, most of whom are former minor league players who have spent their lives in and around the game. They spend months every year traveling from town to town, watching dozens of high school and college players in hundreds of games over thousands of innings. They keep detailed logs of each player's strengths and weaknesses, note his tendencies in specific situations and talk to coaches, teammates and teachers about whether he has any off-field problems.
And in the end, the decisions on which players teams will draft and enrich with multimillion-dollar signing bonuses come down to nothing more scientific than pure instinct. Scouts recommend players based on how they look, how a pitcher's fastball sounds as it hits the catcher's mitt, how the ball looks coming off a hitter's bat or how a player looks in his uniform. That kid just looks like a player, they'll say.
This approach has changed radically in the last few years as Ivy League graduates have taken over the front offices of many Major League teams, applying standard scientific and statistical analysis methods to the evaluation of players, rather than relying on fuzzy standards like look and feel.
Until very recently, the same approach followed by baseball lifers has been the go-to method for information security practitioners, as well. Security has been virtually the only discipline within IT to be stubbornly resistant to empirical measurement and analysis. The practice of security has been focused almost exclusively on technology and how the next new box can prevent the latest new attack. Security vendors have spent the last 15 years trying to convince IT managers and security practitioners that their product is the one that finally will solve all of the customer's security problems, but please don't look over here behind the curtain, sir. As a result, the customers can end up spending tens of thousands of dollars on products about which they know very little.
But now a small, but rapidly expanding, group of security experts, researchers, analysts and practitioners is working to change the way enterprises, governments and other organizations approach, apply and measure security. These rebels subscribe to the belief that the value and efficacy of security can—and should—be measured. And, more importantly, they believe that the entire practice of security should be approached scientifically, bringing in methods and thinking from other disciplines, including psychology and economics.
One of the chief proponents of this line of thinking is Adam Shostack, co-author, with Andrew Stewart, of an excellent new book on the topic, "The New School of Information Security." Shostack believes, as do many others, that one of the keys to improving the state of security is developing more and better data about security, including attacks and data breaches.
"We have this utter dearth of data and we've become so used to this that we don't even really realize it anymore," said Shostack. "It's understandable because you want to put attacks behind you, so people have developed this fear of sharing anything and that holds us back. We have to recognize that the data isn't being shared as widely as we'd like. Without data, we can't do the analysis. We have to start analyzing the reasons for that."
In their book, Shostack and Stewart expand upon that point, saying that the lack of data on security is preventing security practitioners from putting better analysis methods into practice and hampering their attempts to make informed decisions about strategy, architecture and policy.
"The search for objective data on information security is at the heart of the philosophy of the New School," they write. "Without objective data, we are unable to test our hypotheses. Since there is a drought of objective data today, how can we know that the conventional wisdom is the right thing to do? That is not to say that the conventional wisdom is necessarily incorrect, but as professionals we find it profoundly unsatisfying to not know the answer either way. Being unable to test hypotheses fundamentally inhibits our ability to improve."
That search for evidence, the kind of objective data that is available in virtually every other scientific field, is not an entirely new idea, but it's one that is gaining a lot of currency at the moment.
Dan Geer, who was trained as a biostatistician but has spent most of his professional life in the security industry, has for years advocated the use of hard data and cold analysis in the practice of security. He has come to the conclusion that the proliferation of security products, combined with the lack of data from customers about security incidents, has created an untenable situation.
"We've put so many products into these systems, that the complexity of the sum of the parts is part of the problem itself," he said during a speech at the Source Boston conference earlier this year.
The idea of applying the scientific method to the practice of security has been appearing in other places, as well, in various shapes and forms. Gary McGraw, chief technology officer of Cigital Inc., who has a dual Ph.D. in cognitive science and computer science, has been working in this vein for more than a decade, applying critical thinking and risk-management strategies to the problems of software security. He believes that the current shift in thinking is a logical step in the evolution of the security field.
"I really believe that the research community spearheaded it. I mean the guys in academia, like Avi Rubin, Ed Felten and those guys," McGraw said. "It was right around the mid-1990s, when we were all working on the first problems with the Web. It was the wild west, and there was plenty of room for snake oil then and it's taken this long for people to get past that. Plus, it takes about 10 years for ideas to make their way out of academia. We learned that there was no such thing as perfection and that risk management was the way that real professionals went about it. Those ideas aren't earth-shattering, but for the average IT guy or sysadmin who doesn't have access to executives, they're new ideas."
Andrew Jaquith, an analyst with the Yankee Group and former security consultant with @stake, has been studying security metrics for years and runs the ultra-informative Security Metrics mailing list and a companion conference each summer. But as valuable as this work is, it is hampered not just by the unwillingness of enterprises and other organizations to share data on attacks, ROI and other measurements, but also by the uniqueness of each organization. Measurements and strategies that are valuable in one company may be completely useless in others.
"Metrics is a great idea, but nobody is making any breakthroughs yet," McGraw said. "Relative metrics are useful in specific organizations, but that's the metrics conundrum: they're most useful when they're context-sensitive."
How and when all of this will affect the daily work of IT security teams remains to be seen. But some of the core ideas, especially the emphasis on risk management, have begun to work their way into a number of organizations, primarily in the financial services sector. As the strategies and techniques are tested and refined over time, they will work their way out into the broader IT community, which can only be a good thing.
"Every discipline has challenges, but security professionals are not only opposed to trying some of these methods, the way we are engaged with the world inhibits it," Shostack said. "The secrecy holds us back. That has to change."