In a few weeks, you'll be hearing about the results of this year's CSI/FBI Computer Crime and Security Survey. Fact: 82.397 percent of you will use this survey in your request for infosecurity funding over the next 12 months.
Actually, I just made up this statistic. Even so, it's only slightly less reliable than some of the CSI stats you'll be citing when trying to convince your CIO to spend more money on security.
Every year, the Computer Security Institute (CSI) creates this survey, with a few token questions submitted by the FBI. The survey questionnaire is mailed to several thousand people whom CSI believes are responsible for their organization's information security. To encourage a high response rate, those completing the survey are granted anonymity. In recent years, about one in six completed and returned the questionnaire. The answers are counted, graphed and published in a report that receives tremendous media attention.
Now let me explain why this process is troubling.
There's no evidence of statistical or scholarly rigor. The poorly selected pool of subjects includes unqualified respondents, such as consultants and vendors. The survey's anonymity is a marvelous device that simultaneously excuses the lack of statistical rigor and prevents any challenges to the report. One year's survey explains, "The anonymous responses provide a fascinating glimpse into the unspoken realities of cybercrime," which is then followed by several brief incident descriptions, such as, "A high-tech company with a gross income of $500 million reports a $500,000 loss due to system penetration."
Just what is the reader supposed to glean from this factlet? That every high-tech firm can expect at least one hacking incident that costs 0.1 percent of its annual revenue? Any college professor would fail a student who performed such a study and turned in results that lacked elementary statistical analysis, included no controls over the survey population, omitted references to other studies and offered no supportable explanations for the numbers and "trends" reported.
The creators, respondents and recipients of the study have not-so-hidden agendas. Survey respondents are asked to provide unsubstantiated estimates on the cost of computer crime, the results of which are processed and returned to those same people for use in support of their own agendas. See anything wrong with this process? The survey is purportedly published as a "public service," but a more realistic explanation is that it's a marketing tool for CSI, the FBI, enterprise security departments and infosec vendors. The FBI needs cybercriminals to justify the existence of its sexy computer crime units. Infosec officers want to increase their staff and responsibilities. Vendors use the statistics to drive sales. If you don't believe me, try doing a Web search on "CSI/FBI" to see how many security consultants and vendors cite this report as evidence of the need for their service or product.
When analyzing the results, CSI editorial director Richard Power usually explains that the numbers are potentially misleading and that the number of respondents wasn't what CSI would have liked. He further complains about the misuse of the survey's data. These are crocodile tears: It's precisely the ability to misuse the survey results that makes it so popular.
CSI isn't the only organization to fall into this trap. Reader surveys published by Information Security and InformationWeek, among others, suffer from similar problems. But CSI's survey has a much wider circulation, and is therefore that much more dangerous.
It's clear that an accurate financial picture of infosec losses would greatly strengthen our countermeasures. Unfortunately, the problem of quantifying security incidents isn't solved by a bunch of anonymous, biased respondents who provide "estimates" of their firms' losses to security breaches and computer crime. I don't know about you, but when I'm asked to estimate the amount of money I save my firm, I have a tendency to overestimate.
The CSI survey is popular because people are willing to accept it as evidence of a growing problem. But which is worse: Basing our decisions on a lack of information, or on misleading and often erroneous information?
Infosec practitioners claim they get no respect. We're the Rodney Dangerfields of the IT world. But it's no wonder that our CIOs and CEOs distrust us when we keep bringing them exaggerated tales about potential risks. We keep crying wolf, and we should be thankful that they listen to us at all.
About the Author: Columnist Jay Heiser works for a large European bank in London. His most recent book is Computer Forensics: Incident Response Essentials (Addison-Wesley, 2001).