
Opinion: Investigating the FBI's 'invalid' security survey

Should the FBI be in the security survey business? Columnist Ira Winkler says the bureau's 2005 FBI Computer Crime Survey not only lacks statistical validity, but could also create a false perception that security technology is ineffective.


Last week, a news headline on this very site read, "FBI says attacks succeeding despite security investments." Upon reading the article, I discovered that the FBI conducted its own study and found -- Quick! Stop the presses! -- many companies with firewalls still experience cybercrime.

Statistics 101
For those who skipped statistics classes, here are Ira's basic guidelines to keep in mind when evaluating surveys or studies. Please consider the following issues before you put faith in any research.
  • Is the study for marketing purposes?
  • How does the study recruit its subjects: blindly mailed envelopes, or personal contact with the proper individuals?
  • Does the study define how it knows whether the subjects have access to the right information? For example, is it asking a systems administrator how much importance the board of directors places on security?
  • What is the response rate? Especially with surveys, statisticians generally agree that the people who respond differ significantly from the people who don't. A low response rate therefore means a very high margin of error.
  • Does the study measure what it intends to prove?
  • Does the study ask for opinions? For example, is an Electronic Pearl Harbor imminent? Remember, opinions are like noses: everybody has one, and they are generally useless for predicting facts, unless you are forecasting a vote.
  • Do the study results rely on correlations, such as: if you have firewalls and you have incidents, then firewalls are useless? Correlation should never be mistaken for cause and effect.
  • Does the study list the statistics used, with confidence intervals and margins of error?
  • Is the researcher knowledgeable in what is being studied?
  • Is the researcher biased? For example, is a researcher claiming the study justifies the need for better funding of research? Is a lobbying group claiming that the study supports their cause?
  • Are study contributors relevant to the study, and what is their role?
  • Does the study discuss its limitations? All good studies make sure to include the limitations of the sampling method, response rate, analysis and conclusions, and frequently recommend topics for future research.
    I looked at the 2005 FBI Computer Crime Survey, which seemed pretty worthless at first glance, and then thought I should contact the FBI to see if there was more to it. After all, the survey lists prominent people like CERIAS's Gene Spafford and UC-Davis computer scientist Matt Bishop as contributors, so I thought there might have been some decent science behind it. Sadly, I came away from my conversation with the FBI feeling even worse.

    Let me give some personal background. I have had significant training in statistics, with a lot of focus on examining the statistical validity of studies. I know how statistics can be manipulated and misused. For example, did you know that the early CSI/FBI studies -- often referred to as "the most quoted studies in the field," but not related to this survey -- actually stated that their findings were not statistically valid? None of the later versions of the study were statistically valid either, and they have never mentioned how the small number of responses creates such a large margin of error. Of course, these invalid results are regularly quoted in sales presentations.
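    To make the margin-of-error point concrete, here is a minimal sketch in Python. The 84% figure is the virus-incident percentage quoted later in this column; the sample sizes are hypothetical, chosen only to show how the standard 95% margin of error behaves. Note that this formula assumes a simple random sample -- the nonresponse bias of a low-response-rate survey comes on top of it.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p
    with n responses, assuming a simple random sample. It does NOT
    account for nonresponse bias, which a low response rate adds."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical sample sizes for an 84% reported proportion.
for n in (50, 500, 5000):
    moe = margin_of_error(0.84, n)
    print(f"n={n}: +/-{moe * 100:.1f} percentage points")
```

    Even a few thousand responses only narrows the interval; it does nothing about the self-selection of who chose to answer.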

    Why is statistical validity important? Without it, the study design is fundamentally flawed. It means the results will be used and abused by reporters who lack the know-how to challenge the numbers, and exploited to perpetuate fear, uncertainty and doubt by salespeople and conference speakers.

    Also, while the report cites a list of supposedly esteemed advisors, the results were improperly portrayed. For example, the term "attacks" was broadly defined, and there was an implication that the technologies in use were meant to stop the broadly defined attacks.

    Worse yet, the survey unfairly portrays security technologies, and will likely be used to question security managers' requests for larger budgets to acquire new security technologies. The survey states that 84% of companies experienced virus incidents. At the same time, 98% of companies use antivirus software. Hence, it could be interpreted that antivirus software is useless. The survey, however, failed to ask whether respondents properly maintained their AV software. The spyware questions carry similar implications.

    Likewise, from a survey perspective, more than 50% of the respondents were from companies with less than $5 million in annual revenue. Can the results therefore be generalized to the Fortune 2000? Of course not. Statistically, it is possible that the results can't even be generalized to companies with less than $5 million in yearly revenue. Frankly, GE or GM alone would report more incidents than the entire study combined.

    Regardless of its validity, the survey is out there, and we have to know how to answer management concerns about the uselessness of security technology. The best way to do that is by demonstrating how much the current FBI study differs from past CSI/FBI research. For example, in the FBI study, 3% of respondents belong to FBI's InfraGard program -- the information-sharing and research effort supported by the IT industry -- while in the CSI study 32% of respondents belong to InfraGard. More relevant is that in the FBI study 23% of respondents had IDS/IPS systems, while in the CSI study 72% of companies used IDS systems and 35% of companies used IPS systems. These discrepancies demonstrate major problems in generalizing and accepting the survey results as hard facts, and most importantly, quoting the survey's results as proven facts. The fact that the current FBI survey has both a larger sampling rate and a larger number of responses than even past statistically invalid CSI/FBI studies is completely irrelevant.
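    One quick way to see that the two surveys cannot be describing the same population is a two-proportion z-test on the IDS figures. This is only an illustrative sketch: the percentages (23% and 72%) come from the column above, but the sample sizes are hypothetical placeholders, since only the percentages are quoted here.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """z-statistic for the difference between two independent
    sample proportions, using the pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 23% IDS/IPS adoption (FBI survey) vs. 72% IDS adoption (CSI survey).
# Sample sizes below are hypothetical, for illustration only.
z = two_proportion_z(0.23, 2000, 0.72, 700)
print(f"z = {z:.1f}")
```

    Any remotely plausible sample sizes yield a |z| far beyond conventional significance thresholds, which is exactly the point: these are different populations, so neither survey's percentages can be generalized to the other's.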

    To imply that security technology is useless, you have to understand whether the technology is properly implemented and relevant to the attacks in question. For example, a firewall (which only 90% of the survey participants claim to have) generally has no chance of stopping pornography, spyware, viruses, hardware theft, pirated music, financial fraud, telecom fraud and so on. Yet the implication is that firewalls don't stop computer crime, which is again not a valid conclusion to draw. Likewise, a poorly configured firewall will not stop even the attacks it is capable of stopping.

    I asked the FBI why it decided to do the survey, and I learned that different field offices were mandated to do so. It does not appear that they were given any resources to properly execute a study. They are relying upon random people whose names they came upon. While a couple of notable people advised the FBI, they apparently were used only for implied legitimacy, not to provide a scientific foundation and real legitimacy to the study itself.

    For example, security countermeasures that a knowledgeable practitioner (myself, at least) would recommend adding to the study include:

  • Security policy
  • Patch management
  • Outsourcing Web and security management services
  • At least 20 hours of security-specific training for people responsible for security

    Without those, you have no idea whether those "worthless" firewalls and security software suites are being implemented properly.

    Likewise, the analysis never mentions that only 23% of responding companies had an IDS or IPS, and only 60% of companies kept logs (and the survey never asked how frequently those logs are reviewed), so the number of incidents is likely severely underreported. There was also no breakdown of results by organization size or industry.

    Perhaps the greatest shortfall of the entire survey is that there are no questions related to how effective security technology has been. For example, why didn't the FBI ask how many incidents had been prevented, or detected and mitigated, as a result of security technology? If the average company only loses $15,000, security technology is clearly doing something right.

    While the media may be focusing on the apparent uselessness of security technology, a number of other completely valid headlines could have come out of the study:

  • Security technology keeps annual losses at $15,000
  • Only 2.3% of companies employ security personnel
  • Houston companies respond better to surveys than New York companies
  • Less than 50% of companies employ minimally adequate security technologies
  • Less than 25% of companies have a clue about security incidents that they may be experiencing
  • 92% of companies snub participation in FBI security study
  • 13% of companies have no security incidents and are shining examples to industry

    Yes, the last bullet accurately represents the study's data, but I seriously hope you realize it is only a sign that those companies either don't know where to look or have nothing of any value.


    So what should you take away from the survey, or tell your executives should they stumble upon it? First, assure management that it has no valid implications for your business; use this article for that if nothing else. Second, contact your congressmen and the FBI inspector general and request that if the bureau is going to mandate that local offices perform surveys -- or any kind of research -- it have experts doing them, so that it is not wasting money and creating problems for the security profession and the people we support. An improperly performed survey is much worse than no survey at all.

    To the FBI's credit, the agent in charge of the study is well meaning and acknowledged that there are likely some problems, and that the study is a work in progress. However, that doesn't negate the fact that it is grossly irresponsible to release a study that is so fundamentally flawed. This study was a gross waste of taxpayer dollars.

    Let's face it: surveys for the security industry are created primarily for marketing purposes. This one is doing a disservice both to the FBI and the IT community as a whole. It would have been valid for the FBI to try to identify ways of getting companies to report crimes, but that was not the focus of the survey. Individual field offices should be spending their time performing law enforcement, not wasting the time of the bureau's invaluable special agents.

    Ira Winkler is president of the Internet Security Advisors Group. He has over 20 years of experience in the intelligence and security fields, has worked for the National Security Agency and consults for a wide variety of Fortune 50 corporations. The author of Spies Among Us, he is an occasional contributor to Security Wire Perspectives.
