The world of information security moves at a rapid pace. New threats emerge, new vulnerabilities become a constant, and new technologies are built to protect our assets. However, many of the challenges we face, as well as the technologies we rely on, are only variations and adaptations of those we faced a decade ago.
I tend to be a packrat, as well as a bit of an information security history buff, so it was no surprise when I recently came across a stack of old Information Security magazine issues in my office, with dates ranging from 2001 to 2003. Taking a trip down memory lane, I flipped through several issues to see what the major topics and trends were at the time. The good news is that a number of the articles are still applicable in many ways today. Unfortunately, that's also the bad news. After reading some of them, I was a bit depressed about the apparent lack of progress we're making in information security. Here are several examples:
- "New Directions in Intrusion Detection" (August 2001): 85 percent of respondents to a reader poll had IDS installed, and most said that it was both important and reasonably good at protecting them. At the same time, the vast majority wanted more "intelligent attack analysis" and fewer false positives. The "new directions" that followed included "meta detection" or a powerful central console (today this is largely what drives the SIM market), using hardware appliances instead of software (today this is largely the norm), and network flow mirroring for IDS (our switch backplanes can handle this now).
- "Mastering Your Own Domain" (August 2001): This article describes DNSSEC and why the Domain Name System is a security issue. Brad Johnson of SystemExperts is quoted in the article saying, "if a hacker can trick a namespace manager, he can redirect traffic without the users ever knowing it." Sounds like 2008, when researcher Dan Kaminsky revealed his now famous cache-poisoning bug. Although implementing DNSSEC is a very large and difficult task, we have known DNS has issues for many years. Kaminsky's attack was simple --why didn't someone catch that?
- "Feeling Vulnerable?" (Feb 2002): The premise of this article is that sound vulnerability management practices can help cut down on alerts. The four-step process is basic: Inventory your systems, manage the flow of information (focus on what is relevant, in other words), assess the information (evaluate the risk), and plan for response. Sound familiar? It's the same fundamental vulnerability management process we're prescribing today! Most organizations aren't doing a good job of vulnerability management, either -- many "system inventories" I encounter while consulting are outdated spreadsheets. The problem keeps getting worse from there -- focusing on vulnerabilities and managing risk is difficult for systems you don't know about. Most of the focus in the "plan for response" stage consists of patching and configuration management, and this is an area that is still woefully lacking in many organizations.
- "Practice Safe Software Coding" (September 2001): Gary McGraw and John Viega describe ten principles that still stand as best practices today. Concepts like failing securely, compartmentalizing code, using well-known crypto algorithms, and scrubbing code to remove sensitive data are all as valid today as they were in 2001 -- so why are we still having such a hard time following these recommendations?
To be fair, many of the most pressing issues in information security are complex and difficult to solve, and we've had our fair share of victories over the last decade, as well. For example, we've done a lot to curb fast-spreading worms and spam, Virtual Private Networks (VPNs) are commonplace, and use of hard drive encryption is growing. However, the major themes are the same -- secure code and coding practices are rare, signature-based IDS still needs tuning, and critical protocols and services need intense scrutiny to find weaknesses before attackers do.
Beyond just the technical issues, our greatest failure is actually more of a people problem. We are not doing a good job of conveying the severity of the issues, convincing people to change behaviors, and building security into technology transparently. Until we find better ways to convince people of security's importance, we'll likely keep fighting the same battles for a long time to come.

Dave Shackleford is a consultant and certified SANS instructor. Send comments on this column to firstname.lastname@example.org.