- They don't apply to me.
- They don't apply to my department.
- They are too burdensome to follow.
And it's not just companies that are guilty of this. Here are a few true security stories that prove my point:
Case 1: Simple passwords are simple passwords
This first anecdote involves a security officer at a top secret government facility. Suspecting that some employees were not abiding by the password rules for network login, the officer ran L0phtCrack, a password-auditing tool that can recover weak Windows passwords. Lo and behold, the chief of the facility was using "87654321" as his password. When the officer pointed out that this was not acceptable, the chief said, "It's such a simple password, nobody would guess I would use it." And when asked to change it, he said, "No, I like it and besides, I use it for all my accounts." Those included, as he later admitted, his personal AOL logon and his ATM PIN.
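A policy check doesn't have to be elaborate to catch a password like the chief's. Here's a minimal sketch in Python; the function name and the particular rules are illustrative only, not taken from any product or standard:

```python
def is_trivially_weak(password: str, min_length: int = 8) -> bool:
    """Return True if the password fails a few basic strength rules:
    too short, drawn from a single character class, or a monotonic
    run of characters such as "87654321" or "12345678"."""
    if len(password) < min_length:
        return True
    # A password made of only digits or only letters is easy to crack.
    if password.isdigit() or password.isalpha():
        return True
    # Detect ascending, descending, or repeated-character sequences.
    steps = {ord(b) - ord(a) for a, b in zip(password, password[1:])}
    if steps <= {1} or steps <= {-1} or steps <= {0}:
        return True
    return False

print(is_trivially_weak("87654321"))     # True: all digits, descending run
print(is_trivially_weak("tr0ub4dor&3"))  # False: mixed classes, no pattern
```

Even this crude filter would have rejected the chief's password at the point of entry, long before a tool like L0phtCrack had to find it.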
Case 2: Windows and Unix don't mix
One bad practice rampant in many "secure" government systems is using Windows to maintain Unix-based systems. Take a moment to picture that. Now remember back to the Love Bug virus and reports that its malicious email messages spread to and infected a bunch of top secret systems. How on earth could that have happened?
Well, all it takes is one Windows machine to get infected (maybe someone couldn't resist connecting their Windows management console to the Internet to check their email on AOL); after that, the Unix-based systems managed from that machine can store and relay infected files, acting as an infection vector even though the virus cannot execute on them. Keep in mind that the Unix machines are not scanning for Windows viruses like Love Bug. Now imagine that you are tasked with fighting an infection today, and the management console that you are given is -- you guessed it -- a Windows machine infected with a rootkit or Trojan malware.
Case 3: Incident response planning/panicking
Enough government-bashing; here's how a major U.S. financial corporation, a household name, displayed a rash of worst practices when responding to a security incident. Over one weekend, someone saw unknown processes and functions active in the Web farm, raising suspicion of a potential intrusion. An incident was declared, and a conference bridge was opened. But someone in IT must have asked everyone they could think of, short of the local fire department, to get on the call.
Forty to fifty people joined the conference, resulting in mass confusion. On-hold music was a constant as the call dragged on and on, and some participants put their phones aside to do other things. There was clearly no incident response plan by which to orchestrate the call, so everyone did their own analysis, most of which was incomplete, untimely or just plain wrong. One of the big consulting firms was already on site and on contract, and its team joined in the melee. After a full business day, with associated labor and other costs likely in excess of $250,000, there were still no real answers.
Finally this freaked-out financial giant brought in an independent consultant. He helped close the bridge and put everyone back to work. Then he invited a small, carefully selected group of people to participate in a (secured) teleconference. The team performed some real analysis, with real assignments and deadlines. After just two hours, the leaders of this calmer, more focused approach determined that the unknown processes and functions were normal, and the unexpected behavior was typical, though not previously observed.
Case 4: IDS management -- from a laptop
Taken from the wonderful world of giant companies that do everything from making planes to managing networks, here's a network security practice to avoid: managing the IDS and firewall from a wireless laptop. Yes, you read that right. The security auditor who witnessed this wouldn't have believed it if she hadn't seen it with her own eyes.
Security auditor: "Excuse me, what are you doing with that laptop?"
Employee: "I'm monitoring the firewall and the IDS."
Auditor: "Where's the Ethernet cable?"
Employee: "No cable. It's wireless. Cool, huh?"
Examination later showed that the laptop's security status had never been verified. The OS and its apps hadn't been patched in ages. The wireless network was unsecured and broadcast its name in the open. How does this happen, you ask? Simple: someone decided it would be cooler, and easier, to maintain the IDS and the firewall from a wireless laptop than to keep walking to the consoles in the server room.
Case 5: Can't fool the card reader? Take the stairs.
And speaking of walking, here's a bad practice that is more widespread than you might imagine: proximity card readers are often installed to control access, but without a lot of thought. Result: a dangerous false sense of security.
The following was observed at a company whose business relies on the uptime of its massive storage servers: proximity cards were installed to control the elevator so that only card holders could access the floor where the server room was located. But the corridors were long, with the elevator at one end and the stairs/fire escape at the other. Employees farthest from the elevator were accustomed to using the stairs because it was quicker, and they soon realized that it saved them from carrying their prox cards around. The same, unprotected stairs allowed access out to and in from the parking garage. Besides, anyone in the building could take the elevator to a floor above or below the server room and then walk down or up via the fire escape.
Case 6: Servers and sprinklers
Just to round things out, it's worth noting that the same company moved its server room into a vacant office building without changing the fire-suppression system. That's right: the sprinklers in the server room sprinkled water, not gas. And yes, they went off and yes, they did a lot of damage to a lot of servers. You can't get much worse than that.
In each of these incidents, I've described how someone who should know better had not followed well-established best practices. Trying to save time is the common factor behind these lapses. This highlights the need for security teams to explain the importance of security policy to their organization's employees, including why it exists, what problems it solves, what actions it requires or forbids, and who or what is responsible. Then mechanisms must be put in place to ensure the security policy is followed and that everyone understands the implications and consequences of non-compliance. This approach to implementing security will help users to think twice before taking the shortcuts that lead to avoidable security incidents.
About the author:
Michael Cobb, CISSP-ISSAP is the founder and managing director of Cobweb Applications Ltd., a consultancy that offers IT training and support in data security and analysis. He co-authored the book IIS Security and has written numerous technical articles for leading IT publications. Mike is the guest instructor for several SearchSecurity.com Security Schools and, as a SearchSecurity.com site expert, answers user questions on application security and platform security.
This was first published in April 2008