A simple Web page defacement shows the value of a thorough incident response plan.
Getting hacked is a visceral experience akin to taking a two-by-four to the head. At least, that's how I felt recently after learning via defacement mirror Zone-H that one of my Web pages had been tagged with digital graffiti.
Sure enough, our investigation found that the defaced server was running an unpatched PHP bulletin board. The hacker used a PHP exploit to leave a short, tame note marking his territory. While this was a relatively minor incident, it underscored the importance of having a prepared, intelligent incident response plan.
The adage is true: No one appreciates a policy until crunch time.
The IR plan dictated our immediate response, investigation and restoration process. With three-ring binder in hand, we went to work.
This was a fairly important server, so we had to secure and isolate it from the rest of the network. We put a rule on the perimeter firewall to drop all traffic between the server and the outside world, and then we shut down the switch port, isolating the server. Once that was completed, we paused to record the time of discovery, who discovered the hack and how it happened--all important steps for forensic analysis and possible prosecution.
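Even that record-keeping step benefits from a little automation, so nothing gets forgotten in the heat of the moment. Here is a minimal Python sketch of an append-only incident log; the field names and file path are my own illustration, not part of any formal standard:

```python
import json
from datetime import datetime, timezone

def record_incident(logfile, discovered_by, summary):
    """Append one timestamped incident entry to the log as a JSON line."""
    entry = {
        "discovered_at": datetime.now(timezone.utc).isoformat(),
        "discovered_by": discovered_by,
        "summary": summary,
    }
    with open(logfile, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry
```

Appending one self-describing line per event keeps the record tamper-evident in ordering and easy to hand to investigators later.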
The hacker didn't delete the logs, so it didn't take us long to find his multiple attempts to use a canned script against PHP. What we really wanted to know was whether the hacker had gained root access or used the box as a steppingstone for attacks on other systems. A CRC check against critical files told us they hadn't been tampered with, and a review of running processes turned up nothing suspicious in memory. We checked the bulletin board vendor's site, and, sure enough, the vulnerability had been announced less than a week before the attack, and a patch was available. It really didn't matter--a week was all the time the hacker needed.
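A file check like the one we ran can be sketched in a few lines of Python with the standard library's zlib.crc32. This is only an illustration of the technique; a real baseline would be built from a known-good system and stored offline so an intruder can't rewrite it along with the files:

```python
import zlib

def crc32_of(path):
    """Return the CRC-32 checksum of a file's contents."""
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            crc = zlib.crc32(chunk, crc)
    return crc & 0xFFFFFFFF

def verify(baseline):
    """Return the paths whose current checksum differs from the baseline.

    `baseline` maps each critical file's path to its known-good checksum.
    """
    return [p for p, crc in baseline.items() if crc32_of(p) != crc]
```

Note that CRC-32 detects accidental or casual modification, but it isn't cryptographically strong; a careful attacker can forge a matching checksum, which is one reason dedicated integrity tools use cryptographic hashes instead.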
We were stunned to find no evidence of the PHP attack in our IDS logs. Our IDS vendor told us that a signature would be available during the next scheduled signature update in about two weeks, which made for a three-week gap between the vulnerability's discovery and the IDS signature becoming available.
Our incident response policy dictates that any compromised machine must be rebuilt from the ground up, regardless of the incident's severity--no taking chances. The system was hardened according to generally accepted guidelines, and we scanned the box with a few vulnerability scanners to check our work. We also checked all similar machines for the vulnerable software.
We learned our lessons: Make sure your sysadmins keep up to date with patches (once a month may not be enough) and log reviews (once a week may not be enough). Security admins must know what software is on each server so they can watch for related vulnerabilities. Don't rely on any single IDS vendor, since signatures may arrive too late to be a first line of defense. Consider implementing Tripwire on public-facing servers to watch for attribute changes in critical files.
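The Tripwire idea can be illustrated with a short Python sketch. This is not Tripwire itself, just the underlying technique: snapshot each critical file's attributes and content digest, then compare a later snapshot against the baseline:

```python
import hashlib
import os

def snapshot(paths):
    """Record mode, owner, size and a SHA-256 digest for each file."""
    state = {}
    for p in paths:
        st = os.stat(p)
        with open(p, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        state[p] = (st.st_mode, st.st_uid, st.st_size, digest)
    return state

def changed(baseline, current):
    """Return the paths whose recorded attributes no longer match."""
    return [p for p in baseline if current.get(p) != baseline[p]]
```

Run the snapshot when the server is in a known-good state, store it off the box, and diff on a schedule; a defaced page shows up as a changed digest even if the timestamp is faked.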
Finally, keep your incident response policy current. When everyone knows what to do in an emergency, the recovery goes much more smoothly.