
Data Breach Preparation and Response: Breaches are Certain, Impact is Not

In this excerpt from chapter five of Data Breach Preparation and Response: Breaches are Certain, Impact is Not, author Kevvie Fowler discusses the key steps to containing a data breach.


The following is an excerpt from Data Breach Preparation and Response: Breaches are Certain, Impact is Not by author Kevvie Fowler and published by Syngress. This section from chapter five explores the methods of containment after a data breach.

BREACH CONTAINMENT

The concept of the Window of Compromise was first introduced in the "Introduction" section of this chapter and will serve as the primary focus of the remainder of the chapter. Again, this window spans the time from the initial Breach, when a bad guy gains access to the target environment, to the victim containing the Breach and preventing further external communication with the attackers. To visualize this concept, think of a bank robber: if the measure of a successful bank robbery is getting away with the money and the bank robber gets trapped in the bank, he fails. Same concept here - if the attackers, their malware, and the malicious tools they brought with them are still present, yet their mechanism for communicating with those tools and/or exfiltrating data is disrupted or terminated, then the Breach has been successfully contained. It is important that you understand this concept and how it differs from the Window of Intrusion.

While the Window of Compromise can be closed while the attackers and their malicious tools are still present, the Window of Intrusion cannot. The Window of Intrusion is the time frame from the initial Breach to the complete eradication of the attacker. Understand that "closing the window" may (and in all likelihood will) take the form of a temporary fix that focuses on preventing further external communication; it will not provide a long-term solution. To completely remove this infiltration vector, the associated vulnerabilities need to be identified and remediated, and the countermeasures tested to ensure that the attackers cannot simply reenter the target using the same vector. At the conclusion of this window, the bad guys are gone, their tools are gone, the malware has been removed, and you are ready to resume business. These are two different windows with two different criteria which communicate two different things. It is important for you as the investigator to understand this and be able to explain it adequately to a nontechnical audience. The example of the bank robbery is my favorite, and one I have used in offices, boardrooms, and courtrooms many times.

  1. Window of compromise
    1. Bank robber has broken into the bank
    2. Stolen the money
    3. Failed to make a getaway
  2. Window of intrusion
    1. Bank robber has broken into the bank
    2. Stolen the money
    3. Arrested by the police, he and his entire set of bank robbery tools have been hauled away, and possibly (under the best of circumstances) some or all of the money has been recovered

Now that you hopefully have a better understanding of the various windows present during this stage of the Breach, it's important to understand what "containment" does and doesn't mean.

What Are You Containing?

For an attacker to have gained access to the target environment, he had to have taken advantage of a vulnerability or misconfiguration that provided him with that access. Potential attack vectors include an unpatched server, a vulnerable plugin on a website, an improperly configured firewall, a default password, or human error. Whatever the case may be, something let the bad guys in and has been identified during the course of the investigation. This is where the focus of the initial containment steps should be.

It is a good practice to map out what you believe to be the Breach Breakdown in some sort of visual manner so that you can more clearly define your working hypothesis. You should also include a timeline of events that represents the chronological progression of the attack. This will be of particular interest to executives and general counsel as they prepare statements regarding what happened and when. In addition, you should maintain a partner list of the impacted systems represented in the diagram. This list should include additional system details such as IP address, hostname, OS, system function (i.e., webserver, database, workstation), and method of compromise. The diagram should depict which system the attacker initially used to gain access to the target environment, the systems that were used as he moved from the point of entry to the ultimate location of the targeted data, and the systems involved in harvesting and exfiltrating that data. In some instances, this process will be very short, as the Breach only involved a small number of systems, while in others you may have very large numbers of them. This is not entirely unlike Peter Chen's Entity Relationship Model, or the string models used on police television shows. Whatever the case, you will benefit greatly from maintaining this diagram and partner list, as together they provide a mechanism for you to track which systems were involved in the incident, and how. Trust me, while this may sound somewhat banal, it works (Fig. 5.2).

FIG 5.2
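The partner list described above lends itself to one structured record per impacted system. As a hedged illustration (the field names and data here are hypothetical, not from the book), it might be sketched in Python as:

```python
# Minimal sketch of a Breach Breakdown partner list: one record per
# impacted system from the diagram. All names and values are illustrative.
from dataclasses import dataclass

@dataclass
class ImpactedSystem:
    hostname: str
    ip: str
    os: str
    function: str            # e.g. webserver, database, workstation
    compromise_method: str   # how the attacker reached this system
    role_in_breach: str      # entry point, lateral hop, staging, exfil

inventory = [
    ImpactedSystem("web01", "203.0.113.10", "Ubuntu 20.04", "webserver",
                   "vulnerable plugin", "entry point"),
    ImpactedSystem("db01", "10.0.5.21", "Windows Server 2019", "database",
                   "stolen credentials", "data harvesting"),
]

# Quick summary view to pair with the diagram and timeline.
for s in inventory:
    print(f"{s.hostname} ({s.ip}) - {s.function} - {s.role_in_breach}")
```

Keeping the list in a structured form like this makes it trivial to sort, count, and cross-reference systems as the working hypothesis evolves.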

To effectively cut off the attacker's infiltration and exfiltration vectors (how the bad guys got in, and how they either got data out or are maintaining communications with the target), it is critical that you fully understand how the Breach took place (hence the diagram). Many organizations that have been Breached are so overwhelmed by the gravity of what just happened that they panic (understandably so) and either lose focus or fail to gain it entirely. Their actions are erratic and disjointed instead of coordinated and tactical. Senior management or the executives will want to fix every possible vulnerability and attack vector under the assumption that doing so will help them save face with the Board, the customers, and the court of public opinion (movement for the sake of moving). However, the reality of the situation is that by not taking the time to formulate a logical response strategy based on the nature of the vulnerabilities that were present during the attack, they themselves become the primary obstacle preventing them from achieving the very thing they are trying to accomplish. It's also important to mention that if the Breach ends up in litigation, they will want to be able to establish a defensible position of reasonableness, the foundation of which will be predicated on both how they planned to respond and how they actually responded to the Breach.

This reaction is completely normal and to be expected, especially considering the current post-Breach, litigation-infested landscape. However, this sort of "knee jerk" reaction creates the tendency to address symptoms rather than the root cause of the issues. It's haphazard, ineffective, and should be strongly discouraged inasmuch as you are able to influence your organization. This is where being a seasoned investigator can be of tremendous value, having likely watched this play out hundreds or even thousands of times. You need to be that voice of reason and confidence. Remind them to take a deep breath, calm down, and proceed with forethought and logic.

The message that needs to be communicated is that while there may be multiple issues that have been identified, only a very small subset (usually just one or two) needs to be addressed immediately in order to contain the Breach. The focus of remediation efforts should be on these specific vulnerabilities, nothing more. That's not to say that fixing the other identified issues is not important; quite the contrary - it is vital that all of the vulnerabilities get addressed or they run the risk of being right back in the same mess in a couple of months. However, under the present circumstances, a more myopic approach is what's needed to effectively contain the incident. Additional vulnerabilities that were not part of the Breach should be noted, triaged based on criticality, prioritized, and put on a roadmap to be addressed later (so long as they are actually addressed). Many organizations actually do this during a Breach, and while I'm sure they have every intention of following through later, they become distracted by some other business driver and never actually complete the process. These are the ones that end up in the news several months later announcing that they have been Breached again.

Remediating Your Exposures

Once a thorough understanding of the components of the Breach Breakdown, the systems that were involved, and the vulnerabilities that were exploited has been established, remediation steps can begin. These steps should be prioritized based on the systems' positioning within the network, the criticality of the vulnerability, and the complexity of the fix. Externally facing systems should be addressed first, since they have the highest likelihood of being compromised again, followed by internal systems in and around the location of the targeted data. The diagram you presumably created, which includes all of the systems involved in the Breach, is a great mechanism for tracking system vulnerabilities and determining the order of remediation. The same is true for deploying new countermeasures such as firewalls, encryption, or user-based access control mechanisms.
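The prioritization just described (external exposure first, then vulnerability criticality, then fix complexity) can be sketched as a simple sort. This is a hedged illustration only; the scoring fields, values, and hostnames are assumptions, not the author's method:

```python
# Illustrative remediation queue: externally facing systems first,
# then higher-criticality vulnerabilities, then simpler fixes first.
findings = [
    {"host": "db01",  "external": False, "criticality": 9, "fix_complexity": 2},
    {"host": "web01", "external": True,  "criticality": 7, "fix_complexity": 3},
    {"host": "ws14",  "external": False, "criticality": 4, "fix_complexity": 1},
]

# Sort key: external exposure dominates (False sorts before True once
# negated), then higher criticality, then lower fix complexity.
ordered = sorted(
    findings,
    key=lambda f: (not f["external"], -f["criticality"], f["fix_complexity"]),
)

for f in ordered:
    print(f["host"])
```

Even a toy ranking like this forces the team to state, explicitly, why one fix lands before another, which is exactly the discipline the knee-jerk "fix everything now" reaction lacks.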


There is a common misconception that once the vulnerabilities on the affected systems have been addressed, they are now "safe" from attackers. Well, how do you know that the fixes and countermeasures that have been deployed are having the desired impact? The short answer is, "you don't." In many of the cases I have worked, when I bring this exact point up, I get the "deer in the headlights" look. So, I happily (it's sort of fun at this point in my career) repeat my question, this time a bit more slowly, emphasizing every few words: "How do you know that the steps you have taken to secure the affected systems are having the desired impact?" The overwhelming majority of the time, the answer has been, "We don't."

Imagine the logic in that. There is a significant data Breach that is going to be expensive to investigate, will have a presently unquantifiable negative impact on the brand and company valuation, including the stock price, and may very well end up being litigated for the next few years. You figure out how the Breach took place and focus all available resources on remediating the vulnerabilities that were exploited by the attackers. But you don't see the need to get an external team of experts to VALIDATE that those fixes are actually doing what you think they are going to do. Why? I said they would work, and they will! That's good enough, right? Yet as crazy as that sounds, it is very much a reality in organizations all over the world.

Funny how in so many other aspects of life this sort of validation is simply assumed, but in computer security it's like a pink fluffy unicorn dancing on a rainbow. Would you let your plumber fix a water leak without testing it to make sure the pipe is no longer leaking? Would you expect your mechanic to fix the air conditioning on your car without turning it on to make sure it's blowing cold air? Would you want your doctor to remove a cast from a broken bone without taking an X-ray to determine whether the bone had healed properly? No, no, and no. So why in the world, after suffering an expensive, damaging data Breach, would you not test the remediation steps to make sure they are functioning properly? Hint: you shouldn't skip it.

For this process, I recommend retaining external penetration testing or "Red Team" services, which we discuss further in Chapter 8, to ensure that the specific vulnerabilities exploited by the attackers have been remediated, and to confirm that any countermeasures that have been deployed are having their intended impact. There are a few reasons for my recommendation to use an external team rather than internal resources. One, they are experts in identifying and exploiting system, configuration, and application weaknesses. They will look at your systems through the eyes of an attacker and provide you with a candid view of your security posture; something you may not be willing or able to do. Two, they are not beholden to anyone within your organization and can therefore remain unbiased. Political pressure will come from the executives or the IT manager (likely both) to provide a "clean bill of health" so that business can resume. Pressure can also surface from individuals within the organization whose responsibility was to maintain the security of the impacted systems, and who may very well be in jeopardy of losing that job. These pressures and the desire to get back to normal operations can lead to premature or imprecise decision making that could very well do more harm to the organization than good.

An external tester who is not beholden to anyone within the organization is free (for the most part, albeit not entirely) from these pressures. Three, they can also help identify vulnerabilities that you may not have known about and help prioritize them based on their exploitability. Not all vulnerabilities are exploitable in the context of the current environment and its security controls. Understanding the impact of known vulnerabilities can help direct your remediation priorities. In addition, the likelihood of exploitation, given a vulnerability's complexity, knowledge requirements, or vector of attack, also plays into this prioritization.

Are There More of You?

In many cases, malware and attacker tools play a significant role in a data Breach. Once the attacker has achieved initial access, these tools are utilized for everything from reconnaissance, privilege escalation, and lateral movement to data harvesting and exfiltration. The good news about the presence of these utilities (if there has to be a silver lining) is that most of them have a signature and leave evidence of their existence or execution. There are, however, some more advanced malware packages that live entirely in memory, and attack techniques that leave literally zero trace. In those cases, what is outlined in this section would be of diminished value.

The hackers know they will probably at some point lose contact with the installed malware through their primary communication method. As such, there are typically one or more alternate "backup" command, control, and communication mechanisms present. Some of the less technically advanced mechanisms can be as simple as installing a secondary remote access application (such as Bomgar, LogMeIn, VNC, or pcAnywhere), while more advanced ones use "phone home" activation triggers that fire after a period of inactivity on the primary method. I mention this because Breach containment can't focus on one tool, as many may exist. True containment and remediation can't occur until all potential infiltration and exfiltration vectors are analyzed and understood.


At one time, taking an MD5 hash of a binary and searching for a match within a corpus of evidence such as a forensic image was considered "advanced analysis." However, as forensic methodologies, technologies, and utilities have evolved to keep pace with attack vectors, this sort of activity now falls into the category of "basic analysis." Today we have hashing approaches that, unlike simple hash comparisons which provide a strictly binary conclusion (the file is either a 100% match or a 0% match), provide the percentage to which two files are similar; this is known as Context Triggered Piecewise Hashing or Fuzzy Hashing.2 Using Jesse Kornblum's SSDEEP utility, you can compute fuzzy hash values for files, set a target percentage (e.g., show all files that are 80% similar), and search other systems for potential matches. This is exponentially more effective when searching for files that may not be an exact match for the known sample file.
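To illustrate the contrast between binary and percentage matching, here is a minimal stdlib-only Python sketch. The real tool for this job is Kornblum's SSDEEP; `difflib`'s similarity ratio merely stands in for a fuzzy hash score, and the byte strings are fabricated for illustration:

```python
# Contrast: an exact hash gives a yes/no verdict, while a similarity
# score (like ssdeep's 0-100 output) surfaces near-matches.
import hashlib
from difflib import SequenceMatcher

original = b"MZ header|unpack stub|payload v1|key=alpha"
repacked = b"MZ header|unpack stub|payload v2|key=alpha"  # one byte changed

# Exact hashing: 100% match or 0% match, nothing in between.
exact_match = hashlib.md5(original).hexdigest() == hashlib.md5(repacked).hexdigest()
print("MD5 match:", exact_match)  # False: any changed byte breaks the match

# Percentage similarity: a stand-in for a fuzzy hash comparison. A real
# hunt would keep files scoring above a chosen threshold, e.g. 80%.
score = int(SequenceMatcher(None, original, repacked).ratio() * 100)
print("similarity:", score)
```

The point is not the math but the workflow: with a percentage score you can sweep a corpus for "close enough" variants that an MD5 search would silently miss.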

However, many types of malware are modified using a runtime compression program, yielding a file that is known as "packed." Piecewise hashing keys on common sections in executable binaries such as malware (in this type of situation). Malware that has been repacked (re-encrypted or recompressed using the same or another packer) will of course differ from the sample file. While this is still an extremely valuable investigative tool, like any other tool in your toolbox, don't rely on it 100%.

There are also several mechanisms for tracking indicators of compromise, such as OpenIOC, STIX/TAXII, CybOX, and CRITs, which we look at further in Chapter 8. The important thing to remember here is that incidents have the potential, and even the likelihood, of being larger than they initially appear. Be flexible with your working hypothesis and make sure that the evidence remains the primary driver rather than the other way around. Many investigators get into the bad habit of allowing their theory to drive which evidence they include, excluding anything that does not fit it. Conducting a comprehensive investigation is not about being right or wrong the first time. It's perfectly normal to adjust your working hypothesis multiple times prior to completing the investigation. Your job is to be thorough and tell the full story of the Breach, so check your ego at the door.
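The core idea behind these IOC-tracking mechanisms is to reduce indicators to machine-checkable facts (hashes, filenames, addresses) and sweep other systems for matches. The sketch below is a hedged illustration of that idea only; the indicator values and the helper function are hypothetical and not part of OpenIOC or STIX:

```python
# Toy IOC sweep: check a file's hash and name against known indicators.
import hashlib

iocs = {
    "md5": {"5d41402abc4b2a76b9719d911017c592"},  # MD5 of b"hello", for demo
    "filename": {"svch0st.exe", "rundl132.dll"},  # typosquatted names
}

def check_file(name: str, content: bytes) -> list:
    """Return which indicator types this file matches."""
    hits = []
    if hashlib.md5(content).hexdigest() in iocs["md5"]:
        hits.append("md5")
    if name in iocs["filename"]:
        hits.append("filename")
    return hits

print(check_file("svch0st.exe", b"hello"))  # → ['md5', 'filename']
```

Real IOC formats add structure (context, confidence, sightings, sharing semantics), but the matching step on each swept host reduces to comparisons like these.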

About the author:

Kevvie Fowler is a Partner and National Cyber Response Leader for KPMG Canada and has more than 19 years of IT security and forensics experience. Kevvie assists clients in identifying and protecting critical data and in proactively preparing for, responding to, and recovering from incidents in a manner that minimizes impact and interruption to their business. Kevvie is a globally recognized cyber security and forensics expert who, in addition to authoring Data Breach Preparation and Response, is the author of SQL Server Forensic Analysis and a contributing author to several security and forensics books. He is an instructor who trains law enforcement agencies on cyber forensic and response practices. His cyber forensics research has been incorporated into formal course curricula within industry and academic institutions, including ISC2 and the University of Abertay Dundee. Credited with advancing the field of digital forensic science, Kevvie is a SANS lethal forensicator and sits on the SANS Advisory Board, where he guides the direction of emerging security and forensics research. A sought-after speaker, Kevvie has engaged executive and technical audiences at leading conferences and events including Black Hat, SECTOR, OWASP, and the HTCIA, and is a resource to the media, with on-air and print features in leading television, news, and industry publications.


Reprinted with permission from Elsevier/Syngress, Copyright ©2016

This was last published in November 2016
