It’s natural for members of a technology-centric industry to see technology as the solution to security problems.
In a field dominated by engineers, one can often perceive engineering methods as the answer to threats that try to steal, manipulate, or degrade information resources. Unfortunately, threats do not behave like forces of nature. No equation can govern a threat’s behavior, and threats routinely innovate in order to evade and disrupt defensive measures.
Security and IT managers are slowly realizing that technology-centric defense is too easily defeated by threats of all types. Some modern defensive tools and techniques are effective against a subset of threats, but security pros in the trenches consider the “self-defending network” concept to be marketing at best and counter-productive at worst. If technology and engineering aren’t the answer to security’s woes, then what is?
To best counter targeted attacks, one must conduct counter-threat operations (CTOps). In other words, defenders must actively hunt intruders in their enterprise. These intruders can take the form of external threats who maintain persistence or internal threats who abuse their privileges. Rather than hoping defenses will repel invaders, or that breaches will be caught by passive alerting mechanisms, CTOps practitioners recognize that defeating intruders requires actively detecting and responding to them. CTOps experts then feed the lessons learned from finding and removing attackers into the software development lifecycle (SDL) and configuration and IT management processes to reduce the likelihood of future incidents.
CTOps certainly requires application of engineering and technology, but the focus remains on people. People who know how to detect and respond to intrusions are the key to fighting modern threats. The purpose of this article is to define what those people should do, as well as how you can ensure your security staff is meeting the challenge posed by modern threats.
An emphasis on CTOps should not come at the expense of measures that try to remove vulnerabilities from the enterprise. Efforts to improve software security through better coding, improved configuration, and sound business logic are the preferred way to build a sound foundation for enterprise computing. CTOps practitioners are usually very supportive of efforts to rid the enterprise of weak applications, because being a hard target frustrates intruders and reduces the overall number of intrusions that defenders must detect and handle. Therefore, CTOps encourages software security efforts that build security into applications.
JUSTIFYING COUNTER-THREAT OPERATIONS
What does it mean to conduct CTOps? I recommend either building or repositioning the enterprise computer incident response team (CIRT) as the home for CTOps. If the organization lacks a CIRT, or the CIRT doesn’t currently conduct CTOps, the first requirement is convincing management that CTOps is necessary.
No single argument for conducting CTOps or building a CIRT is likely to resonate with management on its own. Rather than relying on a single argument, CIRT builders may find one or more of the following “13 C’s” to be helpful. Incorporating these justifications into a discussion may help convince those who have budgetary and organizational authority to facilitate construction of a CTOps-capable CIRT.
1. Crisis. When the enterprise suffers a devastating security incident, managers are usually ready to take action. Although this is the worst way to justify a program because it comes after an incident, it is often very effective.
2. Compliance. Compliance requirements may contain the language necessary to construct a team. Beware applying resources in such a manner that the original CTOps mission is lost. For example, creating a team that does nothing more than monitor for configuration changes will not result in finding advanced or even moderately skilled intruders.
3. Competitiveness. My blog post “Forget ROI and Risk. Consider Competitive Advantage” explains that preserving or enhancing competitive advantage often resonates with business people. Few people responsible for a profit and loss operation in an organization want to “lose the game.” If these decision makers frame security in terms of competition, they may understand the importance of CTOps and CIRTs.
4. Comparison. If your company security team is 10% the size of the average peer organization, it's not going to look good when you have a breach and have to justify your decisions. The blame for under-resourcing the CIRT will likely rest with the manager to whom the CIRT reports, so convince him or her to fund the operation to deflect possible future criticism.
5. Cost. It's likely that breaches are more expensive than defensive measures, but this can be difficult to capture empirically. In regulated industries one may be able to estimate the fines that could be levied against a breach victim, along with the costs of funding credit monitoring services and associated legal and human resource expenses. For example, the U.S. Department of Defense recovered $1.3 million of a $5.4 million Pentagon contract from Apptis Inc. Investigators claimed Apptis “provided inadequate computer security” due to a breach in a subcontractor’s system. (“Contractor Returns Money to Pentagon,” Washington Times, July 25, 2009.)
6. Customers. It seems rare to find customers abandoning a company after a breach; people still shop at TJX brands. Still, you may find traction here. Compliance is supposed to protect customers but it often is insufficient.
7. Constituents. I use this term to apply to internal parties served by a central CIRT. Large companies often provide services to other business units, so a cross-company constituency may ask for help fighting intruders.
8. Controllership. A well-governed organization can often point to a centralized counter-threat center of excellence, such as a CTOps-practicing CIRT.
9. Conservation. This is a play on "green IT." What has a lower carbon footprint: 1) flying consultants all over the world to handle incidents, or 2) handling them remotely by moving data, not people? A properly resourced and equipped CIRT can rely on instrumentation that accesses data needed to analyze intrusions, rather than sending people into the field to fight fires. See my blog post “Green IT” for more details.
10. Consolidation or Centralization. These themes are likely to enable specialization, more effective internal resource allocation, and improved defenses.
11. Confidence. Confidence applies to all parties involved. Without the ability to detect and respond to intrusions, can you trust your data?
12. Counting. Developing metrics is crucial for justifying a CIRT’s role. Managers often want to know how regularly the enterprise suffers compromises, and how quickly the CIRT can detect and respond to intrusions.
13. [Securities and Exchange] Commission. A growing number of public security voices (for example, Melissa Hathaway) advocate disclosing significant security breaches in the 10-K forms required of publicly traded companies by the SEC. Many companies already report serious intrusions, as noted in my blog post “Publicly Traded Companies Read This Blog”.
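The "counting" theme above translates directly into detection and response metrics. As a minimal sketch, the two numbers managers most often ask for are mean time to detect and mean time to respond; the field names and sample data below are illustrative, not drawn from any real incident record format:

```python
from datetime import datetime

# Illustrative incident records: when the compromise occurred, when the
# CIRT detected it, and when the CIRT contained it.
incidents = [
    {"compromised": datetime(2009, 3, 1, 9, 0),
     "detected":    datetime(2009, 3, 1, 15, 0),
     "contained":   datetime(2009, 3, 2, 9, 0)},
    {"compromised": datetime(2009, 4, 10, 8, 0),
     "detected":    datetime(2009, 4, 10, 10, 0),
     "contained":   datetime(2009, 4, 10, 18, 0)},
]

def mean_hours(incidents, start, end):
    """Average elapsed hours between two incident milestones."""
    deltas = [(i[end] - i[start]).total_seconds() / 3600 for i in incidents]
    return sum(deltas) / len(deltas)

print(mean_hours(incidents, "compromised", "detected"))  # mean time to detect
print(mean_hours(incidents, "detected", "contained"))    # mean time to respond
```

Tracking these two figures over time gives management a concrete measure of whether the CIRT is improving.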
SIZING AND ORGANIZING THE CIRT
Once management believes a CIRT is necessary to conduct CTOps, the next questions involve the size of the CIRT and its structure. To help answer these questions, I polled 12 organizations with employee counts ranging from the low thousands to the mid hundreds of thousands. I asked each organization to count the number of people they employed to detect and respond to intrusions. Based on this survey, I determined that the average number of detection and response roles for these 12 organizations was five per 10,000 employees. In other words, if your company consists of 60,000 employees, you would likely have a CIRT with 30 people.
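The staffing arithmetic is simple enough to capture in a few lines. This sketch merely encodes the survey average described above; the ratio is an empirical observation, not a formal standard:

```python
def cirt_size(employees, ratio_per_10k=5):
    """Estimate CIRT head count using the survey's 5-per-10,000-employee average."""
    return round(employees * ratio_per_10k / 10_000)

print(cirt_size(60_000))  # 30 people for a 60,000-employee company
```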
This 5 per 10,000 standard may sound fanciful to many readers, but consider the sorts of roles one must fulfill to be able to truly combat threats to the modern enterprise. The last CIRT that I built consisted of the following three teams:
- The Incident Response Center (IRC), responsible for the daily incident detection and response mission.
- The Security Assurance Team (SAT), responsible for Threat Intelligence and Reporting, Red Team engagements, and Technical Assistance (i.e., internal consulting).
- The Support Group, responsible for designing, building, and running infrastructure used by the IRC and SAT.
Within each CIRT sub-team, I divide responsibilities by skill level. All of these roles and experience levels will likely vary depending on the nature of the organization hosting the CIRT.
The IRC consists of these team members:
- Incident handlers are subject matter experts (8-12 years of technical experience) who use unstructured analysis tools and techniques to detect and respond to the most advanced or complicated threats.
- Incident analysts (4-8 years of technical experience) are developing as subject matter experts; they work with incident handlers to learn how to deal with advanced threats, but they also mentor event analysts.
- Event analysts (2-4 years of technical experience) are beginning their incident detection careers; they use structured analysis tools and techniques to detect and respond to well-understood threats.
The SAT consists of these team members:
- Principal analysts are subject matter experts (8-12 years of technical experience) who understand and conduct advanced counter-intelligence work, fully simulate adversary activity, and/or lead complicated security consulting projects.
- Senior analysts (4-8 years of technical experience) are developing as subject matter experts; they work with principal analysts on larger projects while mentoring Analysts.
- Analysts (2-4 years of technical experience) demonstrate aptitude in security assurance, but are learning how to offer these services.
The Support Team consists of these team members:
- Developers write software and tools to help the IRC and SAT detect and respond to intruders.
- Architects design systems and lead major projects in conjunction with Engineers who implement tools and techniques.
- Administrators care for the systems used by the IRC and SAT, as well as infrastructure enabling the support team mission.
I did not provide estimates of experience for each role in the support team, because system administrators could have 20 years of maintaining infrastructure under their belt, whereas a very effective architect might only have 8 or 10 years of experience.
I recommend that one person lead each of these three teams, with a single CIRT leader serving as director of incident response. The director of IR should name one of the three team leaders as his or her deputy.
SOCs VERSUS CIRTs
At this point it may sound like this article is basically describing a security operations center (SOC). To a certain extent the work of a SOC is pertinent to CTOps. SOC work tends to imply a more routine workflow whereby security devices generate alerts for generally well known or recognizable security violations. Analysts interpret the alerts, generate reports, and notify their constituencies. All of this work is necessary, but it is not sufficient to combat modern threats. SOC work tends to be somewhat passive, structured, and often not very creative.
In addition to performing SOC work, CTOps requires more active, unstructured, and creative thinking and approaches. One way to characterize this more vigorous approach to detecting and responding to threats is the term “hunting.” In the mid-2000s, the Air Force popularized the term “hunter-killer” for missions whereby teams of security experts performed “friendly force projection” on their networks. They combed through data from systems and in some cases occupied the systems themselves in order to find advanced threats. The concept of “hunting” (without the slightly more aggressive term “killing”) is now gaining ground in the civilian world.
If the SOC is characterized by a group that reviews alerts for signs of intruder action, the CIRT is recognized by the likelihood that senior analysts are taking junior analysts on “hunting trips.” A senior investigator who has discovered a novel or clever way to possibly detect intruders guides one or more junior analysts through data and systems looking for signs of the enemy. Upon validating the technique (and responding to any enemy actions), the hunting team should work to incorporate the new detection method into the repeatable processes used by SOC-type analysts. This idea of developing novel methods, testing them in the wild, and operationalizing them is the key to fighting modern adversaries.
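The hunt-then-operationalize cycle described above can be sketched in code. Everything here is hypothetical: the record fields, the "fixed-interval beaconing" hypothesis, and the rule registry are illustrative stand-ins, not a real product's API:

```python
# Hypothetical sketch of promoting a validated hunt into a repeatable detection.

def suspicious_beacon(record):
    """Hunt hypothesis: outbound connections at a fixed interval to a rarely
    visited destination, a pattern consistent with malware check-ins."""
    return (record["direction"] == "outbound"
            and record["interval_seconds"] == 60
            and record["dest_popularity"] < 5)

def hunt(records, hypothesis):
    """Unstructured phase: an analyst tests a hypothesis against raw data."""
    return [r for r in records if hypothesis(r)]

# Once the hunting trip validates the technique, it joins the standing rules
# that SOC-type analysts review as part of their routine alert workflow.
DETECTION_RULES = {"fixed-interval beaconing": suspicious_beacon}

def generate_alerts(records):
    """Structured phase: apply every operationalized rule to incoming data."""
    return [(name, r) for name, rule in DETECTION_RULES.items()
            for r in records if rule(r)]
```

The design point is the hand-off: `hunt` is an ad-hoc, analyst-driven query, while `generate_alerts` is the repeatable process that inherits the validated hypothesis.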
Richard Bejtlich is the former director of incident response for General Electric, and served as principal technologist for GE's Global Infrastructure Services division.