For many years, organizations have devoted significant resources toward buying and implementing new security products to protect against cyberattacks, but not nearly as much on breach detection. Meanwhile, many organizations have had to focus IT and specifically information security resources on meeting compliance regulations, which has also resulted in fewer resources devoted toward detection.
The result of this long-term collective negligence by enterprises toward breach detection, a vital infosec function, can be seen in the 2013 Verizon Data Breach Investigations Report (DBIR), which highlighted that almost seven of every ten breaches are detected by a third party, not the breached organization.
That statistic seems bad enough, but now consider that Verizon also found that when breaches were detected internally, they were most often found by regular users -- not IT or security specialists. There would seem to be a broad problem with the people, processes and technology enterprises use for incident detection.
Breach detection: Why is it so hard?
Detecting incidents in large enterprises is often difficult, given the size of such organizations and the number of devices in use. Defining, searching for and identifying unauthorized activity is, as the saying goes, like searching for a needle in a haystack. The number of potential targets may be greatly reduced in smaller organizations, but such organizations often lack the manpower and resources to devote to detection.
The Red October malware campaign serves as a great example of why it is difficult for enterprises to detect increasingly sophisticated breaches. As part of the campaign, attackers would infiltrate organizations via phishing attacks and then exploit vulnerabilities in Java, Microsoft Office and the like. Once inside the organization, the attackers would look to gain access to the credentials of authorized users, which could then be used to conceal their actions. Using these techniques, they were able to stay inside certain organizations for years, stealing sensitive data while remaining undetected. For organizations with expanded attack surfaces and/or small budgets, finding the equivalent of Red October could be very tricky indeed.
Also, keep in mind that many of the reported incidents in the Verizon DBIR that were detected by third parties could have been prevented by properly implementing one of the PCI DSS security controls or could have been detected by more closely monitoring systems. IT teams may simply be focusing on the wrong areas, or perhaps budget or staffing constraints limit their ability to handle anything other than low-hanging fruit. Detecting a targeted breach is more difficult than spotting Internet hosts port scanning a network or a system infected with non-targeted malware, but enterprises and security pros must keep in mind that it is also a much more valuable task and one worthy of increased effort.
Improve enterprise network monitoring to detect breaches earlier
There are obviously many reasons why enterprises are falling down when it comes to breach detection, which means there will be no silver bullet to solve this issue. Instead, organizations must implement a variety of security controls.
As part of the DBIR, Verizon recommended using the SANS Institute's 20 Critical Security Controls, but as the report mentions, these are well-known security controls that enterprises should have in place anyway. The SANS controls can help organizations use existing tools more effectively to detect incidents. For example, implementing configuration monitoring and management, including file integrity checking, could help detect the deviations an attacker must introduce to gain a foothold in an enterprise's network. Systems can also be configured to be nearly read-only, with the only writable locations on a network device; such a configuration would make tuning the file integrity checking and analyzing the resulting data much easier because few legitimate changes would be logged. Monitoring for all processes started on a system and investigating executables run for the first time on a system might also identify an attack in progress.
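The file integrity checking described above boils down to recording known-good hashes and flagging deviations. The sketch below illustrates the idea under stated assumptions: the function and file names are hypothetical, and a production deployment would use a purpose-built tool rather than an ad hoc script.

```python
import hashlib
import os
import tempfile

def file_hash(path):
    """Return the SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    """Record a known-good hash for each monitored file."""
    return {p: file_hash(p) for p in paths}

def check_deviations(baseline):
    """Report files that changed or disappeared since the baseline."""
    deviations = []
    for path, known in baseline.items():
        if not os.path.exists(path):
            deviations.append((path, "missing"))
        elif file_hash(path) != known:
            deviations.append((path, "modified"))
    return deviations

# Demonstration on a throwaway file standing in for a monitored config.
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "config.txt")
    with open(target, "w") as f:
        f.write("original contents\n")
    baseline = build_baseline([target])
    assert check_deviations(baseline) == []  # nothing has changed yet
    with open(target, "a") as f:
        f.write("attacker-added line\n")
    result = check_deviations(baseline)
    print(result)  # the tampered file is reported as modified
```

On a locked-down, nearly read-only system, the deviation list would almost always be empty, which is exactly what makes unexpected entries worth investigating.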
Network monitoring of NetFlow data and full packet analysis could help identify suspicious network connections for later investigation. Such monitoring can use anomaly detection to identify new external systems where significant data is sent for investigation. Network monitoring can also help enterprises identify other possible indicators of a breach, including the following: rogue wireless access points, unauthorized Internet connections, rogue dial-up connections, connections to other organizations, third-party service providers (including cloud providers), unauthorized VPN connections, other encrypted connections and other external communications that might be suspicious and require investigation. It's also possible to monitor for known malicious IPs.
One of the next steps is to start tracking security incidents. The depth and details to track for each incident may vary depending on a particular organization, but utilizing one of the existing incident information sharing frameworks is a good start. Once the data collection process is underway, the data from incidents that weren't internally detected can be analyzed to determine why they weren't detected. This could be part of the root cause analysis to determine what security control(s) failed and how to prevent a vulnerability from being exploited in the future. As an organization polishes its incident response process and expands the data collected as part of a response, new controls can be identified that could have detected or prevented the incident.
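The incident tracking and root cause analysis above can be made concrete with a simple record structure. The fields and sample data here are purely illustrative; a real schema would follow an established sharing framework (VERIS, for example) rather than this ad hoc structure.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Incident:
    """Illustrative incident record; field names are hypothetical."""
    incident_id: str
    detected_by: str       # "internal-it", "internal-user", "third-party"
    failed_control: str    # root-cause finding from the response
    days_to_detection: int

def detection_gaps(incidents):
    """Count which controls failed for incidents a third party found first."""
    external = [i for i in incidents if i.detected_by == "third-party"]
    return Counter(i.failed_control for i in external)

# Sample incident log, invented for the example.
log = [
    Incident("IR-001", "third-party", "no file integrity monitoring", 240),
    Incident("IR-002", "internal-it", "n/a", 3),
    Incident("IR-003", "third-party", "no egress monitoring", 120),
    Incident("IR-004", "third-party", "no file integrity monitoring", 400),
]
gaps = detection_gaps(log)
print(gaps.most_common(1))  # the most frequently failed control
```

Even a tally this simple answers the key question from the root cause analysis: which missing control most often left detection to an outsider.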
Organizations with more stringent security requirements should be devoting significant resources toward a person (or possibly even a team) dedicated solely to incident response. This person would be freed from any daily monitoring responsibilities to focus on incident response, analyze incident data and identify security controls that could have prevented, minimized or reduced the detection time for the incident. For other potential defense options, organizations can look to deploy tactics similar to those used for APT-style attack detection, which requires careful monitoring of an organization's network and systems. Verizon, for example, has included more espionage-related incidents in its DBIR data set partially "due to the effectiveness of monitoring IOCs for state-affiliated groups," which supports using IOCs on enterprise networks. There are differences between so-called APT attacks and the more common attacks analyzed for the DBIR, but these differences are diminishing. To perform such monitoring, an enterprise can check its systems for the same IOCs that Mandiant identified in its APT1 report, or for IOCs received from an information sharing and analysis center (ISAC) or other trusted organizations.
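At its core, the IOC monitoring described above is matching observed artifacts (file hashes, domains contacted and the like) against indicator sets from trusted sources. The sketch below assumes hypothetical indicators; in practice the hash and domain values would come from a feed such as an ISAC's or a published report, not a hard-coded list, and the "malicious" hash here is derived from harmless sample bytes purely so the example is self-contained.

```python
import hashlib

# Hypothetical indicator sets standing in for a real IOC feed.
malware_sample = b"not real malware, just example bytes"
IOC_FILE_HASHES = {hashlib.sha256(malware_sample).hexdigest()}
IOC_DOMAINS = {"update-checker.example.net"}

def match_iocs(observed_hashes, observed_domains):
    """Return which observed artifacts match known indicators."""
    return {
        "hashes": observed_hashes & IOC_FILE_HASHES,
        "domains": observed_domains & IOC_DOMAINS,
    }

# Artifacts collected from a monitored host (invented for the example).
observed_hashes = {
    hashlib.sha256(malware_sample).hexdigest(),
    hashlib.sha256(b"an ordinary benign file").hexdigest(),
}
observed_domains = {"www.example.com", "update-checker.example.net"}

hits = match_iocs(observed_hashes, observed_domains)
print(hits["domains"])  # the indicator domain the host contacted
```

Set intersection keeps the matching cheap even with large indicator feeds, which matters when sweeping every host in an enterprise.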
By introducing several new layers of monitoring, though, user privacy could be significantly affected, so organizations should notify users that their activity is being monitored and ensure that the appropriate steps are taken to protect user privacy. An organization may not want to provide specific details concerning monitoring targets, so that an attacker would require more effort to determine what exactly is being monitored. Securing the collected user data should also be made a priority, and executive management should be briefed on the status of any monitoring efforts and on how privacy is being protected.
As adversaries have improved, security incident detection methods have needed to keep pace even when incident prevention capabilities have not. Enterprises can increase the resources devoted to incident detection and identify the most effective controls to detect and prevent future incidents. It's abundantly clear, though, that continuing to follow only standard compliance requirements is not sufficient to adequately protect enterprises from advanced attackers.
About the author:
Nick Lewis, CISSP, is the information security officer at Saint Louis University. Nick received his master of science in information assurance from Norwich University in 2005, and in telecommunications from Michigan State University in 2002. Prior to joining Saint Louis University in 2011, Nick worked at the University of Michigan and at Boston Children's Hospital, the primary pediatric teaching hospital of Harvard Medical School, as well as for Internet2 and Michigan State University.