Every couple of years, a high-profile vulnerability is discovered and exploited with widespread public awareness. The exposure of the Heartbleed security bug -- a flaw in certain versions of the open source OpenSSL cryptography library -- is the most recent example of a vulnerability making the mainstream press. It will likely become the vulnerability of the decade, much like the Morris worm in 1988, the Melissa worm in 1999 and Slammer in 2003.
All four of these news-making events were the result of vulnerabilities in widely deployed software, but each had vastly different effects. While an unauthenticated remote code-execution vulnerability such as Slammer can be just as detrimental to an enterprise as Heartbleed's exposure of encrypted content, it is critical to make the best of these situations by learning from the experience.
In this tip, I will cover the lessons learned from the Heartbleed bug and explain how these lessons can be applied in the future to improve enterprise incident response.
Lessons learned from the Heartbleed response
Many good lessons were learned the hard way from the Heartbleed security bug. Once it was determined that there were insufficient resources dedicated to maintaining and developing OpenSSL, the extended community responded immediately. This also prompted enterprises to identify other critical software that might not have sufficient resources devoted to maintaining it.
Additionally, the relatively well-coordinated, industry-wide response to Heartbleed showed that it is far easier to plan in advance for vulnerability disclosure, including how to test for a vulnerability and how to prioritize remediation, than to perform incident response without any planning.
However, one lesson that doesn't appear to have received much attention has to do with the difficulties in patching all the devices and software that were vulnerable but not included in standard enterprise-patching processes. Enterprises must learn that their standard patching and vulnerability management plans should include procedures for all systems, applications and software components in their network, regardless of whether they are part of the monthly or quarterly patching process.
Apache Web server and Oracle database updates don't come nearly as frequently as monthly patches from Microsoft and Adobe, but in cases when an urgent patch must be applied quickly, the security team should ensure that the organization is able to do so. That means keeping an updated inventory of all systems – applications, endpoints, servers and any other devices – with documentation on how to patch them while minimizing business interruption.
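Such an inventory doesn't have to start out complicated. The sketch below shows, in minimal form, the kind of lookup an up-to-date inventory enables: given a vulnerable software component, which systems run it and where are their patch procedures documented? All hostnames, components and documentation paths here are hypothetical; in practice this data would come from a CMDB or an asset-discovery scanner.

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str                      # hostname or device label (hypothetical)
    components: list = field(default_factory=list)  # software in use
    patch_procedure: str = ""      # where the update steps are documented

# Illustrative inventory entries, not real systems.
inventory = [
    Asset("shop-web-01", ["apache", "openssl"], "wiki/patching/web-tier"),
    Asset("db-01", ["oracle-db"], "wiki/patching/db-tier"),
    Asset("conf-room-tv", ["embedded-linux", "openssl"], "vendor firmware portal"),
]

def affected_by(component):
    """Return the name of every asset that runs the given software component."""
    return [a.name for a in inventory if component in a.components]

print(affected_by("openssl"))  # -> ['shop-web-01', 'conf-room-tv']
```

Note that the embedded conference-room device shows up in the OpenSSL query alongside the Web server; that is exactly the class of system that a purely endpoint-focused patching process misses.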
Also, if an enterprise has the ability to patch a high-risk vulnerability but doesn't do so, it must accept the risk that the vulnerability could be exploited on a system and potentially used to attack another one. As widespread and widely known as Heartbleed was, it's fair to say not every organization could justify patching every flaw immediately. Why not? Many organizations have limited resources, which in the security realm means making decisions based on risk. The best course of action is to prioritize high-risk flaws for remediation right away, and lesser ones later. Having a risk-based process like this in place not only helps manage unexpected major incidents like Heartbleed, but it also can be used to show management how and why certain decisions were made, giving business leaders the opportunity to have their say as well.
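A risk-based remediation queue can be as simple as sorting open findings by severity and drawing a policy line between "patch now" and "patch later." The sketch below uses CVSS base scores; the findings, hostnames and the 7.0 threshold are illustrative assumptions, not a standard.

```python
# Hypothetical scanner findings; scores are illustrative CVSS base scores.
findings = [
    {"host": "shop-web-01", "flaw": "CVE-2014-0160 (Heartbleed)", "cvss": 7.5},
    {"host": "intranet-01", "flaw": "outdated jQuery", "cvss": 4.3},
    {"host": "db-01", "flaw": "weak TLS ciphers", "cvss": 5.0},
]

HIGH_RISK = 7.0  # the cutoff is a policy decision each organization makes itself

# Work the queue in descending order of score; split out what must be done now.
queue = sorted(findings, key=lambda f: f["cvss"], reverse=True)
urgent = [f for f in queue if f["cvss"] >= HIGH_RISK]
deferred = [f for f in queue if f["cvss"] < HIGH_RISK]
```

Keeping the queue and the threshold explicit also produces the audit trail mentioned above: it documents for management which flaws were deferred and on what basis.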
Applying lessons learned in the enterprise
One key lesson enterprises can take away from the Heartbleed vulnerability and put into practice is the importance of minimizing the attack surface of a system or network. Removing unnecessary or outdated software before vulnerabilities can be found and exploited will prevent future attacks. This is part of the basic system hardening process that will help reduce the amount of time required to maintain the security of the system.
The Heartbleed security bug could be used to read contents stored in a server's memory, such as passwords, but it didn't directly result in remote code execution. Using any password acquired would still require remote access to a system. So, while a system could have had passwords extracted from memory via Heartbleed, unless SSH, Remote Desktop Protocol (RDP) or some other access channel existed through which malicious code could be executed, no additional access would be gained. Having SSH, RDP or other direct access set up from the Internet is not common on high-profile systems like e-commerce Web servers because of the high-risk nature of this access. However, it is common on many systems that might fall through the cracks in an enterprise vulnerability management plan, such as printers, video conferencing systems, embedded systems and any number of emerging Internet of Things-type devices.
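Identifying which systems carried the bug starts with the affected version range: OpenSSL 1.0.1 through 1.0.1f shipped the vulnerable heartbeat code, and 1.0.1g fixed it. The sketch below encodes that range as a rough triage check. It is deliberately simplified: it ignores the vulnerable 1.0.2 beta builds, and distribution packages often backport the fix without changing the version string, so a version check alone is not definitive.

```python
import re

def heartbleed_vulnerable(version):
    """Rough triage check against the published affected range:
    OpenSSL 1.0.1 through 1.0.1f are vulnerable; 1.0.1g and later are fixed.
    (1.0.2 beta builds and distro backports are not handled by this sketch.)"""
    m = re.fullmatch(r"1\.0\.1([a-z]?)", version)
    if not m:
        return False
    letter = m.group(1)
    return letter == "" or letter <= "f"

print(heartbleed_vulnerable("1.0.1e"))  # -> True
print(heartbleed_vulnerable("1.0.1g"))  # -> False
```

For a definitive answer, an active test against the heartbeat extension itself, as the scanners released at the time performed, is required.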
To protect remote access connections, enterprises can minimize their attack surfaces with a network firewall, a host-based firewall or by outright disabling unneeded services on a system. A network or host-based firewall blocking all but the required port(s) could have prevented the Heartbleed security bug from being exploited on services that never needed to be exposed in the first place. Alternatively, the OpenSSL heartbeat functionality that led to the Heartbleed vulnerability could have been disabled -- for example, by building OpenSSL with the OPENSSL_NO_HEARTBEATS compile-time option -- to prevent a potentially vulnerable system from being infiltrated.
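Firewall rules are only as good as their verification, so it is worth periodically confirming from the outside that nothing beyond the required ports actually answers. The sketch below attempts a TCP connect to a short list of ports and flags anything outside an allowlist; the target address here is a TEST-NET example address (so every probe times out) and the allowed-port set is an assumption to be replaced with the organization's own policy.

```python
import socket

ALLOWED = {443}  # ports this host is supposed to expose; a policy assumption

def open_ports(host, ports, timeout=0.5):
    """Attempt a TCP connect to each port and report the ones that accept."""
    found = []
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append(port)
        except OSError:
            pass  # refused, filtered, timed out or unresolvable
    return found

# 192.0.2.10 is a documentation-only address; substitute a real host to use this.
unexpected = [p for p in open_ports("192.0.2.10", [22, 443, 3389])
              if p not in ALLOWED]
```

Anything in `unexpected`, such as SSH or RDP answering on an e-commerce Web server, is exactly the kind of exposure the paragraph above warns about.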
On the other hand, a number of systems identified as vulnerable to Heartbleed were not vulnerable because of their primary application, but because of third-party software or hardware included in the system -- most frequently, this seemed to be the case with application or systems management tools. Installing third-party software patches on a timely basis would reduce this risk. Additionally, access to these applications or systems could be restricted to a secured administrative network. If the risk is sufficiently high and a patch proves too difficult to implement, unnecessary application or system management software could simply be uninstalled to eliminate the vulnerability.
Also, enterprises that used diverse operating systems or multiple layers of protection -- e.g., SSL load balancers or a Web application firewall -- were in many cases not vulnerable to attack, or at least far less exposed. Because each Heartbleed request could only leak a small chunk of memory, an attack attempting to dump large amounts of it requires many repeated connections, and so could potentially be blocked by a firewall rule that triggers on a large number of network connections in a short period of time. However, such a rule could also block unexpected spikes in legitimate network traffic and cause business disruptions to an enterprise.
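The logic behind such a rate-based rule can be sketched in a few lines: count connections per source within a sliding time window and flag sources that exceed a threshold. The window length, threshold and IP addresses below are illustrative tuning assumptions, and the trade-off described above applies directly, since a legitimate traffic spike from one source would be flagged just the same.

```python
from collections import defaultdict

WINDOW = 10       # seconds; an illustrative tuning choice
THRESHOLD = 100   # connections per source per window; also a tuning choice

def flag_sources(events):
    """events: iterable of (timestamp, source_ip) pairs. Flag any source
    whose connection count within some WINDOW-second span exceeds THRESHOLD."""
    by_src = defaultdict(list)
    for ts, src in events:
        by_src[src].append(ts)
    flagged = set()
    for src, times in by_src.items():
        times.sort()
        lo = 0
        for hi, t in enumerate(times):
            while t - times[lo] > WINDOW:   # slide the window forward
                lo += 1
            if hi - lo + 1 > THRESHOLD:
                flagged.add(src)
                break
    return flagged
```

A production firewall or IDS implements this far more efficiently, but the decision it makes is the same one: volume over time, per source.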
Following the news of the Heartbleed security bug, the IT industry has rallied around OpenSSL to devote more resources to the maintenance and the advancing development of the open source software as part of the critical infrastructure on the Internet.
Growing attention to Heartbleed has resulted in the bug being remediated on most systems, but like Slammer, it will likely still be with us more than 10 years from now. Inevitably, some new systems will ship with the vulnerable code even after a patch is available, and will need to be remediated in the future. Enterprises should learn from past mistakes and do whatever is possible to avoid falling victim themselves.
About the author:
Nick Lewis, CISSP, is the information security officer at Saint Louis University. Nick received his Master of Science degree in information assurance from Norwich University in 2005 and in telecommunications from Michigan State University in 2002. Prior to joining Saint Louis University in 2011, Nick worked at the University of Michigan and at Boston Children's Hospital, the primary pediatric teaching hospital of Harvard Medical School, as well as for Internet2 and Michigan State University.