Patch management best practice has always dictated rigorous testing of patches before they are installed on key IT assets and the systems supporting mission-critical business processes. But this advice is increasingly being contradicted when it comes to urgent patches that mitigate zero-day exploits.
The danger of this fast-patch approach is that if your organization installs an untested patch and it breaks a mission-critical application, the remediation effort may cause more disruption to the business than the attack it was meant to prevent. Patches that cause problems are not uncommon, but surveys have shown that only a small percentage of system failures are caused by untested patches.
This implies that if you run a relatively up-to-date, standard operation that is not mission-critical but may be targeted by hackers, you should install patches that address zero-day vulnerabilities straight away, without running a test, and deal with any repercussions later. If, however, you have a legacy or aging infrastructure that has been broken by a patch in the past – what's known as a fragile asset – or the system is mission-critical, then you should continue to test patches prior to installation. Changes to fragile assets should be avoided wherever possible, either until the issues that make them fragile are resolved or until the assets can be replaced.
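The triage logic above can be sketched as a simple decision function. This is only an illustration of the guidance in this article, not a formal policy; the asset attributes and strategy names are invented for the example.

```python
def patch_strategy(mission_critical: bool, fragile: bool, zero_day: bool) -> str:
    """Illustrative patch triage following the guidance above.

    'fragile' marks a legacy asset that a patch has broken in the past.
    Strategy names are hypothetical labels, not industry terms.
    """
    if fragile:
        # Avoid changes to fragile assets; if a patch is unavoidable, test first.
        return "defer-and-test"
    if mission_critical:
        # Mission-critical systems always get pre-installation testing.
        return "test-first"
    if zero_day:
        # Standard, up-to-date system facing an active zero-day: patch immediately.
        return "patch-now"
    # Otherwise, fold the patch into the normal maintenance cycle.
    return "scheduled-patching"
```

For example, `patch_strategy(mission_critical=False, fragile=False, zero_day=True)` returns `"patch-now"`, while the same zero-day against a fragile asset returns `"defer-and-test"`.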
Thankfully, major vendors are aware of the patch burden. They have regular patch dates and run advisory services to alert you when an out-of-band patch will be released, so organizations can be prepared. Also, the cost of running a virtual test lab has fallen sharply over the last few years, so many production systems can be duplicated in a lab environment where the effects of a patch can be reliably tested and appraised.
It’s always worthwhile to check whether a zero-day vulnerability truly affects your systems. Many of the recent serious Adobe vulnerabilities, for example, would be of no concern to an application that creates, say, PDF invoices but never opens a PDF document. Configuration settings, other security controls, or an unreliable or impractical exploit may mean the vulnerability can’t be exploited on your particular system. One way to check this is to use the Metasploit Framework to test your system. Knowing whether a vulnerability presents a risk to your network is extremely useful information, as it allows you to prioritize patches, upgrades, and firewall and IDS changes.
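Before reaching for Metasploit, a cheap first step is simply comparing installed software versions against the affected range published in the vendor advisory. A minimal sketch of that check follows; the advisory version numbers here are invented for illustration.

```python
def version_tuple(v: str) -> tuple:
    """Convert a dotted version string like '9.4.5' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def is_affected(installed: str, first_affected: str, fixed_in: str) -> bool:
    """True if the installed version falls in [first_affected, fixed_in).

    Assumes simple numeric dotted versions; real advisories may need
    more careful parsing.
    """
    return (version_tuple(first_affected)
            <= version_tuple(installed)
            < version_tuple(fixed_in))

# Hypothetical advisory: versions 9.0.0 up to (but not including) 9.4.6
# are vulnerable.
print(is_affected("9.4.5", "9.0.0", "9.4.6"))  # True  -> patch is a priority
print(is_affected("9.4.6", "9.0.0", "9.4.6"))  # False -> already fixed
```

A result of `False` doesn't end the analysis, but it lets you push that patch down the priority list while you deal with systems that are genuinely exposed.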
Another option you have is virtual patching. The aim of virtual patching is to alter or eliminate the vulnerability by controlling either the inputs into or outputs from the affected application. It can be an extremely valuable technique to reduce the risk created by intervals between vendor patches or for companies unable to test and roll out patches on a regular basis.
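To illustrate the input-control side of virtual patching, a filter can be wrapped around the vulnerable code path so that inputs matching a known exploit signature never reach it. The signature, field names, and handler below are all hypothetical, standing in for whatever the real advisory describes.

```python
import re

# Hypothetical exploit signature: block %uXXXX unicode-escape sequences of the
# kind used in some heap-spray payloads. A real virtual patch would use the
# signature published for the specific vulnerability.
EXPLOIT_SIGNATURE = re.compile(r"%u[0-9a-fA-F]{4}")

def virtual_patch(handler):
    """Wrap a request handler, rejecting payloads that match the signature."""
    def filtered(payload: str) -> str:
        if EXPLOIT_SIGNATURE.search(payload):
            return "403 Forbidden"  # drop the malicious input before it is parsed
        return handler(payload)
    return filtered

@virtual_patch
def handle(payload: str) -> str:
    # Stand-in for the vulnerable application logic.
    return "200 OK"
```

In practice this kind of filter usually lives in a web application firewall or reverse proxy rather than in the application itself, which is what lets you deploy it without touching the vulnerable code.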
For the majority of administrators and IT organizations, it is more important to implement patches quickly than to test them thoroughly. For others, however, the time taken to test patches, discover potential conflicts with existing configurations and ensure a predictable rollout is worth the risk of temporarily running an exposed system. In such cases, a virtual test lab is essential. Virtual patches can be created, and tools like Metasploit can be used to ascertain the real risk, so that more thorough testing can be undertaken without the system being left exposed.
Ultimately, there’s certainly no one-size-fits-all patch management process. A lot depends on the criticality of individual systems and the resources available within the IT department to handle day-to-day maintenance.
About the author:
Michael Cobb, CISSP-ISSAP, is a renowned security author with more than 15 years of experience in the IT industry. He is the founder and managing director of Cobweb Applications, a consultancy that provides data security services delivering ISO 27001 solutions. He co-authored the book IIS Security and has written numerous technical articles for leading IT publications. Cobb serves as SearchSecurity.com’s contributing expert for application and platform security topics, and has been a featured guest instructor for several of SearchSecurity.com’s Security School lessons.
This was first published in October 2011