Forrester Research recently published a report on application security that basically condemned blacklisting to the dustbin in favor of whitelisting when dealing with next-gen malware. Do you think blacklisting still serves a purpose for enterprise security moving forward, or are organizations abandoning the tactic in favor of whitelisting?
I’ve always been a fan of whitelisting and it is becoming more popular. For example, Microsoft's new Secure Boot feature is effectively a whitelist of signed drivers. Whitelisting enforces the classic security rule, "Deny all and allow only what is necessary." Whereas traditional antivirus technologies scan every file looking for known malware, whitelisting automatically blocks everything except those applications known to be trusted. This is a less resource-intensive approach and provides better protection against unknown and zero-day threats.
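The default-deny principle behind whitelisting can be illustrated with a minimal sketch. The snippet below is a hypothetical example, not any vendor's implementation: it hashes an executable and permits it only if the hash appears on an approved list (the `APPROVED_HASHES` set and `is_execution_allowed` function are illustrative names).

```python
import hashlib

# Hypothetical allowlist of SHA-256 hashes of approved executables.
# In practice this would be populated from an IT-managed catalog.
APPROVED_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_execution_allowed(path: str) -> bool:
    """Deny by default; allow only binaries whose hash is on the whitelist."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Hash the file in chunks so large binaries don't exhaust memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest() in APPROVED_HASHES
```

Note that there is no scanning for known-bad patterns at all: anything not explicitly approved is blocked, which is why unknown and zero-day binaries fail the check automatically.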
Signature databases -- the blacklist approach -- are becoming bloated as the amount of malware continues to explode. Keeping a blacklist up to date now requires cloud-based services that aggregate threat data from millions of endpoints, which makes maintaining a whitelist the less onerous option by comparison.
However, while the traditional blacklist-based approach may be losing effectiveness, there are operational challenges involved in implementing an application whitelist. A whitelist can prevent malicious or inappropriate programs from entering the enterprise, but it can stifle innovation and early adoption of the latest technologies. A correctly enforced application whitelisting policy requires the IT department to review each application that departments, managers or employees want to use to ensure that it passes performance and security thresholds, meets security policy requirements and delivers value for money. This entails completing a business case, a risk assessment and an ROI evaluation. Employees may also feel that whitelisting gives too much control to the IT department and may try to circumvent the controls if they find them too restrictive.
Some application whitelisting services are built around reputation-based technology, which essentially rates software based on characteristics such as age, prevalence and digital signatures. This makes it quicker and easier for enterprises to allow access to rated applications while still blocking unrecognized software; but, like blacklisting, this approach requires a lot of effort to keep up to date.
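To make the reputation-based idea concrete, here is a toy sketch of how such a rating might combine the characteristics mentioned above. The thresholds, field names and scoring weights are entirely illustrative assumptions, not how any real reputation service works.

```python
from dataclasses import dataclass

@dataclass
class AppInfo:
    age_days: int    # how long the file has been observed in the wild
    prevalence: int  # number of endpoints reporting the file
    signed: bool     # carries a valid digital signature

def reputation_score(app: AppInfo) -> int:
    """Toy score: older, widespread, signed software rates higher."""
    score = 0
    if app.age_days > 365:       # illustrative threshold
        score += 1
    if app.prevalence > 10_000:  # illustrative threshold
        score += 1
    if app.signed:
        score += 1
    return score  # 0 = unknown/suspect, 3 = well established

def policy(app: AppInfo) -> str:
    """Allow well-rated software; block unrecognized software by default."""
    return "allow" if reputation_score(app) >= 2 else "block"
```

A brand-new, unsigned binary seen on only a handful of machines scores zero and stays blocked, while a long-established, widely deployed, signed application is allowed without manual review -- which is precisely the time saving reputation systems offer, at the cost of keeping the underlying telemetry current.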
In its report "Application Control: An Essential Endpoint Security Component," Forrester recommends the use of whitelisting as a means of supplementing traditional antivirus technologies to reduce the number of potential avenues for attack. As always, applying layers of defense is the best approach, and some form of whitelisting may well be essential for protecting mobile devices: they do not receive updated signatures as often as their desktop counterparts, yet still have access to thousands of apps. You will still need a straightforward approval process, though, so that innovative new applications can be assessed and, if appropriate, quickly approved. This way, IT will not be seen as a business inhibitor.
Related Q&A from Michael Cobb