
Using known-good technologies for enterprise threat detection

When it comes to threat prevention and detection in the enterprise, "known good" technologies can be critical, yet they can also introduce complexity. Learn how known-good security can bolster your security program.

Allowing good traffic to pass into a corporate environment and blocking the bad traffic are the basics of keeping an enterprise secure. However, allowing in only "known good" documents, files, links and more is easier said than done.

The truth of the matter is that attackers will inevitably move faster than enterprise defenses can adapt and block. Taking a blacklist approach to antimalware, firewalls, antispam and other security technologies has shown significant weaknesses in the last decade. And as blacklists grow longer and become integrated into more security tools, it has become increasingly difficult for security teams to coordinate every blacklist and keep them all updated.

By using known-good technology as a part of its information security plan, an enterprise can reduce its efforts in maintaining blacklists and improve the overall security in its environment.

In this tip, I will discuss how to focus on known-good technology and explain how known-good technologies can be used in an enterprise to address today's fiercest and most damaging threats.

How known-good technology works

Using known-good technology is similar to taking a whitelisting approach -- where traditionally only authorized users or applications are enabled in an enterprise -- but it takes the concept to a much more expansive and detailed level. In a traditional whitelist approach, only specific executables are allowed to run on a computer, but this does not prevent someone inside the enterprise network from opening a malicious file that gives an attacker initial access. Known-good technologies aim to prevent the attacker from executing any part of the attack in the first place.
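The traditional executable whitelisting described above can be sketched as a hash allowlist: a file runs only if its digest is on the approved list. This is a minimal illustration, not a product implementation; the allowlist contents are hypothetical (the digest shown is simply the SHA-256 of the bytes `test`).

```python
import hashlib

# Hypothetical allowlist of SHA-256 digests for approved executables.
# (This entry is the digest of the bytes b"test", used for illustration.)
APPROVED_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_approved(contents: bytes) -> bool:
    """Default deny: a file is run only if its digest is on the allowlist."""
    return hashlib.sha256(contents).hexdigest() in APPROVED_HASHES
```

Anything not explicitly listed -- including brand-new, never-before-seen malware -- is denied by default, which is the core advantage over a blacklist.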

Allow me to explain with a few more examples. Input validation is a common method of accepting only known-good input for entering data into a system. This is used in Web application or database firewalls where potentially malicious SQL statements are filtered to allow only approved SQL statements to execute.
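The input validation idea above can be sketched in a few lines: accept only values matching a strict known-good pattern, and pass them to the database as parameters so they can never execute as SQL. The pattern and table are illustrative assumptions, not a prescribed policy.

```python
import re
import sqlite3

# Illustrative allowlist: a username is accepted only if it matches a
# strict known-good pattern; everything else is rejected outright.
USERNAME_PATTERN = re.compile(r"[A-Za-z0-9_]{3,32}")

def validate_username(value: str) -> bool:
    return USERNAME_PATTERN.fullmatch(value) is not None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

def add_user(name: str) -> None:
    if not validate_username(name):
        raise ValueError("rejected: input is not known-good")
    # Parameterized statement: the value is bound as data, never parsed as SQL.
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
```

Note the allowlist never tries to enumerate dangerous characters; anything outside the known-good pattern is refused, injection attempt or not.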

Another example would be examining a webpage, PDF or other document, identifying potentially malicious links and then stripping out the threat and reconstituting the file with only its "known good" parts before allowing it to be downloaded. The file could be examined to identify where user-entered data -- such as text in a document -- resides and then remove the contents that could potentially include malicious code. This is a feature offered by a number of Web proxy or content gateway products from vendors such as Symantec, Blue Coat or Websense. There is also an open source tool and framework, ExeFilter, that brings this feature to files and active content and can be incorporated into other tools or used to scan file shares, email or other content.
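The strip-and-reconstitute idea can be sketched for HTML: rebuild the document from an allowlist of harmless tags and their text, discarding scripts, attributes and anything else not explicitly known-good. This is a simplified illustration using only the standard library, with a hypothetical tag allowlist; products like those named above do far deeper structural analysis.

```python
from html.parser import HTMLParser

# Hypothetical allowlist of harmless formatting tags.
ALLOWED_TAGS = {"p", "b", "i", "em", "strong", "ul", "li"}

class KnownGoodFilter(HTMLParser):
    """Rebuild a document from only its known-good parts, dropping
    scripts and all attributes rather than enumerating what is bad."""
    def __init__(self):
        super().__init__()
        self.out = []
        self.skip_depth = 0  # >0 while inside a <script> element

    def handle_starttag(self, tag, attrs):
        if tag == "script":
            self.skip_depth += 1
        elif tag in ALLOWED_TAGS and self.skip_depth == 0:
            self.out.append(f"<{tag}>")  # attributes are deliberately dropped

    def handle_endtag(self, tag):
        if tag == "script" and self.skip_depth:
            self.skip_depth -= 1
        elif tag in ALLOWED_TAGS and self.skip_depth == 0:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if self.skip_depth == 0:
            self.out.append(data)

def sanitize(html: str) -> str:
    f = KnownGoodFilter()
    f.feed(html)
    return "".join(f.out)
```

A link such as `<a href="http://evil.example">click</a>` comes out as plain text `click`: the text survives, the potentially malicious destination does not.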

It is entirely possible that an attacker today could compromise a legitimate business partner that your enterprise trusts and then embed malicious code in a legitimate PDF sent by that trusted partner. By leveraging a technology like ExeFilter that removes only the malicious content of a file, an enterprise can help ensure that the legitimate communication from the partner is not disrupted while the threat is removed.

Using known-good in an enterprise setting

Deploying known-good threat prevention and detection technologies requires an in-depth understanding of, and control over, the environments where malicious content could be hiding, as well as knowing where and when it is better to strip out the malicious content rather than outright block the attachment, traffic, user or link.

With firewalls -- where "deny all" and "only allow" policies are set as needed by the business -- there are significant reasons to allow connections from known-good secure networks with known secure protocols to support the policies. This could be complemented by a network access control system that allows only known-good and approved systems to use approved protocols and connect to the specifically allowed networks. While both technologies could be set up to block malicious networks, security teams would have to add each malicious network every time one is identified. The same goes for new known-good networks or protocols; as these are discovered and approved, they would also need to be added to the approved list.
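The firewall and network access control policy described above reduces to a default-deny decision over approved networks and protocols. The sketch below uses Python's standard `ipaddress` module; the networks and protocols listed are purely illustrative.

```python
import ipaddress

# Illustrative known-good policy: approved source networks and protocols.
APPROVED_NETWORKS = [
    ipaddress.ip_network("10.10.0.0/16"),
    ipaddress.ip_network("192.168.5.0/24"),
]
APPROVED_PROTOCOLS = {"https", "ssh"}

def connection_allowed(src_ip: str, protocol: str) -> bool:
    """Default deny: permit only known-good network and protocol pairs."""
    addr = ipaddress.ip_address(src_ip)
    return (protocol in APPROVED_PROTOCOLS
            and any(addr in net for net in APPROVED_NETWORKS))
```

The maintenance burden the article describes is visible here: newly approved networks or protocols must be added to the lists, but unknown attacker infrastructure is already denied without any update.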

Unfortunately, it might be quite difficult to extrapolate this method to all types of files or applications. To simplify the process, it may be beneficial for enterprises to focus on the most common file types or data that may be used to exploit their vulnerabilities. Additionally, defining known good in a way that does not cause a significant number of false positives -- which could negatively impact communications -- may also pose a challenge for enterprises; it would require constant fine-tuning, just like a whitelist or a blacklist, to keep it operating effectively.

"Known good" is similar to both mandatory access control and using formal methods in software development: Mandatory access control is where access is granted to only a specific resource based on the classification of the data and the specific access granted. Formal methods, on the other hand, are used in software development to mathematically validate that software performs exactly the functions it was designed for. Both of these methods are very rigorous and resource-intensive ways to use known-good technology for improving security.

When it comes to specific known-good technologies, there are a few options enterprises can adopt. Whitelisting and graylisting products -- in which incoming traffic is temporarily rejected -- will help organizations allow only known-good actions to take place on a system or network. There are also known-good software development methods like using PHP sanitize filters or secure baseline configuration, where only known-good software or settings are used. These will reduce the attack surface to minimize the chance of a successful attack.
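The graylisting approach mentioned above -- temporarily rejecting incoming traffic -- is most familiar from SMTP greylisting: a first-seen sender gets a temporary failure, and only a sender that retries after the delay window (as legitimate mail servers do, and most spam cannons do not) is accepted. Below is a minimal sketch; the delay value and decision strings are illustrative.

```python
# Minimal greylisting sketch: first contact from an unknown sender is
# temporarily rejected; a retry after the delay window is accepted.
RETRY_DELAY = 300  # seconds a sender must wait before retrying (illustrative)

first_seen: dict[str, float] = {}

def graylist_decision(sender: str, now: float) -> str:
    if sender not in first_seen:
        first_seen[sender] = now
        return "tempfail"  # SMTP 4xx: "try again later"
    if now - first_seen[sender] >= RETRY_DELAY:
        return "accept"
    return "tempfail"
```

A production implementation would also key on the sending IP and recipient, expire old entries, and persist state across restarts; the point here is only the temporary-rejection mechanic.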

At this point in time, there are few options available to help enterprises scan specific files or applications for known-good components because of the difficulties in defining known good; a company might not know if all of the advanced functionality included in the plethora of file formats and applications is used by its employees, partners and customers. Starting with ExeFilter and adding other file formats to be supported could help address new file formats or applications as they are incorporated in the environment.

Additionally, while JavaScript is usually seen as high risk, it may be impossible for an enterprise to know exactly when users will need to open PDFs or other files infected with potentially malicious JavaScript. In this instance, enterprises could find a technology that converts the PDF to a static PDF -- removing the JavaScript from the file altogether. Or enterprises could require opening the PDF in a sandbox to see which potentially malicious actions are taken and then remove the malicious JavaScript from the file. Any of these options could be used to neuter a malicious PDF that is part of a phishing attack so that when your users open the file, their computers are not compromised.
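Before converting or sandboxing a PDF as described above, a gateway first has to decide whether the file carries active content at all. A crude heuristic is to scan the raw bytes for the PDF name objects that introduce it (`/JavaScript`, `/JS`, `/OpenAction`, `/AA`, `/Launch`). This is a detection sketch only; a real content gateway would parse the PDF object structure rather than pattern-match bytes, since these names can be obfuscated or appear inside compressed streams.

```python
import re

# PDF name objects that commonly introduce active content.
ACTIVE_CONTENT_MARKERS = [
    rb"/JavaScript", rb"/JS\b", rb"/OpenAction", rb"/AA\b", rb"/Launch",
]

def has_active_content(pdf_bytes: bytes) -> bool:
    """Flag PDFs whose raw bytes contain active-content markers."""
    return any(re.search(m, pdf_bytes) for m in ACTIVE_CONTENT_MARKERS)
```

Files that trip this check could then be routed to the heavier treatments the article suggests: static conversion or sandboxed detonation.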


Whitelisting and other known-good security approaches can improve an enterprise's defenses against threats in ways the blacklisting approach cannot.

However, even whitelisting faces challenges and must evolve to keep up with the changing threat landscape. Whitelisting is a significant improvement on the commercial security methods of the last 20 years, but unless secure-by-default systems are developed and manufactured for general enterprises and consumers, we will continue in this same spiral of whitelisting, graylisting and blacklisting.

Focusing on the known-good technologies will hopefully give enterprises, as well as the industry as a whole, an opportunity to better manage their potential risks and achieve the security needed to defend against the toughest threats today's hackers and attackers create.

About the author:
Nick Lewis, CISSP, is the former information security officer at Saint Louis University. Nick received Master of Science degrees in information assurance from Norwich University in 2005 and in telecommunications from Michigan State University in 2002. Prior to joining Saint Louis University in 2011, Nick worked at the University of Michigan and at Boston Children's Hospital, the primary pediatric teaching hospital of Harvard Medical School, as well as for Internet2 and Michigan State University.

Next Steps

Learn more about known-good security technologies including access control, authorization and security baselining, and read more about the blacklisting vs. whitelisting debate.

This was last published in November 2014


Join the conversation



Does your organization use known-good security technologies? Which ones?
My organization uses four basic but essential security technologies:
  • Risk management dashboard to act as the primary tool for incident response
  • Anti-malware to help protect against threats that increase daily by watching every classical entry point (McAfee security)
  • Network anomaly detection to detect malware signatures, host intrusion and data leakage (Windows Firewall)
  • Desired configuration management to monitor and maintain system configurations (Advanced System Care)
My company is looking at a vendor called Glasswall that provides whitelisting/known-good analysis of email attachments and documents. Has anyone come across this company before, and if so, are there any pros or cons to their platform?
Interesting article. It seems to me that internet marketing promotes far too much executable code in situations where passive data exchange would be sufficient to get users the actual relevant information. Where simple HTML or a basic PDF with a few JPG pictures can tell the user enough to make a decision whether to inquire further, salespeople employ programmers to "pretty up" even HTML emails with animations, dynamic formatting, trackbacks, cross-site scripting and all manner of aggressive sales techniques to squeeze a possible buy out of a possible prospect. All this enterprise security angst just to try to sell widgets -- executable code designed to bypass security systems and deliver advertising! Maybe it's time for enterprises to send a message to the internet advertising market: "we will no longer accept delivery of your web content if it contains ANY executable code. And, if you persist, we will blacklist ALL content from your organization at our firewall." This might sound extreme, but if the sales circular in your daily paper might blow up in your face occasionally, then prudence would dictate that the threat exceeds any possible value. Why waste valuable time, money and expertise trying to figure out how to open a sales circular safely? Why execute remotely offered code on your computer just to view a fancied-up webpage of text and pictures? What kind of stupid paradigm is this?