The single most ubiquitous security appliance in any enterprise is the firewall. In almost any security architecture design it is the first line of defense, protecting the enterprise from external threats, often by blocking various types of network traffic. But, as we'll discuss in this tip, an enterprise can often learn more about its network security not from traffic that was denied, but from what was allowed in.
A firewall, in its basic form, is designed to prevent connections from untrusted networks. (Please note that we have chosen to limit our discussion of firewalls to how they apply to enterprise deployments.) It does this by inspecting the source address, destination address and intended destination port of any given connection. For purposes of this discussion, let us aggregate information on the source address, source port, destination address and destination port and treat it as a key identifying characteristic of any connection attempt monitored by the firewall. This characteristic, also called a tuple, is compared to a set of rules that outlines which connections should be explicitly permitted and which should be denied. If the tuple contains information that matches a permitted connection, then the source address is allowed to establish a connection with the destination address on the allowed port, i.e., traffic is permitted.
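The tuple-matching logic described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the `Rule` format, the first-match-wins evaluation order and the implicit-deny default are simplifying assumptions.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass
class Rule:
    action: str              # "permit" or "deny"
    src_net: str             # e.g. "0.0.0.0/0" for any source
    dst_net: str
    dst_port: Optional[int]  # None matches any destination port

def evaluate(rules, src_ip, src_port, dst_ip, dst_port, default="deny"):
    """Return the action of the first rule matching the connection tuple."""
    for r in rules:
        if (ip_address(src_ip) in ip_network(r.src_net)
                and ip_address(dst_ip) in ip_network(r.dst_net)
                and (r.dst_port is None or r.dst_port == dst_port)):
            return r.action
    return default  # implicit deny when no rule matches

rules = [
    Rule("permit", "0.0.0.0/0", "203.0.113.10/32", 443),   # public HTTPS server
    Rule("deny",   "0.0.0.0/0", "10.0.0.0/8",      None),  # internal networks
]
```

A connection from any source to 203.0.113.10 on port 443 matches the first rule and is permitted; anything aimed at the internal 10.0.0.0/8 range, or matching no rule at all, is denied.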
This is why the effectiveness of any firewall or any traffic-filtering mechanism in general is dependent on the rules with which it is configured. As the firewall polices traffic into and out of the environment it is designed to protect, it also provides a window into the source and type of traffic traversing this environment. That is why in most enterprises, it serves dual purposes: protecting the environment from threats (both from the Internet and internal sources) and as a critical investigative resource in the event that an infosec pro needs to track how something got through. Therefore, in order to be an effective security mechanism, a firewall ruleset needs to be augmented with an effective logging mechanism.
Let's discuss how firewall logging, particularly of "allow" events, can be useful for picking up on potential network security threats. An enterprise typically enforces strict protection on assets that should not be publicly accessible. These often include internal corporate systems and employee workstations. Generally, no direct inbound connection is permitted to these systems. Systems that need to be publicly accessible are hosted in an environment where the firewall protection is typically less restrictive. Certain services are exposed to the Internet with minimal to no protection, for example, an enterprise's Web servers (HTTP/HTTPS) or mail relays (SMTP). These systems are isolated from the internal systems in an environment called a demilitarized zone (DMZ). Filtering of outbound connections from systems within the enterprise is generally less restrictive -- allowing Web (HTTP/HTTPS) traffic -- or absent altogether.
In such three-tiered environments -- with strict ingress filtering into the internal systems, relaxed outbound filtering from the internal systems and open services in the DMZ segment -- logging becomes critical to ensure the enterprise has visibility into traffic entering and leaving the environment. Things get tricky in high-traffic environments where logging resources are finite. Most firewall technologies help address this issue by supporting multiple levels of logging, allowing events to be triaged so the most critical can be addressed first. These levels are typically labeled 0 through 7 -- from greatest importance to least: emergency, alert, critical, error, warning, notification, informational and debugging -- with higher levels generating more information in the logs. This article does not explore each of these levels in detail, but instead weighs the effectiveness of a less verbose logging level (warning), which logs only firewall "deny" actions, against the more verbose alternative (informational), which logs both "deny" and "accept" actions. Both logging levels record the source address and port, as well as the destination address and port, of any given connection.
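Assuming, as in the comparison above, that "deny" events are emitted at the warning level and "accept" events at the informational level -- an assumption about a typical device, not any specific product -- the severity arithmetic can be sketched as follows:

```python
# Standard syslog-style severity levels; the lowest number is the most severe.
SEVERITIES = {
    0: "emergency", 1: "alert", 2: "critical", 3: "error",
    4: "warning", 5: "notification", 6: "informational", 7: "debugging",
}

DENY_SEVERITY = 4    # assumption: "deny" events emitted at warning
ACCEPT_SEVERITY = 6  # assumption: "accept" events emitted at informational

def should_log(event_severity, configured_level):
    """A device configured at a given level records events at that severity
    or more severe (numerically lower or equal)."""
    return event_severity <= configured_level
```

At a configured level of 4 (warning), only the deny events are recorded; raising the level to 6 (informational) captures the accept events as well.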
One of the most important factors in determining the level of logging to use is the enterprise's capability to deal with the log information effectively. A verbose logging level may be useful for capturing all connection streams into the environment, but without an effective mechanism to analyze the log information, this option becomes less beneficial, if not useless. At a minimum, it is common practice for an enterprise to enable logging of "deny" firewall actions. This means that traffic explicitly denied by the firewall rules was observed and recorded. The interesting dilemma is how this information is useful. It could indicate that a host was being accessed on a disallowed port while the same source is allowed to access other ports. This is typically true for systems hosted in the DMZ or systems that have certain ports open to the Internet with no source restriction.
This type of behavior could indicate a benign network probe of the sort any Internet-facing system is subjected to at all times of the day, or it could be indicative of a targeted attack that may have succeeded on the open port, with further attacks against the system being thwarted by the firewall's ingress filters. Given that most firewall appliances are stateful (established connections are tracked, and source and destination ports are dynamically permitted), we can safely assume that the blocked connection was not part of an existing session. At this point, if the source address is determined to be persistent in its connection attempts to a disallowed port, the source could be added to an access control list (ACL), effectively blocking all connections it attempts to make. In addition, this source could also be added to an in-memory shun list to ensure any existing connections are dropped. This is fine if the source attempted the connection on a disallowed port. But if the connection was made on an allowed port, logging only "deny" actions means we lose all visibility into that activity.
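A persistence check of the kind described above can be sketched as a simple counting pass over deny events. The threshold and the log-tuple format here are hypothetical choices for illustration, not values any firewall prescribes:

```python
from collections import Counter

DENY_THRESHOLD = 20  # hypothetical cutoff for "persistent" behavior

def build_shun_list(deny_events, threshold=DENY_THRESHOLD):
    """Count "deny" events per source address and return the sources
    persistent enough to warrant an ACL or shun-list entry.

    deny_events: iterable of (source_ip, dest_ip, dest_port) tuples."""
    hits = Counter(src for src, _dst, _port in deny_events)
    return {src for src, count in hits.items() if count >= threshold}
```

A source that trips the threshold is a candidate for both a blocking ACL entry (future connections) and a shun-list entry (existing connections).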
This is where logging "allow" actions proves helpful. But before we go into the details, let's discuss how to identify a source address as a threat. Most often this means tracking the behavior of that source address over a period of time. If you see connections from a source across a broad range of ports (i.e., port scanning) over a long period, it can safely be assumed that the source may be attempting to profile the target and should be treated as a threat. In cases where there is not much historical information locally available about the source, but other event sources -- traffic payload captured in the application logs, for example -- point to a potential threat, it is possible to validate the reputation of the source address against external reputation registries like those listed below. (A more complete list of source validation options is available on threats expert Lenny Zeltser's website.)
It is also possible to proactively create ACLs that block malicious sources. Some of these malicious source lists are available at:
- DShield's highly predictive blacklist (HPB)
- DShield's IP host blacklist
- DShield's recommended block list
- URL blacklist (limited to resolved IP addresses only)
- Team Cymru's bogon list
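As a sketch of how such a list might be turned into blocking ACL entries: the one-address-per-line input format with `#` comments and the generic, Cisco-like output syntax are assumptions for illustration, not any specific list's or product's format.

```python
def blocklist_to_acl(blocklist_text, acl_name="BLOCKED_SOURCES"):
    """Turn a plain-text blocklist (one IP or CIDR per line, '#' comments)
    into deny entries in a generic, Cisco-like ACL syntax, ending with a
    catch-all permit so legitimate traffic still flows."""
    entries = []
    for line in blocklist_text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if line:
            entries.append(f"access-list {acl_name} deny ip {line} any")
    entries.append(f"access-list {acl_name} permit ip any any")
    return "\n".join(entries)
```

Run periodically against a freshly downloaded list, a script like this keeps the proactive ACL current as the blocklist changes.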
Enabling logging of "allow" actions gives you visibility into all traffic entering the environment. This is especially important since most threats target open ports rather than closed ones. Couple this with the rise of advanced persistent threats -- where an attacker gains access to the environment through targeted techniques such as spear phishing (not a conventional network-based vector) and then uses connections initiated from within the network to pilfer sensitive information -- and it becomes critical to capture these allowed connections. Such connections would generally not be captured when log fidelity is set to track only "deny" actions.
The big issue with tracking "allow" events is using the logs effectively, because a large amount of data is collected. There is no simple answer to this problem. Generally, increasing the logging fidelity and augmenting it with a centralized log management product can offer a searchable, alertable interface into the "allow" actions. Simple scripts, whether shell or Perl, run periodically to parse the logs and flag suspicious connections against some of the lists described above, can also be useful. Reputation-based searches, however, may not always be sufficient to identify suspicious traffic. In such cases, augment them by looking for anomalies in expected traffic patterns; for example, seeing IRC traffic at Internet egress points where only HTTP/HTTPS is expected can indicate a potential threat.
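A periodic script of the kind mentioned above might combine a reputation match with a simple traffic-pattern anomaly check. The log line format, the blocklist contents and the expected-port set here are all hypothetical stand-ins:

```python
import re

# Hypothetical log format: 'ALLOW src=10.0.0.5:44321 dst=198.51.100.14:80'
LOG_RE = re.compile(
    r"ALLOW src=(?P<src>[\d.]+):\d+ dst=(?P<dst>[\d.]+):(?P<dport>\d+)")

EXPECTED_PORTS = {80, 443}           # only HTTP/HTTPS expected at egress
BAD_REPUTATION = {"198.51.100.14"}   # loaded from a blocklist in practice

def suspicious_allows(log_lines):
    """Flag allowed connections that hit a bad-reputation address
    (reputation match) or an unexpected destination port (anomaly)."""
    flagged = []
    for line in log_lines:
        m = LOG_RE.search(line)
        if not m:
            continue
        dport = int(m.group("dport"))
        if m.group("dst") in BAD_REPUTATION or dport not in EXPECTED_PORTS:
            flagged.append(line)
    return flagged
```

In practice the blocklist set would be refreshed from the external registries discussed earlier, and the expected-port set tuned to the environment's actual traffic profile.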
This is not to say logging "deny" actions is not useful. It has proven to be a good detective mechanism for outbound connections through the use of egress filtering. Egress filtering involves the use of outbound access control lists, which permit only certain types of traffic to pass through. For example, a basic egress filter could be set up to allow Web traffic (HTTP/HTTPS) outbound from user workstations. Any outbound connections that are not Web traffic would trigger a "deny" alert. For example, traffic with destination ports 6660/tcp to 6669/tcp could be indicative of IRC traffic and would be logged. In this example, the traffic could be indicative of a potential command-and-control channel to a bot herder.
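The egress-filter example above can be sketched as a check over outbound connection records. The allowed-port set and the IRC port-range heuristic are assumptions for illustration, not a complete egress policy:

```python
IRC_PORTS = range(6660, 6670)  # covers 6660/tcp through 6669/tcp

def egress_deny_alerts(connections, allowed_ports=frozenset({80, 443})):
    """Given outbound (src, dst, dst_port) tuples, return those an egress
    filter permitting only Web traffic would deny, tagging hits on common
    IRC ports as possible command-and-control traffic."""
    alerts = []
    for src, dst, port in connections:
        if port not in allowed_ports:
            kind = "possible IRC/C2" if port in IRC_PORTS else "non-web"
            alerts.append((src, dst, port, kind))
    return alerts
```

A workstation suddenly generating denied connections to port 6667 would surface here as a candidate command-and-control channel worth investigating.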
There are many factors to consider when deciding on the level of logging that will work for you. Ideally, enterprises should log both "allow" and "deny" actions, but resource constraints simply do not allow this in many enterprises. In such situations, logging of "deny" actions should be augmented with the egress filters described above. Where logging of "allow" actions is possible, supporting processes that allow for efficient and timely analysis of the logs should be implemented. Analysis activities should include reputation-based matching as well as monitoring for traffic deviations. Consider the use of an intrusion prevention system (IPS) or an outbound Web filter (such as those offered by vendors like Websense Inc. and Blue Coat Systems Inc.) to automate some of the analysis with real-time alerting and mitigation.
Of course, all this would be insufficient without a good firewall policy: Lock rules down to source and destinations wherever possible and relax rules only in environments designed to be Internet accessible, but even then, restrict the traffic to only that which is required -- for example, DMZs supporting only Web and mail services (HTTP/HTTPS/SMTP) -- and enforce egress filters to control traffic leaving the environment.
About the author:
Anand Sastry is a Senior Security Architect at Savvis Inc. Before joining Savvis, he worked for clients in several industries (large and mid-sized enterprises in financial, healthcare, retail and media) as a member of the security services group for a Big 4 consulting firm. He has experience in network and application penetration testing, security architecture design, wireless security, incident response and security engineering. He is currently involved with network and web application firewalls, network intrusion detection systems, malware analysis and distributed denial of service systems.