Defending a network perimeter against Web-based service attacks has become increasingly complex over the past several years. New threats flood defenses and make it increasingly difficult to separate dangerous risks from the chaff. Fortunately, security professionals have numerous tools at their disposal for building a multilayered defense.
In medieval times, the primary goal of security was to defend the castle against attack. To capture the castle, intruders needed to bypass the troops on the outskirts, breach the city walls (usually defended by archers and pots of boiling oil), defeat the troops inside the walls, cross a moat and break the defenses of the castle itself. There was no single point of failure. Defending against Web security threats requires a similar strategy.
A layered approach to Web defense allows you to filter attacks through a funnel. The known attacks can be easily filtered out at the top of the funnel by security mechanisms that are most capable of handling large amounts of traffic, while the complex attacks that might penetrate one or more layers can be handled by sophisticated mechanisms midway through the protection scheme.
The outermost security layer for many networks is the border router -- the line of demarcation between the outside world and the protected network. Modern routers support access control lists and filters that simply drop traffic that doesn't match a set of predetermined rules. Use the router to your advantage to defend against Web-based attacks.
First and foremost, you can configure the router to block inbound traffic on ports commonly associated with Web activity (80, 443, 8080, etc.) to any system not hosting an authorized Web server. This configuration prevents users (inside and outside) from setting up rogue Web servers that would otherwise be vulnerable to attack. It also helps thwart probing attacks searching for systems running unmaintained Web servers, since scans aimed at unauthorized systems never reach their targets.
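The port-blocking rule above can be sketched as a simple decision function. This is an illustrative model of the logic, not real router configuration; the server addresses are made-up examples from the documentation range.

```python
# Sketch of the border-router rule described above: inbound traffic on
# common Web ports is permitted only to authorized Web servers. The
# addresses below are illustrative (RFC 5737 documentation range).

WEB_PORTS = {80, 443, 8080}
AUTHORIZED_WEB_SERVERS = {"192.0.2.10", "192.0.2.11"}  # example hosts

def permit_inbound(dst_ip: str, dst_port: int) -> bool:
    """Return True if the inbound packet should pass the border router."""
    if dst_port in WEB_PORTS:
        # Web traffic may only reach the authorized servers; anything
        # else (including a rogue Web server) is dropped.
        return dst_ip in AUTHORIZED_WEB_SERVERS
    # Non-Web ports fall through to other ACL entries in a real
    # configuration; permit by default in this sketch.
    return True
```

A packet bound for port 80 on an unlisted host is denied, while the same packet aimed at an authorized server passes.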
You could also configure the router to perform access control via access control lists and other basic filters. For example, if you're seeing DoS attacks from a certain IP subnet, you could simply blacklist that subnet at the border router. The greatest advantage of using a router for the outermost layer of defense is speed -- routers work very quickly and can keep more complex defenses, such as firewalls and content filters, from being overwhelmed by this type of traffic. However, keep in mind that routers are not a complete security solution. They can't perform stateful inspection and should only be used for preliminary screening.
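The subnet-blacklist idea can be modeled in a few lines with Python's standard `ipaddress` module. The blacklisted range below is a hypothetical example, not a real attacker network.

```python
import ipaddress

# Illustrative model of blacklisting a source subnet at the border
# router, as described above. The subnet is an example range only.
BLACKLISTED_SUBNETS = [ipaddress.ip_network("203.0.113.0/24")]

def permit_source(src_ip: str) -> bool:
    """Drop traffic whose source address falls in a blacklisted subnet."""
    addr = ipaddress.ip_address(src_ip)
    return not any(addr in net for net in BLACKLISTED_SUBNETS)
```

Membership testing against an `ip_network` object handles the subnet math, so the filter logic stays readable as the blacklist grows.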
The second layer of defense consists of vulnerability scanners. These tools scan systems on your network (clients and servers) for known vulnerabilities that might be exploited by malicious code. On the client side, vulnerability scanners allow you to detect systems that are susceptible to exploits that deliver malicious code over the Web. They draw upon comprehensive vulnerability databases to probe systems for patterns that match stored signatures and then report any detected vulnerability to the administrator. They're great for detecting common maladies like open SMTP relays and unpatched operating systems.
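At its core, the signature-matching step a scanner performs can be sketched as comparing a service banner against a database of known-vulnerable version strings. The banners and signature entries below are illustrative stand-ins, not a real vulnerability feed.

```python
# Minimal sketch of signature-based vulnerability detection: compare a
# grabbed service banner against known-vulnerable version strings.
# Entries here are illustrative examples only.

VULN_SIGNATURES = {
    "Apache/1.3.20": "known remote overflow (example entry)",
    "Sendmail 8.11": "open-relay configuration risk (example entry)",
}

def check_banner(banner: str) -> list:
    """Return the issues whose signature appears in the banner."""
    return [issue for sig, issue in VULN_SIGNATURES.items() if sig in banner]
```

A real scanner like Nessus adds network probing, authentication and a constantly updated database, but the report-what-matches core is the same idea.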
On the server side, these tools will help you keep Web server holes patched to protect against denial-of-service attacks and other threats. As with other hosts, the most commonly detected flaw with Web servers is incorrect OS patch levels. While this may seem like a simple issue, it's often overlooked and is a leading cause of system compromise. If you're in a Microsoft shop, you should definitely consider using the free Microsoft Baseline Security Analyzer. If you're looking for an opinion independent of Redmond (and also available for non-Microsoft operating systems), you might want to try the open-source Nessus scanner.
The next line of defense involves the use of application-specific filters. As with vulnerability scanners, you can use these tools in two ways: client protection and server protection. At their most basic level, client protection filters, such as SurfControl and Proventia, perform URL filtering to prevent users from browsing unwanted sites. These tools have advanced greatly over the past three years and now allow for content scanning, IM attachment filtering and other advanced techniques. Server filters, such as Microsoft's UrlScan, can perform similar filtering for inbound traffic to Web servers, looking for URL requests that correspond to known attacks or violate standards.
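Server-side URL filtering of the kind UrlScan performs can be sketched as decoding each requested URL and rejecting it if it matches a pattern associated with known attacks. The pattern list below is a small illustrative sample, not UrlScan's actual rule set.

```python
import re
from urllib.parse import unquote

# Rough sketch of server-side URL filtering in the spirit of UrlScan.
# URLs are percent-decoded first so encoded attacks can't slip through.
# The blocked patterns are illustrative examples only.
BLOCKED_PATTERNS = [
    re.compile(r"\.\./"),           # directory traversal
    re.compile(r"cmd\.exe", re.I),  # command-execution attempts
    re.compile(r"<script", re.I),   # script injection in the URL
]

def allow_request(url: str) -> bool:
    """Return True if the decoded URL matches no attack pattern."""
    decoded = unquote(url)
    return not any(p.search(decoded) for p in BLOCKED_PATTERNS)
```

Decoding before matching matters: a traversal attempt written as `..%2f..` normalizes to `../..` and is caught, where a naive string match on the raw URL would miss it.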
Another mechanism you may consider deploying is a network-based and/or host-based intrusion-detection system (IDS). Formerly the domain of large enterprises, it's now possible to use IDSes to monitor traffic on even the smallest of networks. If you can spare a single box, you can get an open-source IDS (such as Snort) up and running within hours. You might want to complement network-based sensors with host-based systems running on your most vulnerable Web servers. For mid-sized businesses, Symantec's Intruder Alert is commonly used. Of course, there are numerous security vendors willing to sell you a more expensive, enterprise solution.
When considering IDSes, it's important to remember that they aren't plug-and-play. You'll need to spend time tuning the systems to fit your network and ensure that the level of false positives (i.e., alerts that flag legitimate traffic as an intrusion) doesn't overwhelm administrators to the point that they ignore future alerts.
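One common tuning step is suppressing alerts from sources an analyst has marked as known-benign, such as your own vulnerability scanner, so real alerts aren't buried. The alert fields and addresses in this sketch are assumptions for illustration.

```python
# Illustrative sketch of one IDS tuning step: filter out alerts from
# sources an analyst has confirmed as benign (e.g., the internal
# vulnerability scanner). Alert structure and addresses are assumed.

KNOWN_BENIGN_SOURCES = {"192.0.2.50"}  # e.g., our own Nessus scanner

def triage(alerts):
    """Drop alerts whose source is on the known-benign list."""
    return [a for a in alerts if a["src"] not in KNOWN_BENIGN_SOURCES]

raw_alerts = [
    {"src": "192.0.2.50", "sig": "port scan"},        # our own scanner
    {"src": "198.51.100.9", "sig": "SQL injection"},  # genuine concern
]
```

In a real deployment this kind of suppression lives in the IDS rule set (e.g., Snort's suppress/threshold configuration) rather than in post-processing, but the goal is the same: fewer false positives, more attention left for true alerts.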
Note: There's been a lot of talk recently about so-called "intrusion-prevention systems" (IPSes). These systems take corrective action when they detect an attack, either blocking future traffic from the same source or actually "attacking back" (a frowned-upon practice). Using an IPS to block traffic offers the same benefit as blocking traffic at the border router (indeed, many IPSes work directly with networking devices) but puts the administrative burden of blocking and unblocking systems in the hands of the automated system rather than overworked administrators. In my opinion, these tools have not yet reached maturity and aren't suitable for deployment on production networks, mainly because it's difficult to trust an automated system to disconnect or block hosts from accessing your network. However, high-quality IPSes are available from vendors like Enterasys, McAfee and Symantec, if you're interested.
While broad in scope, this overview shows how defense-in-depth can protect your organization from Web-based service attacks with security layers that strengthen your network.
About the author
Mike Chapple, CISSP, currently serves as Chief Information Officer of the Brand Institute, a Miami-based marketing consultancy. He previously worked as an information security researcher for the U.S. National Security Agency. His publishing credits include the TICSA Training Guide from Que Publishing, the CISSP Study Guide from Sybex and the upcoming SANS GSEC Prep Guide from John Wiley. He's also the author of the About.com Guide to Databases.