
Pitfalls of security layers (and how to avoid them)

Overtaxed and mismatched security layers can affect many aspects of the enterprise, which can have long-lasting negative effects.

From the earliest stages of their careers, most IT security practitioners are taught about the practical benefits of "layered security" and "defense-in-depth" -- and for good reason. Segregating public and private networks, deploying overlapping controls for access and asset protection, constructing DMZs and bastion hosts -- these and other security techniques go a long way toward helping organizations secure their intellectual property and proprietary communications.

But creating a "living" layered security infrastructure is not a static, one-size-fits-all proposition. As network environments become more complex -- involving partner extranets, VLANs, application portals, Web services, secure remote connectivity, Internet/POP mail, instant messaging and so on -- architecting defense-in-depth into the network becomes more and more difficult. No one sets out to undermine security. But unless the security of the network evolves hand in hand with the ever-growing list of network services, the layers designed to secure it can actually introduce new and unforeseen vulnerabilities.

This article examines how security layers can break down, and how to architect the network to avoid these common pitfalls.

Where Layers Go Wrong

Layered insecurity may take years to develop. The layers of protection may be perfectly chosen at the inception of the environment. However, as the number of connections to business partners increases, the amount of remote access grows, and the variety of services offered to customers expands, the originally reasonable set of security layers in a network architecture can turn into a complex tangle of security mechanisms. In this environment, vulnerabilities arise in two broad categories:

  • Overtaxing, simply put, is requiring a security mechanism to do more than it can handle efficiently -- such as a single firewall configured to filter traffic traversing multiple systems. Overtaxed security mechanisms may fail to protect their applications and resources, or leave multiple resources open to a single exploit.
  • Mismatches occur when security mechanisms of different strengths -- say, both weak password authentication and strong, multifactor authentication -- are employed for different purposes on the same network, application or device.

In both cases, a breakdown occurs because applications, services, databases, etc. are not separated. It's easy to see why: Separate systems cost money. Also, when IT managers understand a security mechanism, they may trust it to protect enterprise resources beyond its useful range. In other words, a manager may feel safe simply "adding another rule" as each new service is added to the DMZ. But how does he know he's secure when he's defined so many rules that he no longer understands the configuration -- i.e., when the firewall is overtaxed?

Overtaxed and Mismatched Security Layers

On their own, both overtaxed and mismatched security mechanisms can open up real dangers. In worst-case scenarios, overtaxing and mismatching are complementary problems: each one exacerbates the other.

For example, a business enterprise may run e-commerce applications and provide employees with remote access to their home directories, shared drives, etc. The employees authenticate with tokens, but the user database is administered via telnet, which is notoriously insecure. This is a security mismatch, with the vulnerable administrative service undermining the reasonably strong user authentication.

Now suppose that a critical e-commerce application with access to highly sensitive financial information is run on a set of servers in the same DMZ as the remote access service. Collocating these services overtaxes the security mechanisms. If an attacker exploits a vulnerability in the e-commerce app, the proximity of the remote access service places employee resources at risk as well. The danger is compounded because the attacker can exploit the weak telnet-based administration. The overtaxing imperils multiple resources, while the mismatch paves the way for the attacker to do maximum damage.

The implications of overtaxed and mismatched security layers can reach deep into the enterprise, affecting multiple data stores and access control mechanisms. To illustrate, consider a scenario in which a new Web application -- let's say, for sales reporting -- is added to an enterprise. The new app requires a proprietary Web server that is different from those currently deployed. In addition, security policy requires that the server be located in the DMZ. The company decides to install the application in the same DMZ with other applications and services and to use an existing database server. There are good business reasons for this configuration, but it puts other services and applications in the DMZ at risk, as well as the back-end network where the database server resides.

Databases. The sales reporting application deals with a lot of highly sensitive customer-specific and financial information that must be stored in a database. The company's decision to use an existing database server to house the new application's data is understandable: The organization doesn't have to buy a new server and may not even have to add storage. Deployment will be quicker, and there's less administrative overhead and maintenance than with a new server. But there's an overtaxing problem here: If one database is compromised, others on that box are also at risk.

Authentication. Organizations often use a common authentication mechanism for multiple purposes for the sake of convenience. Let's assume our hypothetical enterprise has decided to add customer data and authentication to the same authentication service and database (i.e., the same domain) it uses to authenticate employees and store their user data. The advantage of using the same authentication infrastructure -- say, LDAP -- to authenticate customers in a Web production environment is that it (1) saves money, (2) adds little administrative burden and (3) is well-tested and understood.

While collocating customer and employee authentication credentials on the same server is an administrative convenience, the authentication mechanism in this scenario is overtaxed. The LDAP service is being asked to authenticate two very distinct sets of users in a single zone. The danger of this approach is that a compromise of the authentication mechanism affects both user groups. A faulty configuration of a firewall or the authentication system could give outsiders a path into the internal corporate network or expose production machines and services to insider attack.

A mismatch could also easily compromise both sets of data. For example, say the customers are authenticated through smart cards, while employees are authenticated through passwords. An attacker exploiting a vulnerability in an application could use a password cracker to circumvent the weaker employee authentication to access the customer data as well.

Firewalls. The sales reporting application's proprietary Web server receives customer requests via HTTP through an Internet firewall and contacts the database server and application server through a separate firewall. Sound OK so far? Not really. Though the Internet firewall must be configured to allow connections from the Internet to the new Web server, it does nothing to protect the existing services from compromise of this new (and, therefore, perhaps less understood) Web application. All the traffic is legal as far as the firewall is concerned, but the back-end communication between the new server and existing database opens up ports to allow the Web server to make database requests, and therefore opens yet another legal path for potential attacks. The firewalls do virtually nothing to protect the servers in the DMZ from successful attacks on one another, or on the back-end database server.
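To make the problem concrete, here is a minimal sketch in Python of the allowed flows in this scenario, modeled as data; the host names, ports and rule format are invented for illustration and are not drawn from any particular firewall product:

    # Hypothetical DMZ flow rules: each entry is traffic the firewalls must allow
    # for the application to work -- and therefore a path an attacker can reuse.
    ALLOWED_FLOWS = {
        ("internet",    "new-web-srv", 443),   # customer HTTPS to the new Web server
        ("new-web-srv", "db-srv",      1433),  # Web server queries the shared database
        ("new-web-srv", "app-srv",     8080),  # Web server calls the application tier
    }

    def is_permitted(src: str, dst: str, port: int) -> bool:
        """Return True if the firewalls would pass this connection."""
        return (src, dst, port) in ALLOWED_FLOWS

    print(is_permitted("internet", "db-srv", 1433))  # False: no direct path from the Internet

    # But if "new-web-srv" is compromised, every flow it originates is still "legal":
    reachable = {dst for (src, dst, _) in ALLOWED_FLOWS if src == "new-web-srv"}
    print(reachable)  # {'db-srv', 'app-srv'} -- the back end is exposed by design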

If the application requires more connections to other components (e.g., DNS, mail, administration), the problem becomes worse. The more combinations of connections that are allowed by the firewall, the less useful the firewall is in preventing or detecting intrusions. If the new server is compromised, all the servers that can be legally contacted by that server are at risk. In short, the more complex the environment -- the more the firewall is overtaxed -- the more difficult it is for the firewall to provide the protection the system managers assume they're getting.

Firewalls can be overtaxed in other ways. In an effort to save money on devices, network designers often build six- to eight-legged firewalls, controlling and monitoring traffic between several special-purpose DMZs, department networks and extranets. The result is a complex set of rules that's virtually impossible to manage, verify and test. More often than not, once the device is working "properly" -- in other words, it actually allows the necessary traffic to flow -- the configuration permits many more connections than it should. Developing a correct rule set in such a complex environment is nearly impossible.
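The growth is easy to quantify with a rough sketch (the leg counts below are illustrative, not taken from the scenario above):

    # A firewall with n legs has n * (n - 1) ordered pairs of networks between
    # which traffic could be permitted. Before even counting ports and protocols,
    # the space the rule set must cover grows quadratically with the leg count.
    for legs in (3, 4, 6, 8):
        pairs = legs * (legs - 1)
        print(f"{legs} legs -> {pairs} directed network pairs to reason about")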

IDSes. IDSes can be overtaxed in ways similar to firewalls. In this same DMZ and application environment, the IDS must be tuned to look for connections and communications that should be denied. While an IDS is not limited to the boundaries between networks the way firewalls are, it is of little use in networks where almost all traffic might be valid -- that is, networks in which too many applications are using the same protocols for different purposes and for data with different levels of sensitivity.

Suppose our DMZ includes a product information service for customers -- unauthenticated or weakly authenticated users -- along with the sales reporting application, whose users must be strongly authenticated. Both these applications use the same protocols and the same Web, application and database servers.

There are two potential problems here. First, the colocation of the low-sensitivity, weakly authenticated product information service and the highly sensitive sales reporting service is a security layer mismatch. An attacker who cracks the product information application gains a foothold from which to reach the reporting app and the valuable information in the back-end database.

Second, the overtaxing occurs because the IDS must monitor (at least) two different sets of permitted connections: those of the nonsensitive information service and those of the ultrasensitive sales application. Given the number of shared components, the IDS must assume that all connections for both services are valid. The problem comes when a weakness in the product information service provides an intrusion opportunity. The IDS won't be able to recognize the difference between valid connections made by the sales application and connections made by the compromised product information application.

Conversely, in networks where the communications are limited (the number of connections between systems is small and the types of protocols used by applications are restricted), the IDS can more easily recognize illegal connection attempts. In our hypothetical network, if the sales reporting application were allocated a dedicated database server, the IDS could monitor for connection attempts to any machine other than that specific server. To put it another way, the fewer legal connections there are, the easier it is for the IDS to detect illegal attempts.
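The contrast can be expressed as a simple allow-list; the server names and port are hypothetical, but the point is that a short, dedicated baseline is what makes deviations stand out:

    # With a dedicated database server, the IDS baseline for that server is one
    # entry; any other connection to it is suspicious by definition.
    SALES_DB = "sales-db-srv"
    BASELINE = {("sales-app-srv", SALES_DB, 1433)}

    def suspicious(src: str, dst: str, port: int) -> bool:
        """Flag any connection to the sales database that is not in the baseline."""
        return dst == SALES_DB and (src, dst, port) not in BASELINE

    print(suspicious("sales-app-srv", SALES_DB, 1433))     # False: expected traffic
    print(suspicious("product-info-srv", SALES_DB, 1433))  # True: worth an alert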

Administration. One of the most overlooked mismatch problems is the failure to secure administrative interfaces on systems and applications. The problem is that many products and applications -- including many security products -- rely on the trusted nature of the environment in which they run to maintain their security. In other words, the products are designed to be run behind firewalls. These products' administrative interfaces and APIs support either weak or no authentication; the designers assumed that only authorized systems or users could gain access to the ports. Administrators and network designers who are unaware of this vulnerability may construct networks that allow access to these services, introducing a significant weakness to the network.

For example, say the sales reporting application provides an administrative interface that is accessible through HTTP (weakly authenticated and unencrypted) and does not limit unsuccessful login attempts. If an attacker is able to penetrate the perimeter, this weakly authenticated interface may be the vulnerability that leads to exploitation of the rest of the network. If the attacker gained access to the network through a firewall misconfiguration, he might run a password cracker against the administrative interface and eventually gain access to the device. The vulnerable interface undermines the stronger security mechanisms that protect the rest of the applications and services in the network.

Host hardening. A different type of layer mismatch that's common in production environments is inconsistency in hardening host systems in a DMZ. Enterprise administrators often make the mistake of reviewing the configurations of only "vital" machines -- such as mission-critical Web and application servers -- while neglecting the machines that handle less sensitive tasks.

The problem here is that even the most innocuous machine in the DMZ may give an attacker a foothold from which he can launch an attack. Test systems and staging servers used to transfer files into production are often not the focus of in-depth security reviews or extensive hardening. These systems are weak links; they frequently are deployed on the network with default configurations, with sample code and no virus protection. They may also be prime targets for insider attacks.

Making Security Layers Work

The two classes of layering problems we've described -- mismatches and overtaxing -- are really just symptoms of the same underlying problem: lack of separation of applications and services of varying levels of sensitivity. The way to address or avoid layering problems is not to abandon layering, but to design environments that use layering sensibly. This is not an argument to deploy the strongest security mechanisms at all layers and in all environments. That may be impractical or too costly. The key to making the most of security layers is segregating sensitive data into separate zones.

1. Establish trust zones to separate data and services to avoid overtaxing. The most common way to establish zones is to build a network that is dedicated to a particular purpose, set up boundaries with firewalls and routers to control and restrict traffic to and from the network, and locate only those services necessary for this purpose in that zone.
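One way to think about a zone, sketched below with invented zone names and service labels, is as a purpose plus the short list of services permitted to live inside it; anything else found there is a design review waiting to happen:

    # Each trust zone is defined by the only services its purpose calls for.
    ZONES = {
        "ecommerce-dmz": {"web", "app"},
        "remote-access": {"vpn-gateway"},
        "internal-db":   {"database"},
    }

    def violations(zone: str, deployed: set[str]) -> set[str]:
        """Services found in a zone that its purpose does not call for."""
        return deployed - ZONES[zone]

    # Dropping the sales database into the e-commerce DMZ shows up immediately:
    print(violations("ecommerce-dmz", {"web", "app", "database"}))  # {'database'}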

Recall the example in which the overtaxed LDAP service was used for both customers and employees. The key to solving that problem is recognizing that the two groups of users are different and that sharing the same authentication mechanism unnecessarily connects the two environments. This dependency makes each more vulnerable to security failure. The solution is to separate the environments using routers and firewalls and to use two separate authentication services and databases with no trust links between them.

Enterprise functions like e-commerce DMZs, e-commerce middle-tier services and remote access services are good candidates for isolation. These services offer very different functions from one another, deal with different customers/users, and tend to use a particular set of protocols. The fact that the communications and connections are dedicated to delivering a particular application allows admins to focus on controlling and monitoring these communications -- and not spreading the effort across many different areas.

One of the most important rules to remember when establishing zones is to simplify. Our example of the multilegged firewall illustrates how complexity leads to overtaxing. The lesson is that several more simply configured firewalls will likely be easier to configure, verify, test and maintain.

Establishing functional zones makes it easier to decide which connections should be allowed between environments. Monitoring traffic is simpler. Recognizing illegal messages, connections and even application requests becomes possible. While equipment costs may be higher, they are offset by reduced administrative costs. Each zone is more scalable, because it is not so dependent on shared resources. The ability to verify the proper behavior of the systems is substantially improved. Additional suggestions include:

  • Deploy application proxies to inspect specific traffic.
  • Use intrusion detection to watch for connections other than the ones specifically designed to be used by the application.
  • Use standard builds of operating systems and standard configurations to run your applications and supporting services.
  • Avoid providing insecure services like FTP, telnet, SMTP or SNMP, which may be necessary in a shared environment, but not in one dedicated to an application.

2. Separate services within zones for more granular control, increasing the effectiveness of the existing layers. Within a network zone, there are still more opportunities to establish boundaries that reduce the chance that security mechanisms may be overworked. Separating services like DNS, mail, Web or file transfer onto dedicated systems is a key element of reducing the likelihood that a compromise of one service could affect others.

Even on a single system or a single database, highly granular separation can increase security by compartmentalizing different aspects of applications to:

  • Run as different machine users.
  • Query as different database users.
  • Use separate directories, files and database tables.

Assigning separate databases to separate applications or application tasks (e.g., read-only access vs. read-write access), limiting database requests to stored procedures and prohibiting ad hoc queries will make it easier to detect when something's gone awry. The more predictable and bounded the queries that an application will issue, the more opportunity there is for detecting when someone is attacking the database.

For example, let's say an application is split into one component that requires read-only access (e.g., reporting) and another that requires write or transaction access. These components can be configured to run as database users with only the privileges they need. Further, application requests are issued exclusively through stored procedures, and the database server is configured to reject ad hoc queries. These limitations ensure that a compromise of the read-only component yields nothing more than read access, and that arbitrary queries against the database are blocked. Disallowed queries would be a good indication of an intrusion.
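A minimal sketch of what this might look like from the application side, assuming a PostgreSQL back end reached through psycopg2; the user, database and procedure names are hypothetical:

    import psycopg2

    # Two connections, two database users: the reporting component can only read,
    # and the order component can only write, each through its own stored routines.
    report_conn = psycopg2.connect("dbname=sales user=report_ro")
    order_conn = psycopg2.connect("dbname=sales user=order_rw")

    def monthly_report(region):
        """Read-only component: calls a set-returning function, never ad hoc SQL."""
        with report_conn.cursor() as cur:
            cur.execute("SELECT * FROM sales_report_by_region(%s)", (region,))
            return cur.fetchall()

    def record_order(customer_id, amount):
        """Write component: likewise limited to a single stored procedure."""
        with order_conn.cursor() as cur:
            cur.execute("CALL record_order(%s, %s)", (customer_id, amount))
        order_conn.commit()

Revoking table-level privileges from these accounts and granting only execute rights on the routines is what turns any ad hoc query into a clear sign that something is wrong.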

Separating applications at the database level provides security benefits in much the same way as separating applications into zones, eliminating unnecessary dependencies and making it easier to detect problems. These techniques can help at every layer where security mechanisms provide an opportunity for distinguishing one application from another, one user from another, or a valid request from one that may be an attack. This approach makes the database a working partner with the IDS rather than just another overworked (or underused) security layer.

3. Analyze security mechanisms for layer mismatches and understand layer relationships. Recognize where weak mechanisms may be undermining the effectiveness of stronger ones.

Finding layer mismatches can be tricky. The relationships between security layers may not be apparent at first glance. A security designer needs to analyze each zone for the weaknesses in every product or service used in a network, application or system. The designer must also analyze each product deployed in the environment for weakly authenticated interfaces and trust links (like shared authentication services and file sharing that cross zone boundaries).

Further, the security designer must look at how management or administrative functions are secured and whether they must be hidden or segregated. Specifically, administrators need to look at the security of file system backups, application administration interfaces and system and network administration.

If weak access methods (such as unencrypted passwords) are required by some device, service or application, the weak method should be protected or hidden by a stronger mechanism. One of the most effective methods is to disable network access to the insecure access path (e.g., HTTP) and instead manage it locally with a more secure tool (e.g., SSH, Microsoft Terminal Server or Citrix MetaFrame). Once the analysis is complete, remedial actions may include:

  • Isolating the insecure service into a separate zone, where a compromise won't have serious repercussions;
  • Protecting or hiding the weak mechanism behind a stronger one;
  • Replacing the insecure service with a more secure one;
  • Eliminating the insecure service altogether, if possible; or
  • Accepting the weakness and deploying compensating controls (e.g., IDS, enhanced monitoring) to mitigate the risk.
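As a concrete illustration of the "protect or hide" option, the sketch below binds a weak HTTP administration interface to the loopback address so it is unreachable from the network; the standard-library handler is only a stand-in for a real product's interface, and the port is arbitrary. Administrators would reach it through a stronger mechanism, such as an SSH tunnel.

    from http.server import HTTPServer, SimpleHTTPRequestHandler

    # Bind the weakly authenticated interface to loopback only, so it never
    # appears on the network. An administrator on the host (or tunneled in,
    # e.g., ssh -L 8080:127.0.0.1:8080 admin-host) can still reach it.
    server = HTTPServer(("127.0.0.1", 8080), SimpleHTTPRequestHandler)
    server.serve_forever()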

4. Conduct a risk/cost assessment. Decide where the organization simply can't afford to address weaknesses and where it will need to establish compensating controls.

Separation of services into zones is not an all-or-nothing philosophy. There are cost and scale considerations associated with every decision. The more complex the environments -- the more applications, the more levels of sensitivity -- the greater the need to define separate zones. The layering problems described here are mostly due to blind trust in security mechanisms rather than thoughtful application of specific mechanisms to specific tasks.

5. When you're done, don't assume you're done. Doing a full analysis of your layered security every year is a good idea. Another good approach is to repeat the assessment with every major addition to the enterprise.

Security problems sneak up on you. They often are the result of minor additions and changes over years. Periodically review zones and security mechanisms before overtaxing and mismatches cause problems.

About the author: Richard E. Mackey Jr. is a principal at SystemExperts, a security consulting company. He is a leading authority on the OSF Distributed Computing Environment and has advised leading Wall Street firms on overall security architecture, VPNs, enterprise-wide authentication and intrusion detection.

 

