This article can also be found in the Premium Editorial Download "Information Security magazine: Dollars and sense: Getting the security budget you need -- and spending it wisely."
For almost a decade, security professionals have commiserated about the vanishing network perimeter. In the old days, the main concern was the growing number of partners, contractors, suppliers, customers and other "trusted outsiders" who needed access to our private networks. In response, corporations reexamined domain structures and deployed stronger authentication, access control and authorization technologies--all in the name of enlarging the corporate "circle of trust."
Today, the challenges facing the virtual enterprise have intensified. An explosion in mobile computing and Webified applications has placed a premium on back-office connectivity for anyone, from anywhere, with any device. This dynamic environment challenges conventional notions of network security:
- With the growing need for business users to access the Internet from inside the private network, the protections we once associated with security--firewalls, bastion hosts, DMZs--are no longer effective. The actions of individual users and the growing reliance on services tunneled through port 80 have undermined traditional malicious code defenses.
- The any-to-any connectivity requirements between clients and servers, and among clients themselves, effectively require the network to be flat. Flat networks are inherently vulnerable to traffic flooding from malware, which is precisely what happened with January's SQL Slammer worm.
- The burgeoning number and size of security patches make it nearly impossible to check or enforce patch currency compliance on the tens of thousands of servers and clients in medium to large enterprises. The problem becomes even more difficult when you consider all the network-connected machines you don't own or manage.
- Our private networks have become overly client-friendly. The pervasive deployment of DHCP for both stationary and mobile devices opens the door for anyone to plug into our private network. The security shortcomings of 802.11b have extended this problem beyond our physical boundaries.
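To make the patch-currency point above concrete, here is a minimal sketch of what a compliance report amounts to: comparing each machine's installed patches against a required baseline. The host names and inventory data are invented for illustration, and a real deployment would pull this information from a management agent rather than a hard-coded dictionary; the patch bulletins named are the real fixes for the SQL Server flaw Slammer exploited.

```python
# Hypothetical sketch: checking patch "currency" across an inventory.
# Host names and installed-patch data are invented for illustration.

REQUIRED_PATCHES = {"MS02-039", "MS02-061"}  # the fixes Slammer exploited

inventory = {
    "db-server-01": {"MS02-039", "MS02-061"},
    "desktop-1042": {"MS02-039"},
    "lab-host-07": set(),
}

def noncompliant(inventory, required):
    """Return a map of host -> missing patches for out-of-compliance hosts."""
    report = {}
    for host, installed in inventory.items():
        missing = required - installed  # set difference: what's not applied
        if missing:
            report[host] = missing
    return report

# Reports desktop-1042 and lab-host-07 as out of compliance.
print(noncompliant(inventory, REQUIRED_PATCHES))
```

The comparison itself is trivial; the author's point is that gathering trustworthy inventory data for tens of thousands of machines--many of them unowned or unmanaged--is what makes enforcement impractical.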
The problem is clear, but what should we do about it? One option is to segregate the things you can't protect (clients) from those that you can (servers and the server network). As one CIO recently told me, "I don't mind sacrificing a few desktops, but I can't afford to have my network go down!" If we're going to separate clients from servers (and the server network), conventional wisdom suggests we simply create a "private client network." But on closer examination, the combination of connectivity requirements (any-to-any) and the physical topology of clients and servers makes this an extremely expensive proposition.
Another alternative is to simply treat all clients as outside devices. Although this defies conventional wisdom, it's actually an economical and reasonable thing to do. Perhaps half the clients (and growing) routinely go into the wild anyway. While this solution isn't perfect, there are a variety of client-centric tools and technologies--AV software, personal firewalls, VPNs (and even clientless VPNs)--that provide a practical level of protection consistent with the business value and limited risk of individual clients. By placing appropriately equipped clients, and the network they reside on, outside the core trust zone, we dramatically simplify the task of securing what remains: the servers and server network.
By moving clients outside, we essentially create an enterprise ASP environment. The security policy for protecting the environment is simple; it's the same sort of policy that other ASPs (Yahoo, AOL, MSN, Google, etc.) use to protect their servers and networks. Since everybody is on the outside, everyone can be subjected to strong authentication before accessing the server environment. Security configuration characteristics could be enforced prior to granting access.
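The admission policy described above--strong authentication plus a security-configuration check before any client reaches the server environment--can be sketched as a simple gate. Every field name and threshold here is an invented example, not a prescribed standard; the point is only that a client earns access by meeting the baseline, not by being "inside."

```python
# Hypothetical sketch of the "everybody is on the outside" gate:
# admit a client to the server zone only after authentication and a
# minimal posture check. All fields and thresholds are illustrative.

from datetime import date

def admit(client, today=date(2003, 3, 1)):
    """Admit a client only if it authenticates strongly and meets
    the security-configuration baseline."""
    if not client.get("strong_auth"):           # e.g., token or certificate
        return False
    if not client.get("av_enabled"):            # AV software running
        return False
    sig = client.get("av_signature_date")
    if sig is None or (today - sig).days > 7:   # signatures under a week old
        return False
    return client.get("personal_firewall", False)

laptop = {
    "strong_auth": True,
    "av_enabled": True,
    "av_signature_date": date(2003, 2, 27),
    "personal_firewall": True,
}
print(admit(laptop))  # a compliant client is admitted
```

An unconfigured or unmanaged device simply fails the check and stays outside, which is exactly the simplification the ASP model buys: one policy applied uniformly at the door, instead of per-client defenses scattered across the interior.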
The bottom line is that there would be no general-purpose clients, inherently vulnerable to contamination and attack, on the inside, where they could bring the network to its knees. This approach is not only effective in avoiding catastrophic risk; it's also less expensive than using conventional DMZs and firewalls in a fleeting attempt to secure clients on the inside.
About the author:
John Taylor is a DuPont Fellow for Information Technology (retired). The views expressed in this column are his own and do not reflect those of DuPont.
This was first published in March 2003