David K. Black
Published: 01 Apr 2002
Nineteenth-century satirist Ambrose Bierce once defined history as "a recording of mistakes we make so we shall know when we make them again." This couldn't be truer when it comes to computer security. As we have moved from mainframe to client/server to peer-to-peer networks, security has come full circle on the thorny issue of access control.
Back in the '70s, access control on classic mainframes was a matter of physical security. If you could walk up to the card reader and plop down a deck of punched cards, you could run a program. By and large, the program and its data traveled together on those cards. With the advent of timesharing came the need for the operating system to separate data from code, but this separation wasn't engineered with much rigor. Largely, we relied on mainframe applications to manage access control themselves, which meant access control was revisited, redesigned and re-implemented with every application.
Well, it didn't take very long for smart people to realize that this was a pretty dumb idea. In the '70s and '80s, researchers at the University of California at Davis grappled with some of the defining issues of computer security. Those researchers decided that it made far more sense to have the operating system, or "executive" as it was sometimes called, manage access control, relieving application developers of that burden.
With operating systems came the notion of users and resources. Access control lists established relationships between users and resources, and the operating system had the job of mediating requests made by users to access resources. This meant that access control could be a service supplied by the infrastructure and applications could be considered "untrusted." In so doing, the access control problem would be solved once and for all.
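The mediation described above can be sketched in a few lines. This is a minimal illustration, not any particular OS's implementation; the resource names, users and permissions are invented for the example.

```python
# Sketch of OS-style ACL mediation: the trusted layer checks every request
# against an access control list before any untrusted application code
# touches the resource. All names below are illustrative.

acl = {
    # resource -> {user: set of permitted operations}
    "/payroll.db": {"alice": {"read", "write"}, "bob": {"read"}},
}

def mediate(user: str, resource: str, operation: str) -> bool:
    """Grant the request only if the ACL relates this user to this resource."""
    return operation in acl.get(resource, {}).get(user, set())

print(mediate("bob", "/payroll.db", "read"))   # True
print(mediate("bob", "/payroll.db", "write"))  # False
```

The point of the design is that `mediate` lives in the infrastructure, so the applications themselves can remain untrusted.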
The idea of an "untrusted" application was fairly revolutionary. Today, it's so obvious that we take it entirely for granted. It's elementary that OSes and databases have embedded access controls, and that applications take advantage of these features. This gave rise to the notion of a security "kernel": a small, well-protected, tamperproof mechanism that enforces basic security goals on behalf of untrusted applications.
Did we learn our lesson? If you think about today's three-tiered architecture -- Web server, application server and database server, each separated by a firewall -- you realize that we've come full circle. In most cases, application servers have an embedded username and password that allow them to access any information on the database server. Often, there's no model of access control that transcends a particular application, which means the access control mechanisms go largely unused.
In other words, as our systems have become more distributed and more heterogeneous, we have moved back toward an access control model we had the good sense to abandon 20 years ago -- if you can reach the application, you can run it.
However, there's hope on the horizon. The infrastructure for providing dedicated access control in a distributed environment is evolving. LDAP and the universal adoption of directory services are one example. Through a standard protocol, a directory service supplies multiple applications with information about users, resources and their access rights.
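To make the directory-service idea concrete, here is a toy sketch of an application consulting a shared directory for a user's group membership instead of keeping its own user table. The entries and attribute names are invented; a real deployment would query an LDAP server with a search filter (something like `(&(uid=alice)(memberOf=cn=payroll,ou=groups,dc=example,dc=com))`) rather than a local dictionary.

```python
# Hedged sketch: a directory service centralizes user attributes so every
# application asks the same source. The "directory" here is a stand-in for
# an LDAP server; DNs and attributes are illustrative only.

directory = {
    "uid=alice,ou=people,dc=example,dc=com": {
        "uid": "alice",
        "memberOf": ["cn=payroll,ou=groups,dc=example,dc=com"],
    },
}

def is_member(uid: str, group_dn: str) -> bool:
    """Look the user up in the shared directory and check group membership."""
    for entry in directory.values():
        if entry["uid"] == uid and group_dn in entry["memberOf"]:
            return True
    return False

print(is_member("alice", "cn=payroll,ou=groups,dc=example,dc=com"))  # True
```

Because every application resolves rights through the same directory, the access control model finally transcends any one application.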
Another example is the Security Assertion Markup Language (SAML), an interoperable protocol that can exchange information about access control in a more distributed context. SAML will allow various authorities to transmit and receive assertions about users and transactions that go beyond identity. By defining "attribute assertions" and "authorization assertions" and the queries to go with them, SAML can facilitate a variety of transactions among attribute authorities, authentication authorities and authorization authorities.
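The shape of such an assertion can be sketched with Python's standard XML tools. The element and attribute names below follow the SAML 1.0 assertion vocabulary as commonly described (Assertion, AttributeStatement, Attribute, AttributeValue); treat the exact spelling and the issuer, subject and attribute values as illustrative rather than normative.

```python
# Rough sketch of a SAML-style attribute assertion: an attribute authority
# asserting a fact about a subject that goes beyond mere identity.
# Names and values are illustrative, not taken from the SAML schema verbatim.
import xml.etree.ElementTree as ET

NS = "urn:oasis:names:tc:SAML:1.0:assertion"
ET.register_namespace("saml", NS)

assertion = ET.Element(f"{{{NS}}}Assertion",
                       Issuer="attribute-authority.example.com")
stmt = ET.SubElement(assertion, f"{{{NS}}}AttributeStatement")
subject = ET.SubElement(stmt, f"{{{NS}}}Subject")
ET.SubElement(subject, f"{{{NS}}}NameIdentifier").text = "alice"
attr = ET.SubElement(stmt, f"{{{NS}}}Attribute",
                     AttributeName="role",
                     AttributeNamespace="urn:example:attrs")
ET.SubElement(attr, f"{{{NS}}}AttributeValue").text = "payroll-clerk"

print(ET.tostring(assertion, encoding="unicode"))
```

An authorization authority receiving this assertion could grant or deny access based on the asserted role, without ever having authenticated the user itself.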
The access control problems won't be fixed overnight. It will take time, just as it took time for computer systems to evolve from mainframes to distributed networks. However, meta-directories and standards suggest that there's a way to position access control where it belongs: an integral part of an infrastructure (now a distributed infrastructure) that's universally available to the application level.
When properly implemented and ubiquitously adopted, such tools may finally solve the historic problem of access control. Today's developers and systems designers should take heed of the mistakes made by their predecessors and make use of the new technologies at their disposal. If they do, perhaps we won't make the same mistakes again.
About the Author: David K. Black is a security technologies specialist at Accenture, where he focuses on wireless security solutions, secure Web portal architectures and risk assessment methodologies.