Published: 01 Feb 2002
Question: In your book, Security Engineering (John Wiley & Sons, 2001), you wrote that it's quite common for designers to secure the wrong things. How do you think that system designers can develop the ability to know what the right thing is?
Answer: Well, ultimately, it comes down to experience. One of the reasons I wrote the book is to make a lot of case histories available. Unfortunately, security is a business driven by fashions, and at the moment the fashion is for messing around with firewalls, virtual private networks (VPN), worrying about stack smashing attacks ... this "evil hacker on the Internet." But, of course, the real world isn't like that.
The fashion has been for different things at different times. Sometimes the evil hacker has been the figure used to justify security budgets. In the early 1980s, the technology of choice was the dial-back modem. Computer viruses came along, and people forgot about the hacker. He was no longer necessary as a justification for the information security department's existence. Suddenly, it became their task to sweep all incoming floppy disks for viruses.
Q: How have your philosophies about infosec evolved over the last 15 years?
A: At the beginning, I took a very technical view of things. I was very interested in details of ciphers and protocols and their implementation. As time goes on, as I see more things failing, I place more emphasis on system-level aspects. Most recently, I've been looking at the economics of information security, because it has become clear to me that many systems fail because the people who are in a position to protect them have no incentive to protect them.
A good example is here in the U.K. where, unlike in America, if you have a dispute with your bank over whether a particular cash machine transaction was made, then generally the onus is on you to prove that you didn't make it, whereas in America, the onus is on the bank to prove that you are trying to defraud them by making this claim. So, what happens is that in Britain, bank staff know that if they rip off customers' accounts, the complaints won't be taken seriously. So they loot the system -- a classic example of a perverse incentive.
1992 -- Joined Cambridge University, where he holds a faculty post as Reader in Security Engineering and leads the security group at the Computer Laboratory.
1993 -- Wrote a definitive paper on the ways in which automatic teller machines are defrauded. Also wrote the paper, "Why Crypto-systems Fail," one of the seminal works on the difficulties of creating a secure encryption system.
1995 -- Coauthored "Programming Satan's Computer" with Roger Needham, a paper that, by describing a system designed to be deliberately deceptive, provides insight into the difficulties of creating viable encryption protocols.
1996 -- Proposed a filestore, "The Eternity Service," that's highly resistant to censorship and sabotage by being distributed over the entire Internet. The system later provided the inspiration for P2P systems, such as Gnutella and Freenet.
1997-1998 -- Codeveloped Serpent, an encryption algorithm that was a finalist in the competition to become the Advanced Encryption Standard.
2001 -- Published Security Engineering: A Guide to Building Dependable Distributed Systems, which uses case histories to teach the principles of security.
Q: Your presentation at the Computer Security Applications Conference last December discussed microeconomic incentives. Are corporate infosec managers economically motivated to exaggerate the security threat?
A: Sure. And it's entirely unclear to me that companies spend too little on information security. I suspect that very many of them spend too much. I realize that this is very much a contrarian view, and that there's a huge industry of people talking up various obscure threats and attacks. But, by and large, markets get it about right.
Q: There doesn't seem to be a strong indication that the problem is severe then?
A: Well, indeed. And given the amount of near-hysterical pressure selling -- for whatever happens to be the technology of the day, whether it's a firewall, PKI or dial-back modem -- and given that such technologies tend not to address the main operational threats to business, you can make a strong case that many companies spend too much, rather than too little.
Q: Is it a further case of protecting the wrong thing?
A: Oh, absolutely. In many cases, the security manager may be quite cynical about this. He may say, "Well, my main task is to defend the company against dishonest insiders, but I can't say that to the board of directors, because they are fiercely protective about their staff. So I have to tell this great tale about evil 14-year-old Argentinean hackers who will take the company completely to pieces if we don't spend $10 million on technologies X, Y and Z." Getting the budget, he spends $8 million of it on proper internal controls. That is perhaps a valid strategy in some circumstances, but I have my doubts as to whether it's always that clean and that cynical.
Q: Why are you putting so much emphasis on economics?
A: Among all the social sciences, the one with the greatest array of quantitative tools, and the one that appears to be the most directly relevant to information security, is economics. There's a whole bunch of different aspects of economic analysis that can be relevant. For example, the theory of games: sequential games between pirates and enforcers. Or, you can look at techniques that come from particular specialties of economics, such as environmental economics. Consider the analogy between the insecurity of the Internet and environmental pollution. Many insecure systems represent a cost that one can dump onto other people. If your system is hacked and used to attack other people, and you don't end up being fully liable for the costs of that, then, in effect, you've managed to dump some of the costs of your behavior on to others, in the same way that you do when you dump toxic waste into the river.
Q: What about the debate over disclosure? Can economics show us the best way to locate and publicize vulnerabilities?
A: People have been doing economics on this since the early days of reliability growth modeling. For example, there's a 1984 IBM paper that concluded the right way to deal with bugs was to fix them once eight separate people had reported them. At a very simplistic level, that obviously gives you one kind of economic model.
Another kind of model that we've been working with is how you can do statistics with large populations of bugs. If you have a product that has enough bugs in it to do statistics, then there are a number of things you can say. You can look at ways of adding up the individual Poisson statistics of the individual bugs in order to make a reliability growth model that works for the whole system. What we find is that systems become "reliable" an awful lot more slowly than you might hope.
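The flavor of this kind of model can be shown with a minimal simulation. This is an illustrative sketch, not the model from the interview: it assumes a system with many latent bugs, each one triggering failures as an independent Poisson process with its own rate, where a bug is found and fixed at its first failure during testing. The bug count and the exponential distribution of rates are assumptions chosen for illustration.

```python
import random

random.seed(42)

# Hypothetical setup: N latent bugs, each with its own Poisson failure rate.
# Rates are drawn from an illustrative Exponential(1) distribution, so a few
# bugs fail often while a long tail of bugs fail very rarely.
N = 100_000
rates = [random.expovariate(1.0) for _ in range(N)]

# A bug with rate r is first encountered (and then fixed) after an
# Exponential(r) amount of testing time.
found_at = [random.expovariate(r) for r in rates]

def residual_failure_rate(t):
    """Summed Poisson rates of all bugs still unfound after t units of testing."""
    return sum(r for r, d in zip(rates, found_at) if d > t)

for t in (0, 1, 10, 100, 1000):
    print(f"t={t:>5}: residual failure rate {residual_failure_rate(t):.1f}")
```

Running this shows the qualitative point in the interview: the total failure rate falls quickly at first as the common bugs are found, but the long tail of rare bugs makes it decay polynomially rather than exponentially, so the system becomes "reliable" far more slowly than intuition suggests.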
Q: You make a strong case that the Internet attacker has a strong advantage over the defender. Can this be changed?
A: Well, it doesn't really matter. One analogy that I put forward in my book is that the Internet is very much like the large herds of gnu and other antelope that once wandered around Africa. If you are part of a herd of 50 million antelope, then the fact that there are 100,000 lions circling the edge of the herd doesn't matter all that much. The odds of any given gnu being eaten are acceptably low. But if you are one of the animals that wanders a bit outside of the ring, or if you are one of the big trophy animals with spectacular antlers, you can find yourself at a severe disadvantage.
Q: Will government health care privacy initiatives -- such as the Health Insurance Portability and Accountability Act (HIPAA) in the U.S. and similar laws in the U.K. -- encourage the development of new security techniques or technology?
A: The interesting thing about health care is that the threats are almost all internal rather than external, essentially because all abuses of private medical information are by authorized insiders. The people who have the power to compel -- insurance companies in America, the government in many European countries -- help themselves to data, which they then use for purposes that many patients might not approve of. They give it to suppliers; they sell it to drug companies ... whatever.
The information security problem is basically a problem of politics and regulation, rather than technology. Even if you were to encrust all of your medical systems with all sorts of fancy firewalls, encryption and goodness knows what, that wouldn't fix the problem. The problem is that somebody who has a login to the system is passing the data to somebody that you, the patient, would prefer not to have it.
Q: Do you think that people can be fixed?
A: Unfortunately, what may happen more and more is that unnecessary security technologies will be deployed as a smoke screen. The government will say, "We have given you a medical smart card, and the smart card is tamperproof; therefore, your health records are secure." And meanwhile, of course, the government copies of your health records will be sold to drug companies, and you'll have no say in the matter whatsoever. And if you protest, you'll simply be told that you have a smart card, and therefore your medical records are secure. This is the kind of thing that goes on, and, to a very large extent, "security" is used in such applications as a means of bamboozling the customer.
Q: In Security Engineering, you discuss the tachograph, which seems like a very low-tech device. What's the relevance of such technology to someone who's designing an e-commerce system? (In Europe, all trucks are required to have a tachograph, which creates a paper record of their speed and hours of operation.)
A: It's a classic example of an analog system that worked fine, but when it was replaced with a digital system, it worked an awful lot less well. The deficiency is masked by people putting in things like smart cards and saying, "This is secure. You see, it's a smart card; therefore, it's secure because smart cards are called 'secure smart cards.'"
European governments had a system for monitoring bus and truck drivers' working hours that worked fine. But rather than leaving well enough alone, they put forward a kludge, for political reasons, and justified it by sprinkling some security dust on it. I think there are likely to be many, many cases in e-commerce where the digital replacement of old manual systems isn't an improvement -- certainly not from the customer's point of view.
Q: Do we need to rethink the old manual systems and replicate them electronically in a secure way, or do we need entirely new processes?
A: Well, what often happens when you automate a system is you take a manual system and you automate it. That usually works -- to some extent. But it may not work as well. As in the case of the tachograph, you lose a whole lot of analog data that is useful for audit and analysis purposes. And once you have a new digital system with new capabilities, you discover new ways of using these capabilities. But very often when people replace a manual system with an automatic one, what they are trying to do is pursue some other agenda, such as cutting the number of staff at the company -- "business process reengineering" or whatever -- and often things fail rather spectacularly because of the resistance this engenders.
These system-level issues are very, very important in security engineering, because what if you replace an established way of doing things that people more or less get on with, with one that they violently detest? You can then expect the amount of insider fraud and malfeasance to shoot up.
Q: What areas of research need to be done, but no one is interested in doing?
A: I think that we've largely sorted out cryptography. Sure, there's more we can do, but we've got tools that work. Cryptographic protocols, we're just about beginning to understand. Those people who have made a career out of studying cryptographic protocols are now able to design fairly good ones. Assurance of cryptographic devices -- and information systems in general -- is still a hard problem, but a fair amount of progress is being made on it. Those technical aspects that are doable are being worked on. But the system aspects, where most things actually fail, aren't being worked on anywhere near as much.
There are many reasons for this. There's inertia -- huge inertia -- in the research community; most people only ever do research work on their thesis topic. And, as general advice for scientists, not just for security people, the average scientist should probably be trying to work on some different topic. People do tend to get stuck in ruts. And that's tied up with the way academia is run, the way people are selected, the way they are hired, the way they get promoted, the way the social enterprise of science works.
About the interviewer:
Jay Heiser, CISSP, is an information security officer in the headquarters group of a large European financial institution. He is the coauthor of Computer Forensics: Incident Response Essentials (Addison-Wesley, 2001).