The most amazing thing about the last ten years is how little things have changed technologically. Firewalls, IDSs, worms and viruses, spam, denial of service: they're all still here. Sure, there have been technological advances in both attacks and defenses – phishing is relatively new, for example – but for the most part we're using the same technological defenses against the same technological attacks.
What has changed is the business motivations. Security is now big business, both on the attackers' side and on the defenders'. We're seeing more criminal attackers and more criminally motivated hacking. And, at the same time, IT security is something that the CEO now cares about. This is the engine that forces real technological change. We're already seeing very sophisticated and professionally written malware, and business is starting to demand better security products.

Many have cited compliance mandates--Sarbanes-Oxley in particular--as having the most impact on the security industry. A) Do you agree? B) Does the need for regulation demonstrate a failure of security to carry out its responsibilities?
For years, we in the security industry have been futilely trying to convince companies to pay more attention to their security. Basically, there are two purchasing motivators out there, fear and greed, and security has always been a fear sell. But greed sells much better. Whenever you see slogans like "we take care of security so you can take care of your business," you can be sure that some consulting firm has told that company to try to turn security into a greed sell. It never works. ROI models never convince, either. Security is fundamentally a fear sell, and so it doesn't sell very well.
Regulation--SOX, HIPAA, GLB, the credit-card industry's PCI, the various disclosure laws, the European Data Protection Act, whatever--has been the best stick the industry has found to beat companies over the head with. And it works. Regulation forces companies to take security more seriously, and sells more products and services.
They're so different, it's hard to even make a comparison. Applied Cryptography was a book written for engineers and programmers. It was published in 1993, in the early days of the popular Internet. It was also the zenith of the "crypto wars," when the U.S. government wanted to ban strong cryptography. I wanted to make cryptography understandable and accessible, and foster its ubiquity in computer and communications systems.
Beyond Fear was written for everyone; my mother was my target audience. It was published in 2003, during the height of the post-9/11 stupid-security madness. I wanted to make security understandable and accessible, to foster rational debate instead of fear mongering.

What is the legacy/lasting impact of each of those books?
I think they both accomplished their task. Applied Cryptography did foster the inclusion of cryptography in many, many commercial products around the world. Many of those systems weren't very good, unfortunately, but that's another story. And Beyond Fear has succeeded in helping people understand security trade-offs, debate security options, and--in general--think about security.
Both books are still selling well today, which is a legacy in itself. But the most complimentary thing anyone can say about my books, and my writing in general, is: "You changed the way I think." The most important legacy of Applied Cryptography is the number of people who entered the security industry because they read it, and the most important legacy of Beyond Fear is the number of people who see the world of security differently, and more skeptically, and more rationally, because they read it.

Cryptography was once illegal; now it's encouraged as a business best practice, and some regulations punish companies for not using it to protect data. What was the turning point here, in your opinion?
Electronic commerce was the killer app for cryptography, and that's what forced it out of the shadows and into the mainstream. But really, we won the crypto war because cryptography doesn't matter nearly as much as we thought. Back in the mid-1990s, we thought cryptography would protect our data from outsiders. But the real problems are in computer and network security. It doesn't matter how good your encryption is if the bad guys installed a Trojan on your computer, or a keylogger. Or if they can guess your encryption key. I think the FBI realized, a couple of years before we all did, that cryptography wasn't all that important.

I can remember a presentation you did at RSA about 5 years ago when you predicted big things for cyber insurance and how liability would drive a change in secure software development. What factors have gotten in the way of this happening? Do you think this idea still has legs?
I still give talks about liability and the computer industry. Liability is coming--there's no way we can transition to a traditional, mature industry without it--and with liability comes insurance. It's just taking longer than I expected, probably because things are still moving too fast. I think the transition to software-as-a-service will foster liability more. If you're running my application and hosting my data, it's much more obvious that you're liable if something goes badly wrong. And once that becomes the norm, insurance will step in to reduce that liability risk.

You've written a lot lately about the psychology of security and tried to teach people how to think about security. How can a CISO apply what you've written and researched to tweak or redefine their security programs?
If there's one thing I've learned in all my research into human psychology and how we deal with security, risk, trade-offs, costs, and decision making, it's that people are not rational. On the one hand, this is obvious. On the other hand, we who work with computers like to pretend that people are just like computers: logical, rational, and so on. It's just not true. People make decisions in completely irrational ways, breaking all sorts of rules of logic while doing so. Our brains are weirdly engineered, with overlapping systems, fail-safe overrides, memory glitches, and systemic bugs. And while we are superbly engineered for the cognitive problems that arise while living in small family groups in the East African highlands in 100,000 BC, we're much less suited to 2007 New York.
And when I lecture CISOs on the psychology of security, my goal is for them to leave thinking differently about how people think. Because if you don't get the psychology right, your security system will fail regardless of how good the technology is.
Michael S. Mimoso is editor of Information Security. Send comments on this interview to email@example.com.