Point: Marcus Ranum
In 2007, I wrote an article on execution control in which I explained why antivirus was a dead-end idea and predicted an eventual switchover from blacklisting to whitelisting. I couldn't have been more wrong, so I periodically catch myself wondering if I'm one of a small percentage of the people who "get it," and if the entire security world has its collective head where the sun doesn't shine. Obviously, malware is a big problem and there's not going to be a silver bullet solution to it, but the industry's response to system integrity continues to be ineffective, expensive and a waste of time and energy.
To briefly recap: blacklisting is the oldest algorithm in computer security. Know what's bad, develop a pattern-matching system to detect it, and ring a bell when you detect the pattern. You can earn extra credit for detecting the bad thing just before it happens, and preventing it from happening. In a nutshell, that's what's behind many antivirus, intrusion prevention/detection systems, and spam filters. The whitelisting approach is the opposite -- have a list of authorized/known good things, and permit only those. The effectiveness of blacklisting depends on the depth and accuracy of the blacklist, and the effectiveness of whitelisting depends on your ability to assess what should be allowed on the whitelist.
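The contrast between the two algorithms can be sketched in a few lines of Python. (The hash sets, names and use of MD5 here are purely illustrative, not drawn from any real antivirus product; real systems use richer signatures and signed inventories.)

```python
import hashlib

# Illustrative fingerprint databases. In practice, KNOWN_BAD would come
# from a vendor signature feed and KNOWN_GOOD from an inventory of
# approved, verified binaries.
KNOWN_BAD = {"d41d8cd98f00b204e9800998ecf8427e"}   # blacklist entries
KNOWN_GOOD = {"5d41402abc4b2a76b9719d911017c592"}  # whitelist entries

def fingerprint(program: bytes) -> str:
    """Identify a program by a hash of its contents."""
    return hashlib.md5(program).hexdigest()

def blacklist_allows(program: bytes) -> bool:
    # Default-allow: run anything not known to be bad.
    return fingerprint(program) not in KNOWN_BAD

def whitelist_allows(program: bytes) -> bool:
    # Default-deny: run only what is known to be good.
    return fingerprint(program) in KNOWN_GOOD
```

The asymmetry is visible immediately: a brand-new piece of malware, absent from both lists, sails past the blacklist check but is stopped cold by the whitelist check. That is why the blacklist's effectiveness depends on the depth of the bad-list, while the whitelist's depends on how well you curate the good-list.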
One of the standard complaints against whitelisting is that it's too difficult to manage the whitelist. That may be true, but it's also difficult to manage the occasional outbreaks of malware and targeted malware that slide past blacklisting systems. I don't think organizations make an effective assessment of the time spent managing their runtime environments -- as the tax man says, "You can pay me now, or you can pay me later." When you add the cost of data leaks and customer data leak notifications, it seems absurd to me that so many enterprises continue to treat their runtime environments as "anything goes."
In the real world, we see whitelisting effectively used for very large, significant applications. Consider passports as an application of whitelisting at a national border: if you're on the authorized-citizens list, you have a passport for the country you want to enter, and if you're carrying an allied passport, you're on the greylist. And, of course, you could be on the blacklist/watchlist. Passports are not anywhere near as accurate as execution controls can be, because they're easier to forge, but obviously they can be managed in the large -- the very large -- with a bit of attention to detail. I think primarily what's lacking is the willpower to fight the political battle of convincing users that "this is a corporate asset, not your personal computer."
Consider another application of whitelisting: app stores. While enterprise IT has continued to blithely assert that taking control over its runtime environment is too difficult, smartphone users have bought into a number of "walled garden" software distribution models. I know there are still users who see that as a "jail," but the infrastructure is slowly getting put in place where maybe, just maybe, we'll see a shift from "trust whatever you run" to "run whatever you trust." If the model fails, it'll fail because a vendor gets greedy and goes for a market lock-out that encourages rampant jail-breaking to circumvent a software monopoly. As we've seen, however, people don't seem to mind a monopoly, as long as it lets them get what they want at a reasonable price with a minimum of effort.
Willpower appears to be in short supply at the enterprise level. Rather than taking control of the desktop, IT managers wring their hands and say "It's too hard!" At the same time, they complain it's difficult to get good employees if you don't let them keep up with Facebook and Twitter all day -- which, if you think about it for a second, is nearly oxymoronic. A couple years ago, I worked on a project involving some massively expensive robotic devices that had been infected because one of the maintenance personnel had malware on his laptop, which he plugged into the robot's control network. Management at the company said it was difficult to control personal use of the laptops, but recognized the impact on its business was extremely expensive. The irony of the whole situation was that they only had six maintenance engineers in the first place -- they had lost millions of dollars in order to save the cost of six locked-down netbooks that could have stayed in the toolbox with the other maintenance gear, instead of in the engineers' briefcases. I see this kind of failure over and over again in the industry: unlocked point-of-sale terminals get drive-by malware and require a customer data exposure notification, an engineer carries Stuxnet into a SCADA network on a laptop, etc.; penny wise and pound foolish, indeed.
What do I think is going to happen? We're going to see the divide between controlled and uncontrolled environments continue to deepen. There have already been many instances of malware in app stores; I'm betting we'll start seeing more interest by app store providers in vetting the software better. Meanwhile, enterprise IT will keep getting owned, as antivirus technology falls further and further behind. The endgame may come in a few years, when -- after a great deal of nail-biting -- security technologists are in the unpleasant position of having to recommend executives use an iPad instead of a laptop -- switch to an embedded device instead of a general-purpose operating system, and keep your e-mail "in the cloud" rather than on the device. In other words, the antithesis of everything many of us currently think makes sense. It could happen. And I'm sure Bruce (and plenty of you) will be happy to let me know I'm wrong.
Marcus Ranum is the CSO of Tenable Network Security and is a well-known security technology innovator, teacher and speaker. For more information, visit his website at www.ranum.com.
Counterpoint: Bruce Schneier
The whitelist/blacklist debate is far older than computers, and it's instructive to recall what works where. Physical security generally works on a whitelist model: if you have a key, you can open the door; if you know the combination, you can open the lock. We do it this way not because it's easier -- although it is generally much easier to make a list of people who should be allowed through your office door than a list of people who shouldn't -- but because it's a security system that can be implemented automatically, without people.
To find blacklists in the real world, you have to start looking at environments where almost everyone is allowed. Casinos are a good example: everyone can come in and gamble except those few specifically listed in the casino's black book or the more general Griffin book. Some retail stores have the same model -- a Google search on "banned from Wal-Mart" results in 1.5 million hits, including Megan Fox -- although you have to wonder about enforcement. Does Wal-Mart have the same sort of security manpower as casinos?
National borders certainly have that kind of manpower, and Marcus is correct to point to passport control as a system with both a whitelist and a blacklist. There are people who are allowed in with minimal fuss, people who are summarily arrested with as minimal a fuss as possible, and people in the middle who receive some amount of fussing. Airport security works the same way: the no-fly list is a blacklist, and people with redress numbers are on the whitelist.
Computer networks share characteristics with your office and Wal-Mart: sometimes you only want a few people to have access, and sometimes you want almost everybody to have access. And you see whitelists and blacklists at work in computer networks. Access control is whitelisting: if you know the password, or have the token or biometric, you get access. Antivirus is blacklisting: everything coming into your computer from the Internet is assumed to be safe unless it appears on a list of bad stuff. On computers, unlike the real world, it takes no extra manpower to implement a blacklist -- the software can do it largely for free.
Traditionally, execution control has been based on a blacklist. Computers are so complicated and applications so varied that it just doesn't make sense to limit users to a specific set of applications. The exception is constrained environments, such as computers in hotel lobbies and airline club lounges. On those, you're often limited to an Internet browser and a few common business applications.
Lately, we're seeing more whitelisting on closed computing platforms. The iPhone works on a whitelist: if you want a program to run on the phone, you need to get it approved by Apple and put in the iPhone store. Your Wii game machine works the same way. This is done primarily because the manufacturers want to control the economic environment, but it's being sold partly as a security measure. But in this case, more security equals less liberty; do you really want your computing options limited by Apple, Microsoft, Google, Facebook, or whoever controls the particular system you're using?
Turns out that many people do. Apple's control over its apps hasn't seemed to hurt iPhone sales, and Facebook's control over its apps hasn't seemed to affect Facebook's user numbers. And honestly, quite a few of us would have had an easier time over the Christmas holidays if we could have implemented a whitelist on the computers of our less-technical relatives.
For these two reasons, I think the whitelist model will continue to make inroads into our general-purpose computers. And those of us who want control over our own environments will fight back -- perhaps with a whitelist we maintain personally, but more probably with a blacklist.
Bruce Schneier is chief security technology officer of BT Global Services and the author of Schneier on Security. For more information, visit his website at www.schneier.com.