Marcus Ranum: Richard, thanks for taking the time to talk; it’s been a while and we’ve got a lot to catch up on!
In the last couple of years we’ve seen a marketing push over advanced persistent threats (APT), a campaign attacking security companies, and the Aurora/Shady Rat attacks. I assume you’re still a fan of network security monitoring? It’s always seemed to me we, as a community, have been cutting costs in the wrong place. We need more monitoring, analysis and brainpower. Where do you see things going?
Richard Bejtlich: It's been quite a ride the last few years, indeed. Overall, I think there’s a growing sense that becoming an intrusion victim is a possibility for lots of organizations, and a certainty for many depending on the sector and assets at stake. Enough of a variety of organizations have been compromised that many executives are asking, “Are we next? How would we know?” and similar tough questions. As a result, we're seeing increased interest in “Are we compromised?” assessments, rather than “Are we vulnerable?” assessments. It’s one thing to have holes, but quite another to determine an intruder is actively exploiting them.
Marcus: That's the name of all of our pain, it seems. I don’t think a week has gone by where I haven’t gotten a question in the form of, “What do we do about APT?” It seems a lot of organizations sort of declared victory at the point where they could get a rudimentary handle on malware, but very few have the right mindset and tools in place to detect a seriously professionalized attack. I’m sure everyone in the security community had a moment of serious self-assessment when the RSA and HBGary breaches happened -- it's certainly made me reassess what “good enough” security really is. Obviously one piece of the puzzle is designing your processes to withstand attack -- but let's talk about detection: Are you still as much of a proponent of security monitoring as you used to be?
Richard: Yes, monitoring is one way to achieve visibility. Visibility helps in several ways: 1) Visibility should guide your defenses toward countering actual threats to actual vulnerabilities, rather than theoretical threats to assumed vulnerabilities; 2) Visibility should tell you when your defenses have failed, so you can conduct incident response; and 3) Visibility should provide metrics and assessment of all aspects of the risk equation. Note, I say “should” for all those elements. Many assume deploying a SIEM, logging packets dropped by the firewall, or running alert-centric intrusion detection systems provide “visibility” when they likely do not.
Marcus: I love the way you think about this stuff! You've touched on a ton of important issues in that one response. What you're really talking about is an information ecosystem in which all the components feed backward and forward into each other. If you've got an idea what should be happening, then you can invert that to get an idea of what shouldn't be happening. If you’ve got an idea where you're succeeding, then maybe everything else ought to be examined more closely, because it is a possible failure. I completely agree with you that a lot of organizations go out and buy a SIEM, then wait for magic to happen, but it doesn't, because that information ecosystem is not populated with the knowledge that is necessary in order to achieve the benefits of having the SIEM in the first place. I get very disappointed when I hear discussions about SIEM begin and end with, “What reports does it give me about what it’s able to learn about my network?”
It seems to me the direction this needs to go is to figure out how to build policy- and purpose-centric systems that let us loosely specify what ought to happen on our networks, then subtract “what probably ought to happen” from “what is happening” and look closely at what's left. I have one friend who is working on getting his HR department to forward the security team information about the job purpose of employees, so they can try to construct positive forward-looking activity maps. After all, if someone was hired to be in the research team, their internal network connectivity ought to mostly be with systems in the research cluster, etc. Do you think that kind of approach is going to work? Or do you think it'll be bypassed as "too much work?"
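The "subtract expected from observed" idea above can be sketched in a few lines. This is a minimal illustration, not anyone's production tooling: the role-to-cluster map and the flow tuples are invented placeholders for what an HR feed and flow logs might supply.

```python
# Hypothetical sketch: subtract "what probably ought to happen" from
# "what is happening" and surface what's left. All names and data are
# illustrative, not from any real environment.

# Expected destinations per job role (e.g. built from an HR feed).
expected = {
    "research": {"research-cluster", "mail", "intranet"},
    "finance":  {"erp", "mail", "intranet"},
}

# (user, role, destination) tuples as they might come from flow logs
# joined against the HR role data.
observed = [
    ("alice", "research", "research-cluster"),
    ("alice", "research", "erp"),            # outside her expected map
    ("bob",   "finance",  "mail"),
]

def anomalies(observed, expected):
    """Return flows that fall outside each role's expected destinations."""
    return [(user, role, dest)
            for user, role, dest in observed
            if dest not in expected.get(role, set())]

print(anomalies(observed, expected))
# [('alice', 'research', 'erp')]
```

What's left after the subtraction is exactly the small residue a human should examine closely; everything that matched the activity map never reaches an analyst.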
Richard: I like the sound of that approach, but I do think it will be considered "too much work." It seems like correctness, at multiple levels -- platform, application, usage -- would be a more successful methodology. However, I don't know how possible it is for security to determine what is correct. If you ask business owners, they usually can’t describe how their assets should be used. One idea would be to have new systems deployed with documentation describing expected usage. Unfortunately, locations that demand such documentation, such as industrial automation and control systems, often lack documentation too! It’s probably more realistic to deploy tools that do a good job describing what live systems do, and then involve humans who can say, "Yes, that's ok" or "No, that is weird."
Marcus: What you’re talking about there is fast-feedback workflow systems. Do you know anyone who's made any good progress with that approach? I used to daydream about trying to see how far that model could be pushed if every effort was made to make the workflow update as easy and rapid as possible:
"Are you interested in this event? [yes] [no] [always events like it] [never events like it] [involving this system] [not involving this system] [involving this application]" etc.
I notice a lot of the stuff we're talking about here is not part of the commercial mainstream of security, because it's thought and effort-intensive; customers want to buy something they can install, which requires no tuning and no analysis.
Richard: When you mentioned that approach, it reminded me of a host-based firewall that asks if you want to allow application X to connect out to IP Y, or listen on port Z. It also reminds me that most of our analytical tools don't allow the application of analyst knowledge through integration. For example, you can't usually "tag" NetFlow logs or "mark" packets such that other people can learn or make decisions. Some commercial tools probably do that, but the majority of tools and techniques are still for rendering only, not knowledge integration and decision support -- never mind the possibilities of "social analysis" where groups analyze data together. There's a good research project for someone!
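The "tag a flow record so other people can learn from it" idea could be as simple as attaching analyst-authored labels to records. A minimal sketch, with invented field names and tags:

```python
# Hypothetical sketch of tagging flow records so analyst knowledge travels
# with the data instead of living only in one person's head. The record
# fields, tag names, and addresses are illustrative.

flows = [
    {"src": "10.0.0.5", "dst": "203.0.113.9",  "port": 443, "tags": set()},
    {"src": "10.0.0.7", "dst": "198.51.100.2", "port": 22,  "tags": set()},
]

def tag_flows(flows, predicate, tag, analyst):
    """Attach (tag, analyst) to every flow record matching predicate,
    so later viewers see both the judgment and who made it."""
    for f in flows:
        if predicate(f):
            f["tags"].add((tag, analyst))

# One analyst marks traffic to a suspect host; the tag is now visible
# to anyone else querying the same records.
tag_flows(flows, lambda f: f["dst"] == "203.0.113.9", "known-c2", "analyst1")

tagged = [f for f in flows if f["tags"]]
print(tagged[0]["dst"])  # 203.0.113.9
```

The "social analysis" extension would layer discussion and concurrence on top of these shared tags -- the research project Richard mentions.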
Marcus: One final question: Are you working on any more books? I still feel your extrusion detection book is full of important concepts that more people should read. What are you up to these days?
Richard: I would like to write one or two more books. I posted some ideas to my blog. Basically, I'd like to write another technical book, but then also write more of a strategy book.
Marcus: Thanks so much for your time!