
Q&A: Rethink compensating controls, says Warner Bros. CISO

It is not hard to make the shift from independent controls for defense in depth to interlocking strategies, Ron Dilley tells Marcus Ranum, but careful planning is required.

This article can also be found in the Premium Editorial Download: Information Security magazine: Incident response procedures speed discovery-response time:

Ron Dilley is the chief information security officer at Warner Bros. Entertainment Group of Companies, the studio that brought us the Harry Potter films, Mad Max: Fury Road, and the fantasy trilogies Lord of the Rings and The Hobbit. In an industry that relies on intellectual property and digital distribution, Dilley is responsible for the studio's overall security posture and risk management. He also manages the security operations center for all Time Warner divisions.

Dilley is well-prepared for the drama that unfolds in a content business built on films and television: He has spent more than 15 years managing risks, solving complex problems and developing strong security teams. "It is time to shake up the de facto standard set of security controls," he says. A thought leader involved in multiple security initiatives -- including a few projects with this column's author, Marcus Ranum -- Dilley has recently focused on data parsing, analysis, correlation and visualization. Ranum caught up with Dilley to talk about the changing dynamics behind compensating controls and "interlocking" approaches to layered security.

We throw around a lot of terms like defense in depth, but I don't think we know what we mean, or mean what we say. The industry is focused on having lots of ways of detecting malware on our systems, but we avoid the more obvious approaches, such as controlling the runtime environment. It doesn't sound like there's a lot of 'depth' to most defenses. What do you think?

Ron Dilley: I agree. It seems like defense in depth has moved from information security jargon to buzzword in recent years. To me, it really comes down to taking a holistic approach to selecting information security controls that focuses on how controls can work together to make the whole more effective than the sum of the parts. We should stop saying 'independent controls' when we talk about defense in depth. Instead, why not focus on interlocking controls?

Your example of controlling the runtime environment is a great idea and a powerful means of resisting an attacker. That said, controlling some of the runtime environments is not enough. We need to either control all of the runtime environments, or segregate the ones that we can control, then limit access for the ones that we can't control.

Can you give me an example of a set of compensating controls? I like the model where the first control provides protection, and the compensating controls offer more protection, and detection of failure in your first control. How does something like that get arranged in practice?


Dilley: It is less about duplicating controls like two brands of antivirus or two layers of firewalls and more about picking controls that support or enhance each other. To use your example of controlling the runtime environment, I really like the simplicity and effectiveness of application whitelisting. But it is 'really hard' to deploy in most organizations because desktop environments change too frequently.

On the other hand, if you 1) use a system management platform to control your desktops, then 2) limit administrator access to reduce the variability created by end-user-initiated desktop changes, and 3) include application whitelisting as one of the services installed by the system management platform, you remove what makes it really hard. This combination is an example of interlocking controls. And, arguably, you could add segmentation as a fourth control to allow for segregation of 'controlled' runtime environments from 'uncontrolled,' further enhancing the effectiveness of all four controls and your overall security posture, at the same time.
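The whitelisting piece of that combination can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: the approved-hash set stands in for the allowlist a system management platform would distribute, and the names are hypothetical.

```python
import hashlib

# Hypothetical allowlist as a system management platform might
# distribute it: SHA-256 hashes of approved executables.
APPROVED_HASHES = {
    # SHA-256 of an empty file, used here purely as an example entry.
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def file_sha256(path):
    """Hash a binary on disk in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def may_execute(path):
    """Allow execution only if the binary's hash is on the allowlist.
    Anything not on the list is denied by default -- the simplicity
    Dilley points to."""
    return file_sha256(path) in APPROVED_HASHES
```

The interlocking point is that the allowlist stays manageable only because the other two controls (managed desktops, limited admin rights) keep it from churning.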

So often, when I talk about this stuff, someone whines, 'but that's hard' … as if getting 'owned' constantly is easy and doesn't come with its own associated costs. How do you sell your kind of model to management?

Dilley: The bane of security practitioners across the globe is the misperception that implementing reasonable security controls is harder than suffering through the effort of supporting unreasonable security. And that effort is not just the cost of responding when a control fails; it is also the day-to-day cost of supporting the business, IT and operational organizations when things need to get done in a safe way. Helping organizations understand this is the key. Granted, there may be contractual or regulatory obligations that trump objections about difficulty or cost, but the most effective and long-lasting way is to educate management on the benefits of maintaining reasonable security. That education needs to include insights about the nature and cost of real-world threats; strong messaging that reasonable security is a journey, not a destination; and the business advantages that come with a strategic and holistic security program.

One other aspect I like about interlocking controls is that your backup or compensating controls become error detection and, thus, policy violation or intrusion detection. We were talking about how, if you use privileged access management (PAM), you can then whitelist administrative logins to the system's address set. Attempts to log in as an administrator from somewhere else are almost certainly not your actual administrator, because they're not following the program. In order to detect policy violation, you have to first define what is and isn't appropriate. That seems so hard!

Dilley: Security-control effectiveness tends to degrade over time, due to the constant changes in technology and threats. Additionally, the bad guys are always looking for a way over, around, under or through those controls. A lack of error detection, as you put it, can greatly increase the velocity of that degradation. Unfortunately, not all backup controls make good error detectors. I take advantage of pairing compensating controls that do offer that attribute wherever I can.

PAM that uses some form of authentication gateway or proxy exemplifies this [approach] and shows that what seems so hard in a cursory discussion can be dead simple. First, the PAM proxy can force all authentications to originate from a known set of systems and addresses. This creates an authentication canary of a sort, as you mention. Any attempted administrative authentication that comes from someplace else is either someone who needs to be educated and added to the PAM or a bad guy. The former will dwindle to zero as the PAM migration progresses, leaving a very appealing signal-to-noise ratio, which you know that I absolutely love to have in any error detection system. Rolling out this type of PAM is not hard because it only impacts administrative staff, who understand technology and can be easily trained up on the new PAM system.
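The canary logic itself is nearly trivial. Here is a minimal sketch under assumed conditions: the network range is hypothetical, and a real deployment would pull the PAM proxy's addresses from inventory rather than a hard-coded list.

```python
import ipaddress

# Hypothetical networks the PAM proxy authenticates from;
# this range is an assumption for illustration only.
PAM_PROXY_NETS = [ipaddress.ip_network("10.20.0.0/24")]

def classify_admin_login(source_ip):
    """Admin authentications not originating from the PAM proxy are
    the canary: either a user who still needs PAM onboarding, or an
    attacker who is not following the program."""
    addr = ipaddress.ip_address(source_ip)
    if any(addr in net for net in PAM_PROXY_NETS):
        return "allowed"
    return "canary-alert"
```

As the migration completes, the "needs onboarding" case dwindles to zero, which is exactly why the remaining alerts carry such a strong signal.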

An interesting enticement to get admin staff off the fence and on your side is that once you have implemented this type of control, administrative credential changes can be automated and frequent. This makes it harder for the bad guys and easier for the administrators. That said, when I can't pair up information security controls this way, I make sure that error detection and error correction are included as a mandatory requirement and attribute of each control.

You and I have discussed many information security control pairings that provide this type of error detection. Other than PAM, our chat about pairing endpoint security controls like antivirus, application whitelisting and advanced malware detection agents with system management was fun. The agents provide error detection for activities that can break a system's controlled configuration; and conversely, the system management service helps to ensure that the agents continue to run effectively.
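One half of that endpoint pairing -- the system management side noticing that an agent has gone quiet -- can be sketched as a heartbeat check. All names and the timeout are illustrative assumptions, not any product's behavior.

```python
import time

# Hypothetical heartbeat registry: host -> last report timestamp.
agent_heartbeats = {}
HEARTBEAT_MAX_AGE = 300  # seconds; an assumed reporting interval

def record_heartbeat(host, now=None):
    """Called each time an endpoint agent checks in."""
    agent_heartbeats[host] = now if now is not None else time.time()

def hosts_with_dead_agents(hosts, now=None):
    """System management side of the pairing: flag hosts whose agent
    has stopped reporting -- possible evidence the agent was disabled
    or the controlled configuration was broken."""
    now = now if now is not None else time.time()
    return [h for h in hosts
            if now - agent_heartbeats.get(h, 0) > HEARTBEAT_MAX_AGE]
```

The converse direction -- the agent verifying the host still matches its managed baseline -- closes the loop and gives each control error detection for the other.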

Do you keep any kind of metrics regarding security outcomes? I'm not trying to get any embarrassing details from you, but can you say anything more than just 'it works'?

Dilley: Metrics is another one of those interlocking controls that provides error detection, and to be honest, metrics that highlight the errors are much more interesting. I am also reminded of one of your quotes in the talk [you gave] when we first met: 'The frequency of the unimportant can be interesting.' How many information security controls have you ever come across that were both effective and lacked some form of measurement and reporting?
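The 'frequency of the unimportant' idea is easy to operationalize: count every event type and surface the rare ones, since in a mostly routine log the low-frequency entries are often the interesting errors. A minimal sketch, with an assumed threshold:

```python
from collections import Counter

def rare_events(events, threshold=2):
    """Return event types seen at most `threshold` times.
    In a log dominated by routine activity, the rare entries are
    the ones worth a human's attention."""
    counts = Counter(events)
    return sorted(e for e, n in counts.items() if n <= threshold)
```

For example, fifty routine logins would be suppressed while a one-off root login would surface.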

You've accomplished major change, in a field where a lot of us probably think that major change means getting a significant percentage of system administrators to use two-factor authentication for privileged access.

I'm trying to interlock your answer to the preceding question with this one: Was it worth it? When I see companies bleeding gigantic amounts of money dealing with incident response and breach notification, I keep thinking 'whitelisting doesn't look as hard as dealing with that!' Or 'maybe system administrators wouldn't complain about centralized two-factor authentication if they had experienced a disaster like that.' How do you make the case that it's worth it?

Dilley: Don't insult two-factor authentication for administrators; having it is a great security control and we should all push to make it a best practice.

Dilley: The way to get to major change is to avoid trying to boil the ocean. If every security control change is really an incremental, minor change, then it becomes achievable without unreasonable effort or cost. It is important to invest the time to communicate that. It really helps when stakeholders understand that the information security controls you are implementing were specifically selected to deliver incremental benefit as each portion of the change is completed. It also allows for incremental reduction in the risk profile.

Reasonable security is not an end-state, so I don't know if it is possible to ever get to a place where you can plant a flag and say we are done, tally the cost and shout, 'It was worth it!' Instead, it is a journey where we should be constantly reviewing where we were, where we are and where we should be heading as requirements and the threatscape change. Part of that process is determining if we need to change direction, velocity and resource requirements. It can be very satisfying to hear your senior executives explaining to their peers that information security is a process, not an end-state.


This was last published in March 2016
