Published: 03 Oct 2016
As the conversation about government use and abuse of cyberweapons continues -- following the Shadow Brokers' disclosure of a cache of advanced cyberweapons and reports of repressive regimes using lawful interception product Pegasus -- it's a good time to find someone with the right credentials to explain the technical and political aspects of the debate.
Nathaniel Gleicher may be the ideal interlocutor: Currently head of cybersecurity strategy at Illumio, Gleicher brings along a unique set of qualifications. Prior to joining Illumio, Gleicher was director for cybersecurity policy on the National Security Council (NSC) at the White House for three years, and before that he was a computer scientist turned lawyer who prosecuted cybercrime for the Department of Justice.
Gleicher spoke with editors at Information Security magazine about cybersecurity challenges, gave some insight into the Shadow Brokers' cyberweapons dump, highlighted the importance of having visibility into networks that need to be protected and discussed other trends in cyberdefense.
What were the major issues you faced working with the National Security Council at the White House?
One of the things that you see repeatedly is the continuing challenge to stop the march of breaches. You have the Sony breach where you have damage being caused to systems, you have other breaches where you have theft of information and you have more recent breaches like the DNC hack where you have attempts by foreign governments to actually influence public debate. But the common thread in all of them -- and a major issue that we were focused on at the NSC -- is that every single one of these breaches relies on lateral movement. What I mean is that, in each breach, the intruder gets across the perimeter at a low-value environment and moves laterally to a high-value environment where they can cause damage.
If you could make lateral movement harder, you would make it much more difficult to pull off a successful breach. One of the things we talked about quite a bit at [the] NSC is the sheer number of breaches. You'll see statistics like 2,200 breaches in 2015. Those numbers, they're not that surprising because [breaching] is fairly easy. And [a] breach is actually easy in the physical world as well: Anyone could go to D.C. and jump the fence at the White House and get onto the lawn. The question is what happens when you hit the grass and the trained Secret Service agents with their canines take you down.
The breach is not the part that's hard; it's surviving and thriving once you're inside. The much more disturbing statistic for me is dwell time, which at this point is, I think, 145 days on average that an intruder can live within a compromised system. And if it's easy to get in -- and it's probably always going to be easy to get in, at some level, for a determined attacker -- then the worst outcome is intruders having the run of the place once they get inside. If you could make that harder, that is where you start to truly turn up the dial on cost for intrusions.
It sounds like you need to build a maze in your network, with a dead end so that you can stop them.
Part of it is about actually stopping them, but anyone who tells you that there is a single security solution that can guarantee the security of a system is probably lying or is selling something.
You just want to make life harder for the attacker, and you want to make it take longer. The interesting thing is that once they get inside, they're living inside your network and every move they make risks exposing them, so they only need to slip up once to get caught.
Can you comment on the NSA/Equation Group cyberweapons dump? Should we think of Shadow Brokers as being an insider, is it likely Russian spies or is there some third option?
It's hard to tell from the outside, and there's a tendency to jump to conclusions, and I don't think we know. We've seen some interesting technical assertions that this is likely information collected from an intermediate hop point somewhere, which would suggest that it was an outsider.
The actions of a sophisticated outsider who will make their way inside a network start to look similar to the actions of a malicious insider, and it can be hard to tell -- the major difference being you would expect an outsider doesn't know where they're going and an insider knows exactly what they're looking for. But both of them are going to be surfing through the network and taking credentials, either their own credentials or compromised credentials, and using the authority of those credentials to get the information out.
It's not 100% easy to say what the answer is … I think the notion that it's an outsider getting in is more likely, particularly if you're talking about something that was compromised at an intermediate hop point. But in some ways, if you look at the tools and techniques in place, it's remarkable how similar they could be.
Also on a lot of minds is the Pegasus exploit -- an exploit governments have acquired from cyberweapons dealer NSO -- which chains together three iOS zero-days. These cyberweapons are sold for lawful intercept purposes, like the tool used to unlock the San Bernardino shooter's iPhone earlier this year. Where does the U.S. government stand with them?
I assume you've been following the discussion around Wassenaar [a multilateral export agreement] and the attempt through Wassenaar to control the export and transfer of surveillance kits and other cybertools that would be used by repressive regimes to monitor human rights defenders. It's a great example of how difficult it is to separate out legitimate cybersecurity research -- which is exactly the sort of thing we want being reported to the companies and to the government -- and attempts to generate exploit kits that you can use.
Unless you know intent and usage, the tools and techniques look very similar, right? So there is certainly reliance by governments around the world on private sector institutions. I would say that the U.S. is a lot less engaged in anything like that than anyone else. But it's interesting how hard it is to tell the difference, and where does -- for example, the San Bernardino case, where you're not talking about a live exploit -- where does that fit into this continuum?
Speaking of vulnerabilities, what about the government's vulnerability equities process [the U.S. system to disclose some computer vulnerabilities and not others]? How does that factor into decisions made at the White House?
It's important to remember the Pegasus tool is not a government-developed system, so it's not something that would have been impacted by any sort of vulnerability equities process [VEP]. The more you have private sector institutions operating in this space, the more you have exploits being discovered and developed outside of government's capacity to identify them or to respond to them. You're going to see more and more spread of these vulnerabilities.
It's important to remember that the vast majority of intrusions don't rely on zero-days. By and large, our security is so open that the vast majority of intrusions don't need anything as sophisticated as a zero-day.
And yet, looking at the cache of cyberweapons -- allegedly created by the Equation Group, which has been linked to the NSA -- they attacked products that should not be vulnerable to run-of-the-mill, non-zero-day attacks. Enterprises and governments are using Cisco and Fortinet products to protect their own networks. We're hearing reports now that people are considering dropping Cisco because of this disclosure, so shouldn't the VEP be applied at some point?
It's certainly a stark reminder that no one can guarantee the absolute security of any exploit they might hold, and when you're doing a calculation process like the VEP, you can't presume that any tool, however important, is going to remain secure. You have to factor into your calculations the cost and risk of it getting exposed. The Shadow Brokers incident demonstrates that very powerfully, and it's a reminder that cybersecurity right now is a fundamentally asymmetric zone of conflict.
It's much easier to be an attacker than a defender, and in this case you have -- allegedly, as you said -- one of the most sophisticated attackers and most sophisticated defenders having this information exposed. The one certainty is that you cannot guarantee that anything will remain secret forever. And so if you're doing a calculation like the VEP, you have to recognize that in anything that is held back there is a risk that it could get exposed.
Getting back to the enterprise, do you have one pearl of wisdom you would tell CISOs?
Rob Joyce, who is the head of the NSA's Tailored Access Operations unit, gave a talk at the USENIX Enigma conference earlier this year; it's a great talk. What he said is, 'I'm from the NSA. We're very, very good at intrusion, and if you wanted to make life harder for us, here's what you should do.'
It was a fascinating talk. He basically said that everything comes back to one thing: A good intruder wins because they know your network better than you do. Today, it is very common for me to talk to a CISO and to have them not know what devices are connected to their network, to not know how their systems talk to each other, not know how their network runs. They know how it should run, but that's often very different from how it runs in reality.
If I were going to give one pearl of wisdom to CISOs, it would be that visibility is the most important thing. Visibility means a lot of different things, but what I would say is you need a new type of visibility.
When the attacker gets into your data center, they land on a server and they look to see where they can go from that server. They're looking for relationships they can follow, pathways they can follow. And if the defender can't see that, then we're basically trying to defend ourselves while we're blindfolded. If you can't see those pathways, then you don't know where the attacker is going, and it's hard to believe you could actually stop them. We haven't been able to see the relationships between the applications and the pathways that attackers are going to follow.
There's a saying: Attackers think in graphs, and defenders think in lists. And I would say you have to change that: Defenders have to think in graphs. You have to see the connections between your servers because if you can't see them, you can't control them. And if you can't control them, you can't stop an attacker.
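The "think in graphs" advice can be made concrete with a short sketch (the hostnames and connection map here are hypothetical, not from the interview): model the observed connections between servers as an adjacency list, then compute every host an intruder could reach from a compromised perimeter server by following those pathways.

```python
from collections import deque

def reachable(graph, start):
    """Breadth-first search over observed connections: returns every host
    an intruder could reach from `start` via lateral movement."""
    seen = {start}
    queue = deque([start])
    while queue:
        host = queue.popleft()
        for neighbor in graph.get(host, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return seen

# Hypothetical connection map for a small data center:
# an edge means "host A is observed talking to host B."
connections = {
    "web-1": ["app-1"],
    "app-1": ["db-1", "app-2"],
    "app-2": ["db-1"],
    "jump-box": ["app-1", "db-1"],
    "db-1": [],
}

# From a compromised perimeter host, which systems are exposed?
exposed = reachable(connections, "web-1")
```

Here `exposed` shows that compromising `web-1` puts the database within reach via the application tier; a defender who can see this graph knows exactly which pathways to cut or monitor, which is the control-follows-visibility point Gleicher is making.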