Marcus Ranum: The other day over lunch you were telling me about a rather amazing incident that you were dealing with. I understand that you can’t get too detailed about some parts of it, but what can you tell us?
Aaron Turner: I guess the first surprising thing we found was that there were at least three different adversaries on the network, each attacking a different aspect of the infrastructure. One group was after the crown jewels—and was very focused and organized—another was picking up the crumbs of the ‘A Team,’ and another was running a payment card harvesting operation focused on the organization’s P-Cards [purchasing cards] associated with the organization’s bank accounts. The most interesting thing about the ‘A Team’ is that we found evidence that this group of sophisticated attackers was using wireless communications capabilities not only to bypass the organization’s security controls, but also to accelerate the exfiltration of information from the organization. Think of it as the attackers getting impatient with the organization’s slow MPLS [multiprotocol label switching] lines and deciding to do something about it.
Ranum: So what are the takeaways, here? I think that most of the organizations I know of aren’t ready to cope with radio-spectrum penetrations or penetrations carried out by agents. I know I used to imagine how many places a hacker could get into if they had a job working for a cleaning company or a phone company, but this upsets all the defensive paradigms I’ve been thinking of for decades.
Turner: It starts where we always look to start: get a baseline of what’s going on in an organization. If we aren’t looking at a technology as a potential channel for badness, then we aren’t doing our jobs. While this incident involved what we believed was purpose-built wireless gear, even a planted 4G device tethered to the back of the desktop used by the CEO’s administrative assistant can do a lot of damage. Set up a couple of spectrum sweeps with a trained individual who has access to the latest software-defined radios. There aren’t that many people who have the training and equipment, so look to get references from others who have already done it. Once the baseline sweep is done, go back periodically and look for anomalies, especially around specific high-impact business periods [like the quiet period before earnings release dates, or when a company is rumored to be part of a merger and acquisition deal]. For certain situations where an organization needs real-time monitoring, there really aren’t very many solutions out there. We’re testing a couple of platforms to determine their strengths and weaknesses, but as in all cases of immature technologies, it’s going to take time to get a reliable real-time monitoring solution deployed. If anyone out there wants to work with us in testing these, they should get in touch. What we’re looking for are specific scenarios where organizations want real-time awareness of either unauthorized devices emanating radio signals heading outbound with data, or unauthorized base stations trying to trick authorized devices into connecting to them.
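The baseline-then-anomaly workflow Turner describes can be sketched in a few lines. This is a hypothetical illustration, assuming each sweep has already been reduced to a list of detected emitters (center frequency and peak power); the record shapes, names, and thresholds here are our assumptions, not the output of any real SDR toolchain.

```python
# Hypothetical sketch: compare a baseline RF sweep against a later sweep
# and flag emitters that are new or significantly louder than before.
# Each sweep is assumed to be reduced to {center_freq_mhz: peak_power_dbm}.

def find_anomalies(baseline, current, power_delta_db=10.0):
    """Return emitters that are new, or stronger than baseline by the threshold."""
    anomalies = []
    for freq, power in current.items():
        base = baseline.get(freq)
        if base is None:
            anomalies.append((freq, power, "new emitter"))
        elif power - base >= power_delta_db:
            anomalies.append((freq, power, "power increase"))
    return anomalies

baseline = {915.0: -70.0, 2412.0: -55.0, 2437.0: -60.0}
current  = {915.0: -68.0, 2412.0: -40.0, 2437.0: -60.0, 1850.0: -50.0}

for freq, power, reason in find_anomalies(baseline, current):
    print(f"{freq} MHz at {power} dBm: {reason}")
```

The point is the workflow, not the code: a trusted baseline, periodic re-sweeps, and an explicit definition of what counts as an anomaly.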
Ranum: What are some reasonable things to do? Some of this, I would imagine, would show up via careful network analysis. But I can imagine that technology for doing this will just get better and stealthier. Do I need to start looking for a signal analysis package?
Turner: Unfortunately, buying a bunch of expensive software-defined radios is not going to help you without a trained operator. Find a trained and reliable operator with verifiable references. Wireless spectrum analysis is at the stage today where network packet analysis was when I got started in the mid-’90s: the tools are relatively crude and it’s more art than science. On the wired network, start where you can and look for ways to slice and dice your NetFlow logs.
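One simple way to start slicing NetFlow data is to total outbound bytes per internal source and flag hosts sending far more than their peers. A minimal sketch follows; the record format and the 10x-median threshold are illustrative assumptions, not the schema of any real collector (tools like nfdump or SiLK have their own export formats).

```python
# Illustrative sketch: flag internal hosts with outsized outbound volume.
# Assumes flow records have already been exported to simple dicts.
from collections import defaultdict
from statistics import median

def top_outbound_talkers(flows, factor=10.0):
    """Flag hosts whose total outbound bytes exceed factor x the median host."""
    totals = defaultdict(int)
    for f in flows:
        totals[f["src"]] += f["bytes"]
    med = median(totals.values())
    return sorted(h for h, b in totals.items() if b > factor * med)

flows = [
    {"src": "10.0.0.5", "bytes": 1_200},
    {"src": "10.0.0.6", "bytes": 900},
    {"src": "10.0.0.7", "bytes": 1_100},
    {"src": "10.0.0.9", "bytes": 95_000_000},  # possible exfiltration host
]
print(top_outbound_talkers(flows))  # ['10.0.0.9']
```

A median-based cutoff is used here because a single heavy exfiltration host can drag a mean-and-standard-deviation threshold up enough to hide itself.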
Ranum (interjecting): Watch your routing topology, too, I suppose…
Turner: One of the things we saw was that the attackers would consolidate information before attempting to exfiltrate it. Look for those parts of your infrastructure that are serving as vacuum cleaners that shouldn’t be.
If you have full-packet capture analysis capabilities (or are paying someone to do that for you), then begin doing periodic audits on different segments that have large amounts of unstructured data flowing over them. Do an inventory to understand why certain file shares are new versus ones that have always been there. Essentially, you’re looking for the hard-wired network clues to where the information is getting concentrated before it makes its way wirelessly out of the organization.
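Turner’s “vacuum cleaner” hosts tend to show up in flow data as machines receiving data from an unusually large number of distinct internal peers. A minimal sketch of that fan-in check, with hypothetical record shapes and thresholds of our own choosing:

```python
# Sketch: find hosts that many distinct internal peers are sending data to,
# a possible sign of a staging point where information is being consolidated
# before exfiltration. Thresholds here are arbitrary placeholders.
from collections import defaultdict

def staging_candidates(flows, min_peers=3, min_bytes=10_000_000):
    """Return hosts with high inbound fan-in and high inbound volume."""
    peers = defaultdict(set)
    inbound = defaultdict(int)
    for f in flows:
        peers[f["dst"]].add(f["src"])
        inbound[f["dst"]] += f["bytes"]
    return sorted(
        h for h in peers
        if len(peers[h]) >= min_peers and inbound[h] >= min_bytes
    )

flows = [
    {"src": "10.0.1.2", "dst": "10.0.9.9", "bytes": 5_000_000},
    {"src": "10.0.1.3", "dst": "10.0.9.9", "bytes": 4_000_000},
    {"src": "10.0.1.4", "dst": "10.0.9.9", "bytes": 6_000_000},
    {"src": "10.0.1.2", "dst": "10.0.2.2", "bytes": 1_000},
]
print(staging_candidates(flows))  # ['10.0.9.9']
```

In practice the interesting part is tuning the thresholds per segment and then explaining every host the query surfaces, which is exactly the inventory exercise described above.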
Also, look for allies. We’re in very early discussions with the network operators about this problem. They are interested in knowing more about this threat, but they won’t invest in it until their customers make them aware that this is a priority. In the grand scheme of things, the carriers are self-interested enough to protect their spectrum and if we can show them that their network capacity is being reduced through data theft at certain locations then we can work together to solve the problem.
The bottom line is we need to understand more of these scenarios and then pass them on to the carriers. As we’ve seen before in our profession, when people keep to themselves about a problem, it usually just festers. We need to share more information among ourselves about how to go about solving this network threat detection problem.
Ranum: The bit about multiple adversaries attacking at once gives me the “heebie jeebies,” but it totally makes sense when you think about it. I was recently debugging a real-world chemical process (in my photographic darkroom) and realized that we computer people tend to think that problems are singletons: Our systems either work completely or fail completely. In my chemical process, I had two things go wrong simultaneously that produced the same kind of failure. When I tried to isolate the problem, the redundant failures masked each other out. The lesson I am taking away from what you’re saying is that we need to also start thinking in terms of not “a penetration” but “multiple ongoing problems in parallel” and changing our problem-isolation workflows accordingly.
Turner: I totally agree that most of us in information security are linear thinkers. We pick a problem and then try to solve it through a single-track approach. Nowadays, it’s not one service compromised by one attacker; it’s multiple services/gateways being attacked simultaneously by multiple attackers (or attacker organizations). The same holds true for deploying security controls. We can no longer have point solutions as our bastions. Multifactor VPN? That’s not the castle wall we once thought it was.
Ranum: I’ve been saying for years that the obvious avenue for attack is to position people on the inside, as system or network administrators. After all, they’re the people with access to the data center, the backup tapes and the locked cabinets. One thing I’ve been pondering is that, in the future, it might be an interesting career path for people who want to play on the “dark side” to deliberately embed themselves with an eye toward eventually moving on, but leaving a backdoor that could be re-sold later.
It seems to me that the only way to defend against a lot of this kind of stuff is: audit, audit, audit, configuration management and audit. What do you think?
Turner: The most disturbing case of the insider-as-attacker scenario was that of one Tim Foley (if that’s even his real name). He is the son of two Russian nationals, living in the greater DC area.
Could you imagine what kind of stuff that kid could have planted as a part-time IT support technician? Talk about taking a long-term view of the prospects. These operatives moved to the U.S., waited for their kids to get old enough to become active assets, and then started working. As the saying goes, “Only the dumb ones get caught,” and it appears that some of these folks were dumb, or at least very lazy in their craft.
If I were to focus on one thing companies can do to improve the situation: it’s time for organizations to move to a whitelist approach for everything. Instead of relying on blacklist technologies like antivirus, it will be more important to ensure that only authorized code can run in an environment and only authorized network connections can be active. It’s going to turn our Wild West networks inside out—and severely impact our operational culture for a while—but it’s one of the few ways to proactively manage the risk, instead of waiting for a major incident to occur. Moving from permitting everything and trying to block a few things, to blocking everything and only permitting a few things, will be hard on users, but I see very few other defensible positions.
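The default-deny model Turner describes can be illustrated with a hash allowlist. This is conceptual only: real application allowlisting is enforced by the operating system or endpoint agent (e.g., products in the AppLocker mold), and every value in this sketch is made up.

```python
# Conceptual sketch of default-deny execution control: a binary may run
# only if its SHA-256 digest appears on an approved list. Real allowlisting
# is enforced by the OS/EDR; this only illustrates the decision logic.
import hashlib

def sha256_of(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def may_execute(binary: bytes, approved: set) -> bool:
    """Default deny: execution is allowed only for known-good digests."""
    return sha256_of(binary) in approved

# A vetted script is hashed once, at approval time, not at run time.
vetted = b"#!/bin/sh\necho backup complete\n"
approved = {sha256_of(vetted)}

print(may_execute(vetted, approved))       # True: digest is on the list
print(may_execute(b"malware", approved))   # False: everything else is denied
```

Note the inversion of the antivirus model: unknown code is denied by default, so nothing needs to be recognized as bad before it is blocked.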
Ranum: Aaron, it’s as if securing our networks wasn’t interesting and complicated enough, already. Thank you so much for your time.
About the author:
Aaron Turner is the co-founder of the security consulting firm N4Struct. He has been working in infosec research and designing solutions to tough security problems since 1994.