In February, the security firm Mandiant Corp. confirmed, with plenty of hard evidence, what we've known for a long time: Chinese cyberespionage is staggeringly rampant. From the Aurora attacks in 2009 through the spectacular RSA token hack of 2011 to the widely reported attacks on the computer systems at The New York Times in 2012, state-sponsored cyberespionage has been constant news for years.
Every revelation comes with a renewed beating of the cyberwar drums. Given that today's defenses and countermeasures have proven largely ineffective in thwarting these attacks, many otherwise sane people have discussed the idea of going on the offensive and "hacking back" by booby-trapping honeypot data or setting loose malicious software. Distressingly, this sort of cyberoffense is being repackaged and camouflaged in clever, Orwellian "newspeak" under the rubric "active defense."
Let's get this straight up front: Active defense is irresponsible. We will never vanquish a cyberenemy by going on the offensive (unless we involve our impressive kinetic capabilities). The problem is that we all live in glass houses and should avoid throwing rocks.
The only alternative is to do the heavy lifting of investing in security engineering, software security and building security in. We have to build our cyberhouses out of something other than glass. Leveraged properly, security engineering serves as a real deterrent in our otherwise steady slide toward cyberwar (see: "Cyber War is Inevitable (Unless We Build Security In)").
Cyberespionage is not cyberwar
The Mandiant report is well worth the time it takes to read it. Chock full of data gathered over multiple engagements (141), the report makes a solid, evidence-based argument: Many cyberespionage attacks have been perpetrated by the Chinese. (Note that Mandiant tracks other hacker collectives, including Russian and Eastern European groups, but China is the easiest to vilify because of the well-publicized attacks attributed to the Chinese since 2009.)
That Mandiant chose to publish its evidence is commendable. When the "Chinese Hackers Infiltrate New York Times Computers" story was first reported by the Times in January, Mandiant pointed the finger at the Chinese military. At the time, the Chinese Defense Ministry issued this statement: "It is unprofessional and groundless to accuse the Chinese military of launching cyberattacks without any conclusive evidence." The Chinese asked for evidence, and they got it.
If a forensic computer security firm like Mandiant can triangulate one particular set of espionage attacks to the Chinese military, why can't we do the same thing during a cyberwar incident and go after our enemies with impunity?
The answer may surprise you.
Remember, the attacks that Mandiant forensically "reversed" in painstaking detail started in 2006, with the longest single intrusion lasting for four years and 10 months. The forensic effort itself likely took weeks or months.
There is time for this kind of careful analysis in a cyberespionage incident involving information extraction. That helps with the thorniest issue in cyberwar attacks: the problem of attribution. Mandiant gathered lots of evidence and took great care to untangle tricky and misleading paths (helped along, ironically, by the attackers' sloppy use of Facebook logins).
Here's the bad news. A cyberwar will not unfold over years, months or even days. A cyberwar attack is likely to unfold over minutes, seconds or split seconds. Cyberwar attacks will happen at superhuman speed. (Of course, cyberespionage and APT attacks may well help to set the stage for a cyberwar attack, but we'll ignore that for now.)
About the [In]security column:
This monthly security column by Gary McGraw started life in print in IT Architect and Network magazines and was originally called “[In]security.” That was back in October 2004. The column then transitioned into Web content at several publications before finding a home at SearchSecurity. You can always find pointers to the complete [In]security series on McGraw’s writing page. Your feedback on the column is greatly appreciated.
Imagine a cyberattack against the power grid. An attacker who hacked in and took control of about 50,000 smart meters could plausibly cause a 300-megawatt stability problem in the grid. Properly carried out, a stability problem like this could destroy key transformers in the grid, causing permanent damage that would take months or years to repair.
During the fog of war, an attack like this could unfold in seconds, and it may not be possible to determine the perpetrator. Forensics takes time on the Internet, and during an ongoing attack there is simply no time to determine its source.
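The back-of-the-envelope arithmetic behind the smart-meter scenario can be sketched as follows. The 50,000-meter and 300-megawatt figures come from the text; the average per-meter load is a hypothetical assumption used only to show that the numbers are plausible:

```python
# Back-of-the-envelope check of the smart-meter scenario.
# METERS_COMPROMISED and the 300 MW target come from the article;
# AVG_LOAD_PER_METER_KW is an assumed average household demand.

METERS_COMPROMISED = 50_000
AVG_LOAD_PER_METER_KW = 6.0  # hypothetical per-meter load an attacker could switch

# Aggregate load an attacker could drop or restore in one coordinated step.
swing_mw = METERS_COMPROMISED * AVG_LOAD_PER_METER_KW / 1_000  # kW -> MW

print(f"Coordinated load swing: {swing_mw:.0f} MW")  # -> 300 MW
```

At roughly 6 kW of switchable load per compromised meter, 50,000 meters is enough to produce the 300-megawatt swing the scenario describes.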
In the end, it's clear that cyberespionage, though reprehensible and certainly worthy of response, is not the same as cyberwar. (I make this point as often as possible in my work on cyberwar, as evidenced by my recent appearance on MSNBC's "Up with Chris Hayes.")
It's critical to emphasize that attribution through forensics is vastly different from attribution during an active attack. The time frames differ so much that war and espionage must be teased apart.
Active defense is irresponsible
Any active defense strategy is going to involve the use of a security hole that is exploited on the original attacker's system. If you want your "hack back" to succeed, you have to have something to hack.
Washington is all abuzz about active defense, mostly without thinking through what it really means -- or just how ridiculous it is philosophically. (Policymakers are not technologists, so we have to be patient with them; unfortunately, many technologists are hucksters, and that is a crying shame.)
In order for active defense to work, somebody needs to find a security hole (most likely in software) and develop an exploit for that hole. Then, get this, they need to keep the hole secret so that the exploit they just developed continues to work.
I'm not talking about a configuration error on the attacker's server or a network firewall problem or some failure to patch. I'm talking about a real software vulnerability.
Sticking with the espionage scenario, imagine a situation in which a booby-trapped active-defense file is placed in a honeypot for an attacker to take. The active-defense file is designed to exploit a hole in whatever software is used to process the file. So the attacker extracts the file and uses some program to read it. A successful "hack back" requires some vulnerability in the reader program. Maybe it's Adobe Reader, Microsoft Windows or even the Java interpreter, but it's vulnerable, and the vulnerability is a closely held zero-day exploit known only to the active defenders.
Now imagine that the attacker is smart enough to capture and isolate the "hack back" code. Ye olde zero-day exploit now belongs to the enemy. Oops! There is a reason the Romans designed spears to be thrown once: If you throw a rock at your enemy, do not be surprised to find it thrown right back at you.
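For contrast, a far less risky form of honeypot instrumentation is a passive "canary" decoy: a file that merely beacons when opened rather than exploiting the reader. A minimal sketch follows; the decoy filename, document title and beacon hostname are all hypothetical:

```python
# Sketch of a passive "canary" decoy: a honeypot file that beacons when
# opened, instead of exploiting the reader. All names and URLs here are
# hypothetical examples.
import uuid

def make_decoy(path: str, beacon_host: str) -> str:
    """Write an HTML decoy carrying a unique beacon URL; return the token."""
    token = uuid.uuid4().hex
    # Any viewer that renders remote images will fetch the beacon URL,
    # revealing when (and from what address) the stolen file was opened.
    body = (
        "<html><body><h1>Q3 Transformer Maintenance Schedule</h1>"
        f'<img src="https://{beacon_host}/beacon/{token}" width="1" height="1">'
        "</body></html>"
    )
    with open(path, "w") as f:
        f.write(body)
    return token

token = make_decoy("decoy_schedule.html", "canary.example.net")
print(f"Decoy planted; watch server logs for /beacon/{token}")
```

Because the decoy contains no exploit, there is no zero-day for the attacker to capture and turn around, which sidesteps the thrown-spear problem entirely.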
In the end, there is this truism: The only way forward in computer security is to build systems with fewer vulnerabilities. Finding a vulnerability and packaging it up into a "hack back" system hurts everybody -- including the purveyor (or purchaser) of active defense. Finding a vulnerability, and then fixing it, is obviously the right thing to do.
Another issue is figuring out whom to "hack back." (This is the attribution problem, which only a long, painstaking forensic investigation can solve with any authority.) Almost all attacks on the network involve using a number of third-party "stooge" servers as a front and, sometimes, a platform for an attack. Without solving the attribution problem, those who "hack back" run the risk of "being Gandalfed."
Why not hide behind a common enemy of the nation-state or a corporation you're attacking? Attackers have been doing it for decades.
Finally, if active defense does not involve "hacking back" and getting outside of your own network, but rather simple intrusion detection, then we should call a spade a spade and cut the doublespeak. If "active defense" is just real-time intrusion detection and self-monitoring, then it's the same old, same old warmed over with a sexy new name. Yawn.
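To make concrete just how unremarkable that kind of "active defense" is, here is a minimal sketch of real-time self-monitoring: scanning an auth log stream for brute-force patterns. The log format, field positions and alert threshold are all hypothetical:

```python
# Minimal sketch of "active defense" as plain self-monitoring: watch an
# auth log stream and flag brute-force patterns. The log format and the
# alerting threshold are hypothetical assumptions.
from collections import Counter

FAIL_THRESHOLD = 5  # assumed number of failures that warrants an alert

def scan(lines):
    """Count failed logins per source IP; return those worth an alert."""
    failures = Counter()
    for line in lines:
        if "FAILED LOGIN" in line:
            failures[line.split()[-1]] += 1  # last field: source IP (assumed)
    return {ip: n for ip, n in failures.items() if n >= FAIL_THRESHOLD}

log = ["FAILED LOGIN user=admin from 203.0.113.7"] * 6 + \
      ["FAILED LOGIN user=bob from 198.51.100.2"]
print(scan(log))  # only 203.0.113.7 crosses the threshold
```

Nothing here leaves the defender's own network; it is ordinary monitoring, which is exactly the point.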
What is Washington to do?
Sadly, the government's approach to cybersecurity is as anemic as it is bureaucratic. FISMA checklists may help drag slow-moving agencies into the '90s, but they are certainly not cutting new ice. Compared to setting up and watching perimeter defenses (which is exactly what operational network security does), active defense sounds way cooler. Plus, the Department of Homeland Security only recently started figuring out where the government is connected to the Net, so it is easily distracted.
President Obama's leadership on the issue is appreciated, but at the same time underwhelming. More specifically, his cybersecurity executive order is vague and does not address how to build secure infrastructures. It talks about Frameworks and asks NIST to create more paperwork. That's too bad. We need more specific and actionable leadership here.
Driven by the realization that firewalls basically don't work and that the perimeter has dissolved as we embrace the cloud, people who are looking for an answer and don't know any better may well embrace active defense, warts and all.
That would be a shame because we're sitting here in our glass houses talking about rock throwing again. That won't end well for anybody.
About the author:
Gary McGraw, Ph.D., is CTO of software security consulting firm Cigital Inc. He is a globally recognized authority on software security and the author of eight best-selling books on this topic. Send comments on his column to [email protected].