Published: 01 May 2014
Penetration testing has certainly had its critics over the years, and Marcus Ranum was one of them. He admits that some of the reasons were philosophical and that in the real world "complicated problems like security assurance aren't that easy to simplify." With an open mind, he sits down with Georgia Weidman, founder and CEO of Bulb Security LLC, to learn more about the valuable roles beyond hacking that pen testers can serve as outsiders paid to help enterprises plug security holes in their networks.
In addition to hands-on security training, Weidman has researched smartphone security for a hacker-minded project funded by the U.S. Defense Advanced Research Projects Agency's (DARPA's) former Cyber Fast Track program, and she is the author of Penetration Testing: A Hands-On Introduction to Hacking from No Starch Press.
Marcus Ranum: Georgia, thanks for taking the time to chat. It seems that other than the warm and fuzzy 'Well, we had a hacker look at it!' mindset, a pen tester can serve a valuable role as an outsider's eyes looking in and offer advice for improvement. I know you don't just go into a client's network and write a report that reads: 'Bwaaahahahahah! Gotcha!' Can you give me an idea of how much time you spend helping your customers improve their defenses and design? What's the breakdown between consulting and breaking and entering?
Marcus Ranum, CSO, Tenable Security
Georgia Weidman: Sometimes I get clients who just want a pen test because of a regulatory requirement, or because they were acquired by a company that requires it. These clients just want to check the box next to 'pen test' and move on. But I do get customers who are interested in running a more secure operation. I work with a lot of small businesses, which naturally have a limited budget for security. ... For me, I see the pen test as a baseline, a starting point in improving their security posture, because, as you mentioned, a pen test report that just says, 'Bwaaahahahaha I got in!' and offers at best cookie-cutter remediation advice is not really helpful to anyone.
For example, if the consultant who built their public-facing website is abreast of secure coding practices and I find only minor issues on the site, but I find default and easily guessable passwords all over the enterprise, the client would be better served investing in improving password management than buying a Web application firewall. ... Just dropping in expensive security boxes and leaving them alone does very little to improve security. It was reported in the recent Neiman Marcus breach that the attackers set off 60,000 alerts in the intrusion detection system. Without someone knowledgeable manning that IDS, it did them as much good as having no security program at all.
I like to work with my clients to build policies and buy products in ways that will really make an impact on their security posture. If I find a lot of default or weak passwords, I show them password-cracking tools and techniques that attackers use; IT staff can learn them in a day or two at most. If they have a few Linux systems in their environment that aren't under load, we can set a password cracker up and let it try to crack password hashes from the environment to find the weak ones.
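The in-house password audit she describes can be sketched in a few lines. This is a minimal illustration of dictionary-style cracking against unsalted SHA-256 hashes; the hash choice, wordlist and function names are mine for illustration, not anything from the interview, and a real audit would use dedicated tools such as John the Ripper or hashcat against the actual hash formats found in the environment.

```python
import hashlib

def crack_hashes(hashes, wordlist):
    """Hash each candidate word and report any matches against the target set."""
    cracked = {}
    for word in wordlist:
        digest = hashlib.sha256(word.encode()).hexdigest()
        if digest in hashes:
            cracked[digest] = word  # this hash corresponds to a weak password
    return cracked

# Illustrative data: candidate weak passwords an IT team might try against
# hashes pulled from its own environment.
wordlist = ["password", "admin", "Summer2014", "letmein"]
targets = {hashlib.sha256(w.encode()).hexdigest() for w in ["admin", "letmein"]}

found = crack_hashes(targets, wordlist)
print(f"cracked {len(found)} of {len(targets)} hashes: {sorted(found.values())}")
```

The point of an exercise like this is exactly what Weidman describes: an idle Linux box and a weekend of wordlist runs will surface the same weak credentials an attacker would find first.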
Or, if they have a lot of low-hanging fruit, like missing exploitable patches, I might recommend investing in an affordable vulnerability scanner. It would be more economically feasible for the client to buy a license, let someone on the IT staff spend a few days learning how to use it, and then run it periodically to find and remediate these easy issues.
Pen testing is more expensive, and you want your pen testers to be able to focus on complex issues that take critical thinking and skill to turn into a compromise, rather than just getting easy wins from network-facing vulnerabilities or default passwords.
Another thing I like to work with customers on is staffing. A small company often can't afford to pay salaries for a bunch of people just to do security work. Luckily, security is pretty hot right now, so if you ask, 'Who here would be willing to work a little harder and take on more responsibilities in exchange for getting training and experience in information security?' a lot of people will jump at the chance. They realize that these skills make them more marketable.
So, if possible, I try to make the actual test just a piece of the consulting package, guiding the client towards higher security awareness and the best use of their security budget to build a more mature security program.
It has been a long time since I did anything close to a penetration test -- in those days, I called it a 'design review.' In the late '90s, I'd make a lot of detailed or high-level suggestions and, fairly often, nothing really happened. Do you find that the trend toward pen testing as part of a compliance audit has shifted the playing field in the right direction?
Weidman: Again, it really does vary from customer to customer. Sure, I've been to some places and given them all these recommendations for short- and long-term remediation efforts, and then I come back the next year and all the same issues are still there. Or worse, they've deployed some new systems that introduced even more issues. Security is complex, but I try to make sure all my clients know going in that the pen test is a complete waste of money if they don't invest resources into acting on the recommendations.
It's important to note that fixing individual issues doesn't fix the problem. For instance, if I find that all the browsers in the enterprise are out of date and are subject to known vulnerabilities, naturally the customer will need to spend some time updating all the browsers in the enterprise. But this by itself isn't enough. What really needs to be addressed is why browsers aren't being regularly updated as part of the enterprise's patch management program. If this is addressed, any new issues that are discovered will be fixed automatically by the customer's security program.
I think this is an issue that is not being conveyed well by a lot of pen testers. The assumption is that customers understand how security vulnerabilities work, when in reality they probably know as much about our business as [we] do theirs -- that is, not a whole lot. So again, it's a continuum. You get people who really don't care and want to check a box, all the way up to working closely with a client to develop their security program.
Georgia Weidman, founder and CEO, Bulb Security LLC
Where would you say the most consistent problems lie? Offhand, I'd expect application security on websites to be number one, followed by configuration management failures on critical systems.
Weidman: Yes, you are right, websites are a big one. On a lot of tests, that becomes my way in -- from dumb stuff like a default admin password on a Drupal install to custom-coded stuff with command execution that I can escalate, or SQL injection to get your database entries. Personally, I think it's a lot harder to test custom websites than a network. Sure, there's a set of query strings you should try if you run into a database connection on an app, a set of query strings if you find a place that stores user input [and so on]. But having exhausted those [possibilities], I always get nervous that it's the thing I didn't think to try that would have popped up, as opposed to, say, an off-the-shelf product like an FTP server, where either there's a known bug in it or there's not; either I can guess credentials for it in the testing window with my wordlist or I can't; either there's something interesting in the FTP folder or there's not. No one expects me to find a zero day in the FTP software on a two-day pen test, but that is expected when you're testing custom Web applications.
I just think custom applications are a harder problem to solve, harder to test well and even harder to take metrics on your own testing skills. I suppose it's safe to say that the Web application penetration testing teams at Facebook, Google [and others] are no slouches, and yet they still pay out bug bounties. As websites become even more complex, it naturally becomes even more complex to try and secure them. Even with a solid security program, the website can be a major liability.
But just as often -- if not more often -- I get into their infrastructure through phishing attacks and social engineering. It just takes one person to click on it before someone figures it out and sends out the 'don't click this' email to the whole company. In those five minutes or so, I'm probably already domain administrator. I've never run a phishing attack that has had a 0% success rate: Someone opens the attachment, someone enters their credentials, someone runs the Java applet or someone clicks through the SSL certificate warning. You name it, there's someone in the enterprise who is not paying attention, or who doesn't have the security-awareness training. I've been guilty of this myself multiple times.
In fact, I recommend that everyone, even security-conscious people, count how many 'insecure' things they do every day, such as clicking through an SSL certificate warning because you want what's on the other side, or clicking 'install updates later' because you are doing something right now that is more important than restarting to install updates. ... Poor security practices like this are not going away.
Please tell me that the greatest areas of weakness aren't everything?
Weidman: I think as long as passwords are the primary means of authentication to systems, it's going to be a problem. Unless you've got two-factor authentication, as far as I'm concerned, passwords are doomed. … If you walk into a parking lot of cars with a handful of keys, you have a pretty good chance that at least one of those keys will start at least one of those cars. Why go through the trouble of smashing windows and hot wiring?
Of course, BYOD isn't all that new: We've had contractor laptops, rogue wireless access points and the beloved game console in the company break room for ages. It's just now become trendy to worry about it.
And sure, we've got our mobile antivirus and our enterprise mobility management and all these other fancy-sounding terms, but who is actually testing whether they work? What if the user is malicious? What if someone else gets physical access to the device? What if there's a malicious app on the device? What if it's rooted or jailbroken?
Researchers have demonstrated retrieving plaintext of sensitive data protected by mobile device management software, and I have yet to work with a client who wanted to test for these sorts of scenarios. Within the security industry, there hasn't been a big media bonanza about a breach that could be traced back to mobile.
There's been some good research about that, too. Say an attacker gets a dollar for every successful compromise -- what do they go after? Java, Flash, Internet Explorer, Windows 7 and so on. ... It's not significantly harder to hit WebKit -- the browser engine on mobile devices -- than Java on a traditional computer. If the password attacks fail, and the social engineering fails, and the website is hosted offsite -- and all the users are doing the majority of their work on tablets and mobile phones -- you want security testing policies in place for these things now, rather than later.
Overall, would you say things are getting better, or worse?
Weidman: That's hard to say, really. There are a lot of smart people looking into this whole security thing. There are vendors who are taking security a lot more seriously. For instance, on the [first] iPhone, the browser ran as root. Now iOS has one of the strongest security postures around, and it takes some of the top exploit developers on the planet to jailbreak it. But then I see Charlie Miller [the Apple hacker] slam on the brakes of a car from his laptop, and I remember that with more complexity -- more networked devices that respond blindly to input -- this whole security game only becomes more critical and dire. It almost becomes a security-through-obscurity sort of thing.
I teach introduction to exploit development, and we start with the basics -- pre-address space layout randomization, pre-data execution prevention and no sandboxes. Fifteen minutes into class, students are looking at memory and seeing what a simple buffer overflow is all about. The natural question my students have is, 'Why didn't everybody on every street corner write a zero day for Windows XP SP2, if it was really this easy?' But as people love to remind me, debugging tools like Mona.py and IDA Pro, which now make the brunt of the work so much easier, weren't around then.
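The pre-ASLR, pre-DEP overflow her students see in class can be modeled very loosely in pure Python. This is a toy illustration only -- a real overflow corrupts actual process memory, whereas this just treats a byte array as a stack frame -- and all names and sizes here are mine, chosen for the sketch:

```python
# Toy model of a classic stack frame: 16 bytes of local buffer followed
# immediately by a 4-byte "saved return address". The copy routine has no
# bounds check, like a strcpy()-style call.
FRAME = bytearray(20)
BUF_SIZE = 16

def unsafe_copy(frame, data):
    """Copy data into the buffer byte by byte, with no length check."""
    for i, b in enumerate(data):
        frame[i] = b  # silently runs past BUF_SIZE if data is too long

# Filler to reach the end of the buffer, then a fake "return address".
payload = b"A" * BUF_SIZE + b"\xef\xbe\xad\xde"

print("saved return address before:", FRAME[BUF_SIZE:].hex())
unsafe_copy(FRAME, payload)
print("saved return address after: ", FRAME[BUF_SIZE:].hex())
```

After the copy, the bytes past the buffer are attacker-controlled -- which is the whole lesson of the pre-mitigation era: control of the saved return address means control of execution.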
I feel like it's kind of the same thing with a lot of the embedded technologies we see joining the Internet today. I've heard it rumored -- I haven't gotten around to testing it myself -- that a lot of the mobile modems in phones fall to the simple stack-based buffer overflows of the Windows XP days. And then you've got your cars going online, and medical devices and ATMs, like in Barnaby Jack's world. [The well-known hacker died in July 2013.] But the skill level to test or attack this stuff is still enormous, because it's not like you can just hook up an insulin pump to Immunity Debugger in Windows; it's a bit more complicated than that.
And that allows manufacturers to get away with not even using basic security practices until a talented security researcher with the know-how in embedded devices gets around to uncovering what a mess it is underneath -- or, on a more sinister note, until a malicious attacker reverse engineers the technology and uses it for evil.
So I guess to sum up my rant, things are getting better because more people are taking security into account. … More people are working in security -- doing testing and doing research. And programs like the DARPA Cyber Fast Track program and bug bounties are making it possible for passionate people -- who might not be able to get past HR at big company X, despite their skill levels -- to do this sort of thing and still feed themselves. Likewise, more enterprises are taking security seriously by having security policies in place and doing regular security testing.
But it's also getting worse, because we are getting more and more dependent on constant network connections. I saw an ad in an airline magazine the other day -- 'Coming soon: Wi-Fi on transatlantic flights' -- and all I could think was, 'Great, my only excuse for not answering my email right away is gone.' More and more of our devices are connected in some way. You can get a car that's online, you can get a medical implant that's online, and you can get a door lock for your house that's online. Those are all things that can go really bad in the wrong hands.
About the author:
Marcus J. Ranum, chief security officer of Tenable Security Inc., is a world-renowned expert on security system design and implementation. He is the inventor of the first commercial bastion host firewall.