Not long ago, being a security researcher was a glamorous and potentially lucrative way to make a living. All it took was a good understanding of computers and networking, a clever handle and some intellectual curiosity, and you were on your way. Publish a couple of bulletins on vulnerabilities in Internet Explorer or IIS and all of a sudden your name was in the paper and companies began calling with offers of consulting work. Your only real worry was finding enough new flaws to keep your name circulating in the right circles.
But those comparatively carefree days are long gone. There's still money to be made if you have the right skill set, but now the most important weapon in a researcher's arsenal is a good lawyer. As more and more applications move to the Web, researchers are finding that a lot of the research that yields interesting results now lands them on the wrong side of the law. Take Salesforce.com, for example. The company's flagship CRM application is hosted on company-owned servers and delivered to customers as a service. In years past, a researcher could simply buy a license for a CRM application, run it on his own machine and attack it to his heart's content. But if he tried that same tack with Salesforce (or any other hosted application), he'd likely receive a cease-and-desist order within a day or so, or perhaps even a visit from some local constables, depending on what he was doing.
Running any kind of security assessment or vulnerability scan against a Web site or Web application without express permission is now seen as a hostile act. Many talented researchers have shied away from Web applications altogether for fear of landing in court or jail. Danny Allan, the director of security research at Watchfire, said some researchers will run client-side tests on Web applications, but no one is too keen on doing server-side testing.
"With Web applications you're not testing the client, but the server, and researchers can't do that unless we have legal papers running off our desk saying we can do it," Allan said. "That leaves us all less secure. There's no real oversight of Web applications. No one is doing the testing except the bad guys. You have to trust the organization. I worry about that. I worry about a day when my computer will only run a browser and the only security I have is the trust in the company."
"With local software, I can decompile it and not only check the external interfaces but the internal interfaces. With a Web app, I have zero visibility into their internal processes. Organizations are depending on Web applications that they don't own or control and the oversight is minimal to nonexistent."
This state of fear has essentially transported us all back to the days when software vendors expected customers to take it on faith that their applications were secure. That was the default attitude of nearly every vendor until groups such as the L0pht and the Cult of the Dead Cow, along with individual researchers, began publicizing the vulnerabilities they found in commercial software and criticizing vendors for not fixing them. Many vendors simply ignored the advisories, while others decided that lawsuits were the way to go. But those strategies eventually backfired when customers began to take notice and question why the vendors weren't paying more attention to security. The best example of a vendor getting religion in this way is clearly Microsoft.
As more vendors began to work with researchers rather than against them, vulnerabilities were fixed more quickly and without all of the venom that was the norm previously. That's not to say everything was sweetness and light; plenty of researchers still adhere to the dogma of full and immediate disclosure. But, now that fewer applications run locally, we again find ourselves in the position of simply having to take the vendors at their word on the security of their applications. I, for one, do not get a warm and fuzzy feeling from that.
Marc Maiffret, chief hacking officer at eEye Digital Security, has found his share of vulnerabilities, and he believes there is still plenty of room left for good original research, even in the world of Web applications.
"There are some specific areas where things are getting harder, but still far from impossible, such as Microsoft remote SYSTEM vulnerabilities. Because of things like this we have seen a large increase in people targeting third-party client applications (Adobe, iTunes, Apple), which are currently the low-hanging fruit in bug hunting," Maiffret said. "Most software vendors, besides Microsoft, are years behind in their practices and procedures for securing their products and it makes it rather easy to target them for weakness. And when you think about the fact that iTunes runs on over 300 million systems, they are just as important a target as any Microsoft application."
If software companies aren't willing to spend the time and money to do source code analysis or have outside penetration testers take a run at their applications—and clearly many of them are not—then the next best thing is having independent researchers do that work for them once the software hits the market. But that's far from ideal. Of course, vendors would prefer that researchers leave their applications alone, and the way things are going, the researchers may not have much choice in the near future.
And that, despite what the vendors or some of the pundits tell you, will make us all more vulnerable. Because the bad guys don't play by the rules. They don't care what Microsoft or the Department of Justice thinks, and they're far better off if researchers aren't finding the bugs and alerting vendors. That gives them all the time in the world to exploit a new zero-day without having to worry about when it might be patched.