The topic of how the shift to Web-based applications affects the work that security researchers do has been coming up quite a bit lately, and it’s an interesting discussion to have. I wrote a column on it not too long ago, and my main point was that the days of researchers being able to sit back and find vulnerabilities in applications at their leisure are essentially over. That strategy doesn’t fly when applications are hosted on remote servers. Running exploits against a copy of Vista that you bought and installed on your own machine is one thing; doing the same on Salesforce.com or Windows Live is quite another.
With this in mind, I was intrigued to see a post on Microsoft’s Bluehat blog by Rain Forest Puppy on this very subject. For those who don’t remember him, RFP is a well-respected researcher with long experience in the industry, and is the author of what is generally regarded as the first codified vulnerability disclosure policy, the RFPolicy. After growing frustrated with the state of the security industry, RFP dropped his public persona and stopped accepting speaking engagements in 2003. His reasoning at the time seems quite prescient given today’s climate: “…the days of free security research for the sake of free security research are numbered, if not over already.”
So when RFP surfaced at last week’s Bluehat Briefings at Microsoft, people rightly sat up and took notice. In his blog post, RFP warns researchers of the dangers of probing Web applications for vulnerabilities and posits that vendors should take into account the intent of a researcher before beginning legal proceedings.
You see, the tables have turned. Security researchers are the ones at risk now. Reviewing an installed piece of software in your own closed environment, while conceptually subject to copyright and other intellectual property infringements, is benign enough within that exact context. However, reviewing someone else’s production web site (without their permission, of course) for security problems is essentially a criminal activity. What is the real difference between looking for a vulnerability in a web site to help make it more secure versus looking for a vulnerability in a web site for malicious purposes? In the initial stages, both approaches involve the same exact technical activity/process. The only difference is the attacker’s intent—and intent is just a subjective frame of mind of a person that can easily be (mis)interpreted in a court of law.
So how to address this problem? RFP suggests, as others have, that vendors need to play a part in making it safe for researchers to work on Web applications.
Remember: the difference between a well-meaning researcher and a cybercriminal is their general intent. Therefore, a good solution to this conundrum would be to have the researcher show their intent to the vendor/third-party responsible for the target web site. Since the vendor/third-party decides whether to pursue a criminal investigation, knowing the intent of the researcher could presumably change their decision to launch an investigation. While simple on the surface, there are still lots of caveats:
· The vendor/third-party must be willing to recognize the different intents (well-meaning security researchers vs. cybercriminals) accordingly
· There needs to be a reconcilable way for a researcher to establish intent to the vendor/third-party
· The method of establishing intent should dissuade cybercriminals from using the method to masquerade their true (malicious) intents
· The entire process should not interfere with or otherwise hinder any incident response processes of the vendor/third-party, which are still necessary to handle true cybercrime incidents in a timely fashion
Excellent suggestions, all. But his premise assumes that vendors are capable of reacting reasonably to these kinds of overtures from researchers, and we have seen enough evidence to know that is not always the case. Still, the proposal is worth considering, and hopefully some in the vendor community will take notice.