Centuries of literary works and real-life scenarios have depicted the battle of good versus evil: God vs. Satan; Hamlet vs. Claudius; Darth Vader vs. Luke; Cubs vs. Sox; NBA players vs. NBA owners. In the security community, it’s the security researcher vs. the corporation. The reality is that in most of these scenarios, the characters are simply flawed or have differing agendas; they are neither truly good nor truly evil.
I have been following the security community for 15 years, and I have seen the good, the bad and the ugly in how researchers release their findings and how corporations respond. It is a point of constant conflict in the industry.
A few cases have led me to question whether computer security researchers can go too far and whether the companies they research really are the bad guys: George "GeoHot" Hotz “mod’ing” the Sony PlayStation console; Patrick Webster vs. First State Superannuation in Australia; and Charlie Miller vs. Apple. For each case, I draw an analogy to buying a riding lawn mower.

I fully expect that if I buy a riding lawn mower, I am within my rights as owner of the product to “mod” (modify) it to the point where I can race it and win. Provided I do not harm others who buy or use the same product and I use the product within the law, this is my right as owner. I also fully expect that if I buy a riding lawn mower, it will not have a series of conditions created during the manufacturing process, unbeknownst to the manufacturer and outside its typical QA process, that will cause me to inadvertently drive it through my picture window. Finally, I do not expect someone to sneak into the store where my mower is sold, soup it up without my knowledge and wait until I drive it through my picture window in the spirit of proving it can be done.
In the case of George Hotz and Sony, I believe consumers are well within their rights to “mod” any product they purchase for whatever reason they see fit. This is what you pay for: you own the product. This is not a case of stealing the code and reselling it, but of retrofitting it to your needs.
In the case of Patrick Webster and First State Superannuation (FSS), I am obligated to point out the concept of “authorized testing.” Webster was not authorized to conduct a vulnerability test against this website, but in the spirit of research he conducted testing and (inadvertently?) downloaded personally identifiable information (PII). According to The Sydney Morning Herald, he wrote a “...script that cycled through each ID number and pulled down the relevant report to his computer.”
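To make the mechanics concrete: the flaw described above is what is commonly called an insecure direct object reference (IDOR), where predictable, sequential IDs let one authenticated user fetch other users' records. The article does not publish Webster's actual script, so the sketch below is purely illustrative; the URL pattern, parameter names and ID range are hypothetical assumptions, not details from the case.

```python
# Illustrative sketch of a sequential-ID enumeration loop (IDOR).
# Everything here is hypothetical -- this is NOT Webster's script,
# and the URL pattern is invented for illustration.

BASE_URL = "https://example.com/statements/{member_id}/report.pdf"  # hypothetical

def candidate_urls(start_id, count):
    """Yield one URL per sequential member ID.

    The vulnerability is that the IDs are predictable: if the server
    does not check that the requesting user owns the record, each URL
    returns someone else's report.
    """
    for member_id in range(start_id, start_id + count):
        yield BASE_URL.format(member_id=member_id)

# A real script would fetch each URL while logged in as a single user;
# receiving other members' reports back proves broken access control.
urls = list(candidate_urls(1000, 3))
```

The point of showing this is how little skill it takes: the server-side fix is an authorization check on every request, not obscuring the ID scheme.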
In information security, unauthorized testing has always been a big no-no. How do we know Webster is altruistic and had no intention of using this PII for unlawful or unethical purposes? Because he said so, reported it in a responsible manner and was nice about it? FSS could not simply take Webster at his word; doing so would have been irresponsible and a violation of its obligation to its customers. FSS was obligated to determine the extent of the breach (yes, it was a breach) and address the repercussions. In this case, was it necessary for Webster to download the data to prove the flaw?
The case of Miller and Apple’s App Store is a different story. According to a report in The Huffington Post, “He created a secret application that he believed could download malware onto iPhones and iPads, and got it approved for distribution in Apple's App Store.” Thus, he circumvented the Apple QA process to prove the vulnerability could be exploited. The difficult part about this case is that Miller could not test and prove this flaw without circumventing the process. While his intentions were likely non-malicious, we can only assume this to be true by trusting him personally. This defies a fundamental principle in information security: Never trust the human element. How can we possibly change that rule for a security researcher?
Here are three simple points security researchers should keep in mind:
- Please do not expect users, society, or companies to trust you. This defies a fundamental principle of information security. Respect this principle and do not expect exceptions on the basis of research.
- Recognize there is a concept of authorized testing. It is not okay to randomly test public sites that contain others’ private information in the spirit of research. Users have rights and testing may infringe on those rights. Companies that disparage this behavior are not always wrong; they may simply be protecting their customers.
- Do not assume excessive lengths are necessary to prove a point. Be patient and give companies the benefit of the doubt, in line with the many responsible disclosure policies publicly available. If the six-to-eight-week mark passes without a fix, then please refrain from downloading all of the data; run a few individual proof-of-concept tests, disclose the vulnerability to an organization such as CERT and let it facilitate the disclosure.
At the end of the day, I do not believe either researchers or the companies they research are good or evil. Both have differing agendas and are serving the interests of those they support. We all need to respect each other’s place in society and understand we may not agree on all topics. There is no such thing as good and evil, only flawed characters who are doing their best.
About the author:
Elizabeth Martin has 15 years of experience in the information security, compliance, and risk management industry. She has extensive experience in the automotive, retail, financial, healthcare, government, and managed security services verticals. Send comments on this column to email@example.com.