Cody Pierce knew right away what he had found, but he wasn't exactly sure how serious it was. Pierce and his fellow researchers at TippingPoint had spent much of the early part of last year poking around in the ActiveX controls in Windows XP, looking for controls that might be vulnerable.
The team had decided at the beginning of the year that with all of the applications and code now running on the Web instead of desktops, ActiveX would be a prime avenue of attack for hackers in the coming months and years, and they wanted to get there before the attackers did.
Now, after weeks of methodical research and a number of false starts, Pierce had found exactly what he'd been hoping for: a zero-day vulnerability in Internet Explorer that allowed arbitrary code execution. For security researchers, identifying a zero-day is as good as it gets. It's the digital equivalent of making the first run of the morning on fresh powder. But finding the vulnerability turned out to be the easy part in this case; now came the frustrating process of constructing a working exploit.
"Developing working proof-of-concept code is a very complicated process," Pierce said. "It takes longer than the discovery of the bug a lot of times."
Finding software vulnerabilities is tricky business. Pierce and other researchers say the process often involves hours and hours of boredom punctuated by moments of sheer elation. Even when researchers know exactly what they're looking for, the process of finding the vulnerability, confirming that it's exploitable and developing a working exploit is an arduous and time-consuming one. To help illuminate the lifecycle of a zero-day vulnerability and give security professionals an idea of how many working parts are involved, Pierce gave SearchSecurity.com a detailed look at the way he and the TippingPoint team handled the discovery and disclosure of a particular flaw in 2006.
Automatic for the people
Having spent his fair share of time digging through code for vulnerabilities, Pierce decided that if he was going to be spending a lot of time hammering on ActiveX controls, he wanted to automate as much of that process as possible.
To do this, he built a custom fuzzer to test large numbers of ActiveX controls and separate the wheat from the chaff. He wrote the fuzzer using the Python and Ruby programming languages and began looking for remotely exploitable vulnerabilities that posed a serious threat to Internet users.
"There are 4,000 ActiveX controls on a typical XP machine and I looked for the ones that could be loaded in Internet Explorer," Pierce said. "Then I looked for the ones with problems and then the ones that were critical. I wanted to see what was exploitable and what was just a denial of service."
Soon enough, Pierce began to focus on a particularly problematic component, the HTML Help Control, which he said stood out as being highly exploitable. The problem arises when the value of the control's "image" property is intentionally set to a malformed value, causing memory corruption. That in turn enables an attacker to run arbitrary code on a vulnerable machine. Exploitation turned out to be relatively easy, Pierce said, requiring only that a user click on a malicious link. Now the race was on to get a patch out to users before attackers found the vulnerability and began using it.
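Pierce's distinction between "what was exploitable and what was just a denial of service" is the crash-triage step that follows fuzzing. In practice this is done with a debugger, inspecting where the fault occurred and whether attacker-controlled data reached control-flow state; the toy heuristic below only illustrates the shape of that decision, and its `crash` dictionary fields are assumptions for this sketch:

```python
# Toy crash-triage heuristic: crashes where attacker-controlled data taints
# the instruction pointer suggest code execution is possible, while a null
# dereference usually means a plain denial of service. Real triage relies on
# debugger-based analysis, not a dictionary like this.
def triage(crash):
    """crash: {'fault_address': int, 'tainted_registers': [str]} (assumed shape)."""
    if "eip" in crash.get("tainted_registers", []):
        return "likely exploitable"   # attacker influences execution flow
    if crash.get("fault_address") == 0:
        return "likely DoS"           # null-pointer dereference
    return "needs manual analysis"

print(triage({"fault_address": 0x41414141, "tainted_registers": ["eip"]}))
print(triage({"fault_address": 0x0, "tainted_registers": []}))
```

A fault address like 0x41414141 (the bytes of "AAAA") is the classic sign that fuzzer input has landed somewhere it can redirect execution — the kind of result that turns a crash into a candidate zero-day.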
In the past, this is the point in the process where things typically broke down. Software vendors are notoriously testy about the security of their products and historically have not reacted well to researchers pointing out flaws, regardless of the discoverer's motives or methods. Various industry efforts, such as the Organization for Internet Safety and the publication of RFPolicy, have brought some order to the way researchers disclose vulnerabilities in the past few years. At the same time, some vendors, most notably Microsoft, began putting together rigorous, defined methodologies for working with researchers.
As a result, whereas interactions between researchers and vendors had been tense and in many cases hostile, the introduction of structure to the process has helped normalize relations between the two sides, researchers say.
"I certainly believe interactions have improved. It used to be like a mailbox, where you find a vulnerability, you drive by and drop it off and keep going," said Danny Allan, director of security research at Watchfire Corp., based in Waltham, Mass. "The thought leaders have it figured out. Researchers want to know that their work is being taken seriously and not ignored."
Ivan Arce, chief technology officer of Core Security Technologies in Boston, agreed, but said that the process still isn't as efficient as it should be.
"Microsoft still isn't as transparent as people would think," Arce said. "Things don't really happen according to their guidelines all the time. The one thing I'd like to see on a big scale is more transparency in the whole process. I'd like to see vendors provide technical details to the user community. It doesn't solve anything without technical details. The bad guys don't need that, because they can figure it out on their own by reverse engineering a patch."
In the case of the HTML Help Control flaw that Pierce found, the process worked well. TippingPoint notified Microsoft of the vulnerability in late April 2006 and worked with the Microsoft Security Response Center to reproduce the problem. Microsoft released a patch for the vulnerability in August.
Web applications add complexity
The process that Pierce and TippingPoint went through with their zero-day discovery was typical of how bug hunting works in most cases, but the process is becoming much more complex and muddy these days. With more applications now on the Web and more code running in hosted environments, as opposed to users' desktops or corporate servers, researchers are having a much harder time getting access to the applications they want to test. There's no practical way to test an implementation of Salesforce.com or another hosted application without it being seen as an active attack, Allan said.
"I don't know how we're going to over come that," he said. "The only people who are interested in doing that without permission are using it for zero-days."
It's unlikely that the research community will get much sympathy from software vendors on this front, but given the ingenuity and resourcefulness of researchers like Pierce and Arce, it won't be long before they come up with a workaround.
"We don't test Web applications without express permission, but there are people out there doing it," Arce said. "People find a way."