Open source software security

Disclosure Revisited

15 June 2010
Computerworld recently ran an article about a Google security researcher who released exploit details for a zero-day vulnerability in Microsoft's Help Center software. The article is interesting because it touches on several different problems in the modern information security landscape. In particular, one highly rated reader comment caught my eye:

"I agree with the person that said this researcher should be fired. I am an IT professional and disclosing this publicly has put many people at risk. I also do not want Microsoft or any company in a position that they have to publish a rushed fix and potentially do as much harm as a piece of malware because of lack of QA. Any person with programming experience should understand how hard it is to write a patch for thousands of machines never mind hundreds of millions. Shame on this researcher, shame on Google. - Paul"

This is an age-old argument against full disclosure, the policy of releasing vulnerability details before a vendor has fixed (or patched) a problem. The argument is usually trotted out by industry representatives who complain that fixing software is arduous work that requires time. The problem with this argument is twofold.

First, the software patching process is not always transparent. Security researchers who report problems often have no way of knowing whether the process is simply taking a long time, whether the vendor is dragging its heels, or, in the worst case, whether the issue is being ignored outright. Without clear communication it is easy for the reporter of a security issue to infer that nothing is being done, and vendors often take months, or even years, to fix problems. This leads directly to the second problem with the argument: if a security researcher has found a vulnerability, it is likely that someone in the underground has found it as well.
This means that in the lag time between the discovery and report of the vulnerability and the vendor's fix, the vulnerability could be actively exploited "in the wild," and people running the software have no way of knowing they are at risk. Certain programs, such as TippingPoint's Zero Day Initiative, pay for vulnerabilities and add signatures to their products in order to protect their customers from exploitation during this window, but those who aren't paying for such a service don't benefit. This can be particularly frustrating for customers who pay for software that is found vulnerable and must then pay a third party to protect them from harm while a patch is developed. The underground, and computer crime in general, is becoming wildly profitable. Make no mistake in assuming that security researchers are the only people finding vulnerabilities. Although it may be a hassle when researchers release full disclosure details, this is much better than the alternative: black hats finding the vulnerability and keeping the details to themselves.

The second issue at hand is that the researcher works for Google. This is interesting to me because the researcher in question, Tavis Ormandy, in no way alluded to his professional affiliation in his original disclosure. The Computerworld article hypes this as a Google-related release, but it was in no way clear from the full disclosure posting that Mr. Ormandy worked for Google or that the disclosure was related to Google. It particularly struck me that the disclosure would be associated with Google at all. Obviously Google pays Mr. Ormandy, and he likely worked on the vulnerability during his time at work. Similarly, I work on vulnerabilities both while being paid and in my free time, yet I try to release disclosures only under my personal identity. Separating personal and professional identities is difficult, and sometimes impossible, so it may not be fair to conflate Mr. Ormandy's motives with Google's. I feel that Mr. Ormandy's disclosure five days after initial contact with Microsoft was perfectly in keeping with a full disclosure policy, and I applaud him for it.

The full disclosure debate has raged for over a decade with no clear resolution, and it's about time we shone a new light on the issues. There is no metric data showing that full disclosure harms products above and beyond responsible disclosure. Yet this is the linchpin of the argument against full disclosure: that script kiddies who would not otherwise attack vulnerable systems are alerted by the disclosure and swarm like sharks to blood in the water. I'm sure there are instances where this happens, but I'm not convinced it is an inevitable outcome. There needs to be some scientific work in the area to back up the enormous rhetoric on both sides of the debate; barring any such metric, the dispute is largely academic.

What I do find troubling, however, is the bombastic language used by advocates of so-called "responsible disclosure," such as labelling full disclosure "irresponsible" or disclosers "narcissistic pimps." This sort of language doesn't advance the debate, and you really only see it coming from one camp: industry. Obviously it is a hassle for developers and maintainers to keep their software systems up to date and patched against flaws, but perhaps it is the developers who deserve the ire, not security researchers. Researchers find vulnerabilities, but they are not responsible for creating them. Ultimately the coders who develop the software are at fault, yet they always seem to escape blame. Maybe it should be the Microsoft developer responsible for the Help Center vulnerability who is fired, and not the Google researcher who found the problem.