Drupal Content Access Module XSS Fun
After my latest Drupal module vulnerability disclosure (a cross site scripting (XSS) vulnerability in the Drupal 6 Content Access module) I was contacted by a reporter from a pretty big British news agency who wanted details about the problem. Given the large number of Drupal sites on the web, the vulnerability must have seemed like a pretty big deal. I quickly responded and let him know that the vulnerability was relatively obscure and difficult to exploit and probably wasn't worth his time, but it got me thinking. Security is a pretty broad field, and I spend most of my time on one extreme. Asking me about computer security and privacy is probably a lot like asking a law enforcement agent about home security - you're going to get an answer colored by experience. I look at vulnerabilities and see massive problems, when in reality these problems are probably pretty minor. As soon as I see a vulnerability I quickly discern routes to compromise that might not occur to anyone outside of the security field. I imagine the worst case scenario first, and then track back to some middle ground.
It's important to keep these sorts of perspectives in mind when reviewing vulnerability disclosures and security bug reports. That isn't to say that all security vulnerabilities are minor, but one must consider mitigating factors. One should also consider the source of most security disclosures - security researchers or vendors. Each has its own perspective and agenda. Security researchers are likely to paint a picture of the internet falling apart (after all, who wants to report on a relatively insignificant vulnerability that affects a tiny fraction of the install base?), whereas vendors are more likely to downplay vulnerabilities to the greatest extent possible.
Many organizations have tried to address this very problem by ranking, or scoring, vulnerabilities. These systems place vulnerabilities into some sort of threat matrix that reports them as "critical," "low," or "minor" in terms of threat. While these systems may not always be very accurate, or transparent, they do attempt to mix relevant factors into the weighing of security issues. Using these sorts of ranking reports probably provides a modicum of sanity to the evaluation process. Of course, many of the organizations that produce these reports are also self-interested in some ways. For instance, a security company will probably paint vulnerabilities as more severe, whereas a software company will probably skew rankings toward the low end of the severity scale.
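To make the idea of a scoring system concrete, here's a toy sketch of how such a ranking might combine a few factors into a severity label. The metric names, weights, and thresholds below are entirely invented for illustration - real systems like CVSS use their own, more elaborate formulas - but the general shape is the same: weigh several inputs, produce a number, bucket the number into a label.

```python
# Hypothetical severity scoring sketch; all metrics, weights,
# and thresholds are made up for illustration, not taken from
# any real scoring standard.

def severity_score(exploitability, impact, exposure):
    """Combine three 0.0-1.0 metrics into a 0-10 score (weights invented)."""
    raw = 0.4 * exploitability + 0.4 * impact + 0.2 * exposure
    return round(raw * 10, 1)

def severity_label(score):
    """Bucket a numeric score into familiar critical/high/medium/low bands."""
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    return "low"

# An obscure, hard-to-exploit XSS can land at "low" despite wide deployment:
score = severity_score(exploitability=0.2, impact=0.5, exposure=0.3)
print(score, severity_label(score))  # → 3.4 low
```

The interesting part isn't the arithmetic; it's that every weight and threshold encodes someone's judgment, which is exactly where the self-interest described above can creep in.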
Looking at security issues in a vacuum deprives one of the ability to contextualize them. Even the most severe vulnerability might be mitigated by something as simple as a $40 off-the-shelf NAT device. Because of the complexity of computer systems, a vulnerability doesn't always translate into an exploit, and an exploit doesn't always translate into a threat to all systems.
Putting all of this into context is important when evaluating any report of a security vulnerability or threat. Consider the source of the report, and consider any bias that might be at play. It's very easy to skim reports and assume the worst (or the best), but doing so isn't going to yield accurate interpretations.
All of this got me thinking about how I word my disclosures. I'm certainly guilty of a bias, just like everyone else. Should I temper my reports because I know I'm likely to skew them towards an extreme? Or would that very moderation be a disservice to the consumers of my reports? By lowering the threat assessment for any particular vulnerability, or couching it in terms of extenuating circumstances, I might be throwing off the evaluation process that others perform. If the security researcher says the problem is a medium threat, does that make it a low one in reality? What about a threat a researcher reports as low on the severity scale? Is that even worth looking at? More importantly, am I equipped to evaluate or quantify the extenuating circumstances surrounding a vulnerability? It's impossible for me to know the deployment details that could be mitigating, or exacerbating, any particular vulnerability. Is it fair for me to attempt to address this sort of unknown landscape? For now I'm afraid I don't have any answers to these questions, which leads me to believe that I should strive to be as fair and honest as possible, acknowledging that I'm probably being overzealous about severity, but admitting that ultimately end users of systems should be aware of vulnerabilities so they can make their own informed decisions.