Security Researchers in the Open Source Ecosystem
In his book Geekonomics, David Rice does a great job of quantifying and analyzing the costs of insecure software. Insecure software is everywhere, and it creates staggering problems not just for the enterprise but also for individuals. One of the major causes of insecure software is that not enough testing is done before software is released. Oftentimes developers rely on end users as product testers, reporting bugs as they find them. In the rush towards the latest and greatest features, debugging and testing are often overlooked, resulting in security flaws.
Open source software is just as guilty as its commercial counterparts in this respect. Different open source products handle these problems in distinct ways, so it is difficult to draw parallels across the entire product space. The way OpenBSD handles security vulnerabilities is dramatically different from the way that Drupal does. Many times, the mark of an open source project's maturity is the way in which it handles security bugs and vulnerabilities.
Security researchers benefit greatly from the access they have to source code when doing research on open source products. Because they can read the source, they can delve more deeply and thoroughly into open source projects than is possible with closed source solutions.
When a researcher finds a vulnerability in an open source project, they become part of that project's ecosystem. Oftentimes open source projects rely on their end users, and researchers, for security and bug reports in much the same way as commercial vendors. The essential difference is that with a commercial product the end user has paid for the software, and part of that payment entitles the user to a secure piece of code. With open source software there is no such explicit relationship between developers and users. Because of this unbounded relationship, confusion and misunderstanding often result. Researchers can point to their own efforts and extol them as work to help secure the product, thus making them members of the project community. Developers, on the other hand, may view the researcher as nothing more than an end user reporting a problem. This disparity in perception often leads to conflict.
Open source researchers are often admonished with the principle of "responsible disclosure." This principle states (in a nutshell) that the researcher should work with the vendor, keeping the results of their research secret, until such time as the software developers release a "patch" that corrects the vulnerability. While this may be appropriate for a closed source project, attempting to bind a researcher to this sort of behavior in an open source project may be more difficult, precisely because there is no binding agreement between the software distributor (the developers) and the consumer (the researcher). In fact, under most open source licenses the researcher effectively owns the source code just as much as the developers do. Thus, it stands to reason that if a researcher found a problem in their own code they would be well within their rights to tell whomever they chose about the problem. If the researcher takes the time to develop a patch of their own to address the issue, they may feel even more entitled to share their research openly.
Developers, on the other hand, must struggle with the needs of their consumer community as a whole. Handling security vulnerabilities is a very tricky affair. The developers must carefully tend their communities in order to maintain them, and releasing a constant barrage of security alerts and upgrades is likely to alienate their community or promote a sense of apathy towards the updates.
Even more difficult is the fact that the open presentation of security vulnerabilities often hurts a project's reputation. A project that is the subject of regular security vulnerabilities is perceived as fundamentally insecure. A potential consumer evaluating several competing projects may be persuaded by a simple tally of the number of reported security problems.
All these factors tie together to create an uneasy symbiosis between security researchers and open source developers. Developers must understand the motivations and desires of the researchers and researchers must take into account the demands and responsibilities that shape the decisions of developers. Of course, in each project and with each researcher these motivations and drives may vary.
Oftentimes security researchers who devote their time to open source projects seek only simple rewards. Recognition of their research in the form of links to their sites may be enough. Inclusion in the project community can be another motivating factor. In the end, most researchers desire some form of acknowledgment of their contribution to the security of the project.
Developers have a much more complex set of demands to juggle. Developers often would rather work on new features or develop new software than clean up bugs in existing code. Additionally, the developers must consider the needs of the entire community when addressing security vulnerabilities. They must weigh the seriousness of the vulnerability against the negative impact of releasing a security patch, and they must also consider the consequences of ignoring a reported security vulnerability. If users of the product become compromised as a result of an unaddressed vulnerability, the reputational cost to the project would surely be high.
In the end, each player in an open source project (the developers and the researchers) must understand their relationship with respect to the project. Fostering an environment of cooperation and inclusion is key to achieving a desirable outcome. When researchers feel recognized and included, they are much more likely to respect the requests of the developers. Transparent communications go a long way towards achieving this basic goal. If the researcher feels they are communicating with a disorganized or inconsistent entity, it can be frustrating. Additionally, if the researcher feels they are being excluded from the community, their incentive to work towards the increased safety of the project is severely diminished. Although it may be a hassle for developers to deal with researchers, doing so is critical to the success of open source projects.