User Insecurity and Open Source Projects
Recently, while reviewing a Drupal module for code security, I noticed an interesting circumstance that got me thinking. The module itself had several security features built in and adhered to the Drupal secure coding guidelines. However, its installation instructions directed users to set it up in a way that introduced a huge vulnerability into the overall installation. Thinking about this, I wondered whether it should be the module developer's job to prevent the sort of situation a user could create by following the instructions. The installation guide should obviously be amended, but even doing so wouldn't prevent a user from recreating the same dangerous situation. Who should be responsible for protecting users from themselves? Should the Drupal core code base prevent such situations from even being possible? It's arguable that it should.
Preventing users from creating insecure environments is the job of responsible programs. However, it is nearly impossible to prevent a user from changing configurations in a way that creates a dangerous environment. The power a user is given when they control software, especially web-based software installed on a server, is great. Given the rapid move towards usability, many systems that were once beyond the technical grasp of the average end user are now readily available and approachable. This means that a whole class of users who have no idea of the security concerns facing the software they employ is now running it. It is unreasonable to expect these users to understand, appreciate, or even attempt to mitigate the security threats facing them.
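As a rough sketch of what such a protection might look like (the file path and the specific check here are hypothetical, not taken from Drupal or from the module in question), an application can audit its own configuration at startup and simply refuse to run when it detects a dangerous setup, rather than trusting the user to notice:

    import os
    import stat
    import sys

    # Hypothetical path; a real application would know where its own
    # configuration lives.
    CONFIG_FILE = "/var/www/example/settings.php"

    def refuse_insecure_config(path):
        """Abort startup if the configuration file is world-writable.

        A world-writable settings file lets any local user inject
        configuration (or code) into the application, so it is treated
        as a fatal error rather than a warning the user can ignore.
        """
        try:
            mode = os.stat(path).st_mode
        except FileNotFoundError:
            sys.exit("Refusing to start: %s does not exist." % path)
        if mode & stat.S_IWOTH:
            sys.exit(
                "Refusing to start: %s is world-writable. "
                "Run 'chmod 644 %s' and restart." % (path, path)
            )

    if __name__ == "__main__":
        refuse_insecure_config(CONFIG_FILE)
        print("Configuration check passed; starting up.")

The point is not the specific check but the posture: the program treats an unsafe configuration as its own failure to start, not as the user's problem to diagnose.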
Without responsible programmers placing roadblocks in the way, many systems can be reconfigured by users in ways that make them unstable and/or insecure. However, many programmers are loath to add such roadblocks, because doing so limits the functionality of their systems: it removes the vulnerabilities, but it also disables features (after all, a vulnerability is just an exploitable feature that can be used to bypass security). This is especially true for open source software, much of which is designed for programmers and initially adopted by programmers or the technical elite. These people accept the risks introduced by many features, but are also knowledgeable enough to mitigate them.
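One compromise, sketched below with a hypothetical command-line flag, is to keep the risky feature but make it an explicit opt-in: the knowledgeable users described above can still accept the risk, while everyone else gets a safe default.

    import argparse

    parser = argparse.ArgumentParser(description="Hypothetical server startup.")
    parser.add_argument(
        "--allow-remote-admin",
        action="store_true",
        help="Expose the admin interface to remote hosts. "
             "Dangerous: enable only if you understand the risk.",
    )
    args = parser.parse_args()

    # Safe default: the admin interface listens only on localhost.
    # Technical users who accept the risk must say so explicitly,
    # so the "feature" survives without being the silent default.
    bind_host = "0.0.0.0" if args.allow_remote_admin else "127.0.0.1"
    print("Admin interface listening on %s" % bind_host)

Nothing is removed here; the dangerous behavior is merely moved behind a deliberate decision.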
At a certain point, many open source projects begin to attract a class of user that is increasingly non-technical and interested mainly in the functionality the software provides. It is at this "tipping point" that many open source projects survive or perish, because once non-technical users begin adopting the software, the development model must change. No longer is the package being created by programmers for internal consumption; instead, the software is being created for end users. This is the point at which many of the dangerous "features" built into the software need to be re-examined. During this examination, open source projects face a crossroads: do they alienate the new users by sticking to the model that the software is developed for programmers, or do they embrace the new consumer community? If they choose the latter, the project is forced to address concerns such as user interface, usability, and ultimately security. While it is difficult to face each of these concerns, it is critical to do so. Re-examining use cases of the software from new perspectives is vital to creating a broadly targeted software package.