Security is the Sexy Part of QA

29 July 2010

While listening to the Risky Business podcast recently I heard it said that "security is the sexy part of QA" (Quality Assurance). These words struck me as both true and insightful. As much as many security researchers would hate to admit it, QA testing is at the core of a lot of what we do. This is especially true with application security. Glancing over my shoulder I can pick out at least three QA-testing-related books on my shelf, right next to others about rootkits, malware, and other security topics. Part of what QA testers do is put software through its paces, see what breaks, and report those bugs back to developers for fixes. Any truly mature software development process should include some quality assurance.

Several notable security practices have emerged directly out of QA. Fuzzing is one example. By mangling input to applications and testing the responses, QA testers can often find problems. Security researchers can find things like cross-site scripting, SQL injection, and buffer overflows using the exact same techniques, especially when dealing with closed source software.
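To make the overlap concrete, here is a minimal sketch of the mutation-fuzzing idea described above: randomly mangle known-good inputs, feed them to a target, and record anything that blows up. The `parse_record` target and its 0x7F "magic byte" format are entirely hypothetical, invented for illustration.

```python
import random

def mutate(data, n_flips=4, seed=None):
    """The 'mangling' step: randomly overwrite a few bytes of a sample input."""
    rng = random.Random(seed)
    buf = bytearray(data)
    for _ in range(n_flips):
        if buf:
            buf[rng.randrange(len(buf))] = rng.randrange(256)
    return bytes(buf)

def fuzz(target, corpus, iterations=100, seed=0):
    """Feed mutated corpus samples to `target`; collect inputs that raise."""
    rng = random.Random(seed)
    crashes = []
    for i in range(iterations):
        sample = mutate(rng.choice(corpus), seed=seed + i)
        try:
            target(sample)
        except Exception as exc:
            crashes.append((sample, exc))
    return crashes

# Hypothetical fragile parser standing in for the application under test.
def parse_record(data):
    if data[0] != 0x7F:            # IndexError on empty input, too
        raise ValueError("bad magic byte")
    return int.from_bytes(data[1:3], "big")

crashes = fuzz(parse_record, [b"\x7f\x00\x2a\x00\x00\x00\x00\x00"])
```

Real fuzzers (even in 2010, tools like SPIKE or Peach) add coverage feedback, grammar awareness, and crash triage, but the QA lineage is plain: generate inputs, observe failures, file bugs.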

QA is an arduous, thankless job that takes a lot of patience and deliberate attention. In order for QA to be successful it must follow a prescribed, documented, repeatable process so that flaws can be recreated. It would behoove security researchers to follow the exact same process when testing software. This allows for repeatable tests and lets researchers share testing methods. It would also allow two security researchers to independently reproduce the same results from a test.

Of course, some security testing, such as spotting logic flaws, does not lend itself to QA testing methods. Some features of software are functionally correct but present security problems. For instance, if a certain domain of information is supposed to be reserved for a specific authorization group, you can't simply fuzz the application or click a lot of buttons to induce a crash and discover the problem. Instead you have to document the security architecture of the application and then test the application to ensure that it meets the architectural specification. Because no formal language exists (outside academia) for such a notation, it would be difficult to implement automated testing for such security issues.
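Even without a formal notation, the spirit of spec-driven testing can be sketched: write down the documented access-control matrix as data, then audit the application's actual behavior against it in both directions (unexpected grants and unexpected denials). Everything here, the resources, roles, `SPEC` table, and `can_access` function, is a hypothetical stand-in for a real application.

```python
# Hypothetical access-control matrix transcribed from the documented
# security architecture: resource -> set of roles allowed to access it.
SPEC = {
    "/reports/finance": {"finance", "admin"},
    "/reports/hr":      {"hr", "admin"},
    "/public/news":     {"anonymous", "finance", "hr", "admin"},
}

def can_access(role, resource, acl=SPEC):
    """Toy stand-in for the application under test; a real audit would
    drive the application itself (e.g. authenticated HTTP requests)."""
    return role in acl.get(resource, set())

def audit(spec, all_roles, check=can_access):
    """Compare observed behavior against the spec; return all violations."""
    violations = []
    for resource, allowed in spec.items():
        for role in sorted(all_roles):
            granted = check(role, resource)
            if granted and role not in allowed:
                violations.append(("unexpected-grant", role, resource))
            if not granted and role in allowed:
                violations.append(("unexpected-deny", role, resource))
    return violations
```

Note that this finds the functionally-correct-but-insecure cases fuzzing misses: a buggy `check` that quietly grants the "hr" role access to `/reports/finance` would never crash anything, yet `audit` flags it immediately.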

It is true though that security, at its core, is quality assurance. As such it is important that security researchers study the methods of QA testers and, whenever possible, work hand in hand with QA testers. In certain situations security testing can easily be offloaded to QA testers, which might even make the job more attractive.

Open source projects should take special note of the close relationship between QA and security. While many projects open alpha or beta releases to testers, it is exceedingly rare for open source projects to invite open security testing prior to release. If open source projects were to integrate security testing into their beta QA there might be far fewer security flaws in FOSS. Much of the backlash against the "many eyeballs make all bugs shallow" argument in security has been that in FOSS few, if any, of the eyeballs are those of trained security researchers, and therefore security bugs proliferate. If FOSS projects actively sought out and encouraged the participation of security researchers, this might not necessarily be the case.