Pen Tests are Bullshit


Recently I've spotted an argument against pen testing gaining increasing traction in the computer security industry. Articles such as Problems with Penetration Testing and a talk by Tenable Network Security's CSO Marcus Ranum on Risky Business #85 are widening the dialogue about the issue. Having just returned from InfoSec Institute's Ethical Hacking training, I feel pretty close to the issue. Much of the InfoSec Institute training is designed to prepare people to enter the pen testing field, so I basically spent a week observing the industry from within.

Pen testing as a tool can be extremely useful, but all too often I think it's done completely wrong. Black box pen testing is a flawed model (in a black box test the pen testers attack the system without any privileged information, simulating actual attackers). In a black box pen test the testers are incentivised to make the client lose. They hunt around for exploits and, when they find one, leverage it to escalate privilege and penetrate deeper into the target organization. The idea is that a pen test is more valuable than a vulnerability assessment: while the latter points out all possible vulnerabilities (including false positives, findings that may or may not actually be dangerous), the former graphically demonstrates a problem and proves its existence (i.e. a pen test doesn't produce false positives).

There are several arguments against this model of testing. The first, clearly articulated by Marcus Ranum, is that the incentive for the tester is to make the client lose, and the client only wins if the tester fails. This adversarial, non-collaborative arrangement is bound to have difficulty producing any sort of positive results. More importantly, pen testers can point out the problems your system has right now, but those results are temporal: system A might be suffering from flaw B today, but fixing that won't protect system A from flaw C that emerges next week. A pen test points out problems at a fixed point in time and space. Fixing the problems it identifies might prevent the same pen test from succeeding again, but it won't necessarily stop a different pen test a year down the line.

The alternative to a penetration test is a full security audit. This process involves gray or white box testing: the tester must look beyond the immediately exploitable vulnerabilities and determine why the architecture is failing. Why do the vulnerabilities exist in the first place? Is there effective patch management in the organization? Are coding standards inclusive of security principles? Are resources devoted to critical infrastructure? Doing a full security audit should protect an organization not only from current threats, but also from future ones.

Of course, after a full security audit it might be useful to perform periodic pen tests to check on the implementation, maintenance, and progress of the audit's findings. But even those pen tests aren't valuable unless there is a follow-up to examine why they succeeded or failed.

I've heard of pen testers who turn a test around in a week. Given such a short time frame it is nearly impossible for anything valuable to emerge from the test. The tester is likely firing off blind tools and probing the target by rote, as sketched below. That may prove an effective gauge of the organization's resistance to script kiddie attacks, but it's not a responsible security review.
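To give a sense of what I mean by rote probing, here is a minimal sketch in Python of the kind of mechanical sweep a rushed tester leans on; the target hostname and port list are placeholders I've made up for illustration, not anything from a real engagement:

```python
import socket

# Placeholder target; in a real engagement this would be an in-scope host.
TARGET = "scanme.example.org"
COMMON_PORTS = [21, 22, 23, 25, 80, 110, 143, 443, 3306, 3389]

def probe(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for port in COMMON_PORTS:
    state = "open" if probe(TARGET, port) else "closed/filtered"
    print(f"{TARGET}:{port} {state}")
```

A loop like this (or its commercial-scanner equivalent) tells you which doors answered on a given afternoon; it says nothing about why they were left open.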

Hiring a pen tester simply to try to break into your organization is a waste of money in the vast majority of circumstances. Security can't be guaranteed; it can only be measured in gradations. If the tester fails to get in, it doesn't mean your organization is secure. If the tester does get in, it doesn't ensure that the most vulnerable part of the organization has been exposed. Even if the results of the pen test are thoroughly reviewed and fixes are implemented, the exercise may result in misallocation of resources: allocation based on a pen test is destined to skew towards the vulnerabilities the tester happened to find, which isn't always sensible.

A thorough review of an organization's assets, their respective value, their topology, and the available security resources must be the basis of an effective security review. A pen test doesn't take any of these elements into consideration; it just pokes at the low-hanging fruit. A pen test can't determine the relative value of assets to an organization, and given the short time span and black box nature of most pen tests, it doesn't stand much of a chance of identifying a vulnerability that relies on the complex interconnectedness of almost any modern system.

What I mean to say is that a pen test can easily point out a SQL injection flaw in your website, but it can't easily demonstrate the much more dangerous vulnerability of unencrypted database backup tapes sitting in an employee's unlocked car. A pen test can find the obvious, but the nuanced security dangers that put critical assets at risk aren't going to be found without a thorough, top-down evaluation involving not only the tester, but also organizational personnel at all levels.
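To make the "obvious" half of that contrast concrete, here is a minimal sketch, assuming a made-up users table, of the kind of injectable query a black box tester can prove in minutes, alongside the parameterized version that closes it; neither is anyone's real code:

```python
import sqlite3

# Tiny in-memory database standing in for a web app's backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

def find_user_vulnerable(name):
    # Classic injectable pattern: user input pasted straight into the query.
    query = f"SELECT name, is_admin FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_parameterized(name):
    # Placeholder binding keeps the input as data, not SQL.
    return conn.execute(
        "SELECT name, is_admin FROM users WHERE name = ?", (name,)
    ).fetchall()

# A payload a black box tester would try almost immediately.
payload = "' OR '1'='1"
print(find_user_vulnerable(payload))     # dumps every row: flaw "proven"
print(find_user_parameterized(payload))  # returns nothing, as intended
```

The point isn't that this class of bug doesn't matter; it's that it is exactly the kind of finding a week-long black box engagement produces, while the backup tapes in the unlocked car never show up in the report.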

Hiring a pen tester isn't always a waste of money, but it needs to be part of a broader, more comprehensive security regime in order to be effective. Having an in-house pen testing team that does white, gray, and black box testing to determine the effectiveness of a security plan drawn up prior to the testing is completely reasonable. Unfortunately, especially due to regulation, I fear that many organizations simply turn to a pen test to fulfill their security needs, leading to a false sense of security without necessarily improving their security posture.