Thinking Security

The rise of the APT (http://taosecurity.blogspot.com/search/label/apt) as a reality has pushed several issues to the forefront of the information security community. The APT has highlighted, in stark detail, the lack of security in software. Many of the publicized, high-profile APT incidents involve attackers gaining an organizational foothold using targeted client-side attacks, and even 0-day vulnerabilities.

In “Code Complete” (http://www.amazon.com/Code-Complete-Practical-Handbook-Construction/dp/0735619670), Steve McConnell cites statistics on the number of errors typically encountered per 1,000 lines of code (KLOC): roughly 10-15 errors per KLOC. If one considers the possibility that even one of those bugs is security related, and that most programs run to tens or hundreds of thousands (or millions) of lines of code, the problem becomes apparent. These numbers suggest that budding security researchers need not rush to find the latest 0-day – as long as code is being produced, there will be security vulnerabilities. Security vulnerabilities are now a permanent fact of life.
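
To make that concrete, here is a back-of-the-envelope sketch in Python. The 10-15 defects per KLOC figure is McConnell's; the one-percent "security relevant" fraction is purely a hypothetical assumption chosen for illustration, not a figure from Code Complete.

    # Expected defect counts at various code sizes, using McConnell's rough
    # figure of 10-15 defects per KLOC. SECURITY_FRACTION is a hypothetical
    # assumption for illustration only.
    DEFECTS_PER_KLOC = (10, 15)
    SECURITY_FRACTION = 0.01  # assume 1 in 100 defects has security impact

    for kloc in (10, 100, 1000):  # 10K, 100K and 1M lines of code
        low, high = (rate * kloc for rate in DEFECTS_PER_KLOC)
        print(f"{kloc:>5} KLOC: {low}-{high} total defects, "
              f"roughly {int(low * SECURITY_FRACTION)}-{int(high * SECURITY_FRACTION)} security relevant")

Even under that hypothetical one-percent fraction, a million-line code base would be carrying on the order of a hundred security-relevant defects at any given time.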

Historically, as computers became affordable enough for home users, malware emerged through the infection of portable media, mainly floppy disks. Because of the “air gap” imposed by the technology, the spread of malware, and the exposure of most users to vulnerability, was limited. Virus code simply could not spread without human intervention. The Morris worm changed that paradigm, but at the time of its release it affected only infrastructure servers, rather than end-user software. As network connectivity has become more affordable, however, increasing numbers of consumer-grade computers are always on and always connected to the network.

Today, computers are largely viewed as useless without a network connection. The proliferation of network connectivity means that computers can always reach the resources of remote hosts, but it conversely means that remote hosts can always reach these computers as well. The explosion in networked machines means that vulnerabilities in software are instantly exposed to attack from a global network. As long as a machine is on, it is exposed. The same phenomenon that has allowed software manufacturers to deploy services such as automatic updates in the middle of the night allows attackers to exploit software at almost any time of day. Not only are computers connected 24/7, but their numbers are increasing. Software and network connectivity are migrating into new arenas such as portable devices, consumer electronics, and hardware other than traditional computers.

Reacting to this explosive growth in connectivity and vulnerable attack surface is a daunting challenge. The earliest response was the firewall – a networking solution to a software problem. The firewall was meant to mitigate the threat by restricting access to services. Many in the security community argue that the firewall is dead, and that the paradigm upon which it is based is broken. The argument runs as follows: there are vulnerabilities in software, and a new technology was deployed to mitigate them; the purpose of the firewall is therefore to shield vulnerabilities rather than to fix them. Additionally, firewalls must, by their very nature, allow certain traffic to pass. Over time, application developers reacted to these restrictions by writing new programs and services to use protocols and ports that firewalls commonly allow (think TCP port 80), in order to bypass the hassle that firewalls present. Thus firewalls become useless, as programmers deploy newly vulnerable programs in a way that prevents the firewall from offering any protection. The wretched outgrowth of this process is the introduction of new firewalls, for instance the web application firewall, perpetuating an endless cycle.
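
As a minimal illustration of that bypass, the Python sketch below wraps an arbitrary, non-web payload in an ordinary HTTP POST to TCP port 80. The host, path, and payload are placeholders invented for the example; the point is simply that a port-based firewall which permits outbound web traffic permits this traffic too.

    # Any application can wrap its own protocol in HTTP on TCP 80, a port most
    # firewalls allow. "example.com" and "/ingest" are placeholders.
    import http.client

    payload = b"\x01\x02 arbitrary application data, not web content"

    conn = http.client.HTTPConnection("example.com", 80)
    conn.request("POST", "/ingest", body=payload,
                 headers={"Content-Type": "application/octet-stream"})
    print(conn.getresponse().status)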

The problem with this argument is that it assumes code can actually be secure. Numerous tomes have been written on the topic of secure coding, but what if, for a moment, we consider the possibility that no code is secure? Even if code may be called secure today, new threat vectors are being discovered every day. A decade ago, cross-site request forgery (XSRF) had never been considered. Code that would today be instantly recognized as heinously vulnerable to XSRF might have passed the most stringent security audit a decade ago, simply because the attack vector was not known. Thus, even if one assumes that a piece of code can be thoroughly vetted, reviewed, and deemed secure today, that does not mean the software will be secure tomorrow.
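
To illustrate, here is a hypothetical sketch (using Flask only for brevity) of the kind of state-changing endpoint that would have looked unremarkable before XSRF was a named attack class. The route, form fields, and move_funds() helper are invented for the example.

    # Hypothetical, vulnerable-by-design example: a state-changing endpoint
    # protected only by a session cookie, with no CSRF token or origin check.
    from flask import Flask, request, session

    app = Flask(__name__)
    app.secret_key = "example-only"

    @app.route("/transfer", methods=["POST"])
    def transfer():
        # The browser attaches the session cookie automatically, so a forged
        # cross-site POST from another page is accepted just like a real one.
        if "user_id" not in session:
            return "not logged in", 401
        move_funds(session["user_id"], request.form["to"], request.form["amount"])
        return "ok"

    def move_funds(user_id, to_account, amount):
        pass  # placeholder for the hypothetical business logic

A pre-XSRF reviewer would likely have checked the authentication and input handling and signed off; the flaw only becomes visible once the attack class itself is known.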

Given, then, that all software contains bugs, that we cannot certify software as safe with any measure of certainty, that software is under almost constant threat from the network, and that traditional mitigations such as firewalls may be useless, what is the information security community to do? I'm not sure I have the answer, but I think approaching infosec from this paradigm might be a useful exercise. How does one plan an information security program or response policy with the full understanding that it may be impossible to prevent vulnerability exposure (and perhaps intrusion)? What, then, are the goals of information security? Is defence in depth an option? Should defence become data-centric? Does security become a moving target, with every day starting a new struggle to keep systems up to date and protected from the latest emerging threat vector?

I think the last option is probably the one we should be looking at. Security is a process rather than a product, as Bruce Schneier is known for saying (http://www.schneier.com/crypto-gram-0005.html). Perhaps we need to face each new day as a challenge and put people at the forefront of a process, rather than appliances at the forefront of a product- (and compliance-) based approach.