Security Intelligence at Philly OWASP by Ed Bellis

25 May 2011
At the latest OWASP Philadelphia meeting on May 23rd, Ed Bellis, CEO of HoneyApps, Inc., spoke to the group about security intelligence. It was a wonderful talk, titled The Search for Intelligent Life, and was very thought provoking. In many ways, Ed's presentation was a response to the ideas presented in Shostack and Stewart's The New School of Information Security, which I've previously reviewed.

Ed began by laying out four stages of a proposed information security maturity model. The first stage is ignorance, typified by organizations that can't perform even basic auditing, such as knowing what assets are deployed. In this stage organizations have no idea where their vulnerabilities exist. Ed points out that application security is, in many ways, extremely new, and lags far behind the maturity of network security. The problem is compounded by the tendency of security teams to emerge from operations staff (read: IT support, network operations, and system administration) rather than from development staff (i.e. programmers). At stage one, security is completely reactive, responding to reports of intrusion and compromise but failing to proactively find and resolve issues.

Stage two is typified by the search for vulnerabilities. At this stage organizations are aware of problems, but can't quantify or qualify them. Organizations at this stage may, or sadly may not, realize that it could take years to assess each application in their environment. Even after inventorying all of their assets, organizations quickly realize that auditing each one does not scale; they could be stuck in a security audit for so long that by the time it is complete, applications have aged away, changed radically, or been replaced, making a response to the audit useless or impossible.

Stage three is the "scan and dump" stage.
At this level of maturity, organizations are able to perform running vulnerability assessments that result in thousand-page reports, which are turned over to dev teams who quickly "file them in the trash." Organizations mired at this stage typically suffer from a lack of communication and siloing that leaves security teams in a bubble. Security teams are divorced from process, or, if they are built into process, they are often seen as a roadblock to be bypassed. At stage three, organizations are mature enough to recognize vulnerabilities in their assets but incapable of fixing them, mainly because they cannot provide intelligent contextualization for the issues. Without metadata, vulnerabilities are characterized by a machine-assigned severity level that is often wildly inaccurate.

Stage four is the bleeding edge of this maturity model, and Ed went through a case study scenario to demonstrate how organizations might achieve this level of maturity. In stage four, organizations have a robust data repository and reporting capability. This facility can usually be found within most organizations, but is rarely purposed for security. Ed proposed that if organizations provisioned the data collection and organization capabilities they already possess to perform rich analysis of material they already have access to, useful security intelligence could be gleaned. In his case study, Ed showed how an organization could start by identifying a vulnerability, add that vulnerability to a database, and then connect that data to other data. This process begins by associating the vulnerability with simple asset data, such as the application the vulnerability exists in and the platform serving the application (J2EE, LAMP, etc.). Next the data can be tied to the host operating system, server software, and other machine data.
After that, the data can be linked to material about the development team responsible for the application (such as time to fix bugs, application life cycle, previous response time, etc.) as well as the operational unit responsible for the server. This growing body of knowledge can then be linked to policy considerations surrounding the vulnerability. Next, internal security log data can be linked in from IDS, IPS, firewall, WAF, and other defensive logs. Then external data can be linked in, such as compromise data from sources like the Verizon DBIR, DataLossDB, and various industry ISACs. Finally, the data can be linked to internal statistics from systems such as the bug tracker, build and development logs from version control, and so on.

At stage four, armed with a robust, rich database of material, security staff can run reports that extend well beyond simple vulnerability reports. Instead of simply reporting to a dev team that they have a cross-site scripting vulnerability in application X, the security team can look at factors such as how likely that vulnerability is to be exploited given the defenses surrounding the server; they can examine the severity of the vulnerability from the perspective of network topology or the criticality of data housed on the application server; and they can extrapolate the time it takes the assigned dev team to fix bugs, assigning priority based on the fixes most likely to be applied in a timely manner. One can easily see that this type of data greatly expands the effectiveness of the security team by providing deep insight into problems beyond the initial vulnerability report.
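The enrichment-and-prioritization chain described above can be sketched in a few lines of Python. Everything here — the field names, the sample data, and the toy scoring formula — is my own hypothetical illustration, not HoneyApps' actual schema or any formula from Ed's talk.

```python
# A toy sketch of the stage-four enrichment chain: start with a bare scanner
# finding, progressively attach asset, team, and log context, then rank the
# findings with a simple score. All names, data, and the scoring formula are
# hypothetical illustrations, not any vendor's real schema.

def enrich(finding, assets, teams, log_hits):
    """Join a raw finding with each layer of contextual data."""
    record = dict(finding)
    app = record["app"]
    record["platform"] = assets[app]["platform"]         # e.g. LAMP, J2EE
    record["criticality"] = assets[app]["criticality"]   # business importance, 1-10
    record["avg_fix_days"] = teams[app]["avg_fix_days"]  # historical dev response time
    record["log_hits"] = log_hits.get(app, 0)            # matching IDS/WAF entries
    return record

def priority(record):
    """Naive priority: scanner severity weighted by asset criticality and
    observed probing, discounted by how slowly the owning team ships fixes."""
    exposure = record["severity"] * record["criticality"] * (1 + record["log_hits"])
    return exposure / record["avg_fix_days"]

assets = {"storefront": {"platform": "LAMP", "criticality": 9},
          "wiki": {"platform": "J2EE", "criticality": 2}}
teams = {"storefront": {"avg_fix_days": 10}, "wiki": {"avg_fix_days": 30}}
log_hits = {"storefront": 4}

findings = [{"app": "storefront", "type": "XSS", "severity": 6},
            {"app": "wiki", "type": "SQLi", "severity": 9}]

ranked = sorted((enrich(f, assets, teams, log_hits) for f in findings),
                key=priority, reverse=True)
print([r["app"] for r in ranked])  # storefront outranks wiki despite lower raw severity
```

The point of the sketch is the inversion in the output: the wiki's SQL injection has the higher machine-assigned severity, but context (criticality, active probing, team responsiveness) pushes the storefront XSS to the top.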
For instance, if the security team notes that an XSS exists in an application on a host that performs critical functions, but also notes an entry in version control mentioning that the entire application presentation layer is being migrated, the security team can approach the dev team and suggest implementing XSS remediation in the new presentation layer, instead of simply firing off a report of an XSS that will (rightly) be ignored by a dev team that is in the process of replacing the entire source of the problem. Beyond effective reporting, stage four allows organizations to quickly identify their true highest-risk vulnerabilities by applying internal perspective to external reports. Organizations can see their relative risk quickly, and can assess the cost of fixing issues by examining historical data and linking issues to the specific groups responsible for fixing them. This data also allows organizations to make smart purchasing decisions by providing evidence to support, or refute, the value of new security tools.

Ed did point out several non-traditional tools that can be used to enhance this process. Many of them are familiar applications or programming languages that can easily be repurposed for security tasks. Data mining and reporting applications, as well as databases, are the most obvious of these tools. Perl, Ruby, or other programming languages can easily be used as well (Perl, as the Practical Extraction and Reporting Language, seems perfectly named for the job). Sed, awk, grep, and other Unix utilities are also extremely useful. Organizations might also be able to leverage existing data reporting and application flow tools that they may have already pulled in-house for marketing, quality assurance, or other business units.

The presentation was extremely thought provoking for a number of reasons. The first is that there aren't really any organizations operating at the fourth stage of this maturity model, so it is difficult to point to concrete successes.
Also, the investment required to achieve stage four visibility may be quite large, given that data must be collected, organized, and extrapolated by overworked security staff who, by Ed's own admission, often percolate out of operations and have little or no database, data management, or data mining experience. To me, the work required to reach stage four seems like a perfect job for outsourcing or consultants: organizations would incur a one-time expense to manage their transition and set up tools for stage four, but could then limit their ongoing investment to the current cost of the security team.

An alternative view of this model occurred to me as I listened to the presentation. Enterprise architecture is becoming increasingly popular with businesses as a way to streamline their IT onto a single platform or a limited number of platforms. Although decried for aggregating risk, this also makes solutions to problems easier to find and remediate. If your entire organization uses a single platform, and a vulnerability is discovered in that platform, it can be fixed in a single place. I proposed this as an alternative, and Ed emphatically recounted the graveyard of lost dreams that typifies his experience with this strategy. Extremely expensive and prone to failure, to Ed this seemed the worst possible approach to the issue.

Overall the presentation was extremely well done and well received. I think there were a lot of great ideas, and I saw many ways to improve my own security process by implementing various aspects of the fourth stage of this model. It strikes me that Ed's ideas are either revolutionary or completely off the mark, and I can't easily decide which.
Ed seems like an intelligent person with lots of great experience, so I'd lean toward revolutionary, but without a success story this model is a little like the theoretical computational exercises found in academia that seem like awesome ideas but may or may not work out in practice (think TPM chips). Regardless, the strategies and approaches suggested in stage four can provide immediate benefit to any organization, even one that can't advance completely into stage four and gets stuck dealing with thousand-page vulnerability reports clogging up its recycling bins once a quarter. I immediately recognized that my IDS doesn't contextualize many reports thoroughly enough, and that I certainly don't track the data aggregated by my IDS and my internal asset scans nearly well enough. The simple idea of pushing security data into a central repository, enriching it with metadata, and using that repository for intelligent decision making and analysis is something every organization should invest in as a quick, inexpensive way to add value to their security efforts with very little overhead.
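That central-repository idea requires nothing more exotic than an embedded database. Here is a minimal sketch using SQLite; the schema, sample rows, and risk expression are hypothetical illustrations of the approach, not a production design.

```python
# A minimal sketch of the "central repository" idea: one SQLite table holding
# findings alongside their contextual metadata, queryable for reports that go
# beyond the scanner's machine-assigned severity. Schema and data are
# hypothetical illustrations.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE vulns (
    app TEXT, vuln_type TEXT, severity INTEGER,
    criticality INTEGER, ids_hits INTEGER)""")
conn.executemany(
    "INSERT INTO vulns VALUES (?, ?, ?, ?, ?)",
    [("storefront", "XSS", 6, 9, 4),
     ("wiki", "SQLi", 9, 2, 0),
     ("intranet", "CSRF", 4, 5, 1)])

# Report: rank findings by severity weighted by business criticality and
# observed IDS activity, rather than by raw scanner severity alone.
report = conn.execute("""
    SELECT app, vuln_type, severity * criticality * (1 + ids_hits) AS risk
    FROM vulns ORDER BY risk DESC""").fetchall()
for app, vuln_type, risk in report:
    print(f"{app:10} {vuln_type:5} risk={risk}")
```

Even this toy version demonstrates the payoff: once findings and metadata live in one queryable place, "what should we fix first?" becomes a report anyone on the team can run.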