Dear Security Team: You Suck!

27 August 2013

I'm a typical developer or administrator. I've just shown up to work. I drop my bag, grab a cup of coffee, and log into my machine. I've already looked at my schedule on my smartphone and my day is free. I've spent the morning commute thinking about my development roadmap or new projects I can knock out today. The day is a blank slate and I'm excited to tackle some productive work for a change. I open up my e-mail really quickly before I get started and there it is - a ticket from the security team. My heart skips a beat. Hopefully nothing catastrophic happened! I begin to read the ticket and there's an extremely brief description of a "vulnerability" that I don't quite understand. There is nothing else. The implication is that this is serious - otherwise why would I get a ticket? I begin doing some web searches to figure out what a CVE number is and to find more information about this vulnerability. After half an hour I figure out that this could be a problem, maybe, but the issue manifests on a test system that nobody outside the organization has access to. I do some more searching. Eventually I find a tool that will test for this vulnerability. It takes me about an hour to install the tool and all of its dependencies and get it running - and sure enough, the vulnerability is real! Curious, however, I put a copy of the tool in my remote development environment and run it. Nothing. I'm confused: if the vulnerability only appears inside our network, it shouldn't really be a problem. Can't security monitor our entire internal network anyway? I e-mail back to update the ticket, report my findings, and ask for more explanation. I quickly get a response saying that the security team already tested and confirmed the issue, that I need to take it seriously, and that not fixing it is irresponsible. I glance at the clock and notice my day is half over. I mentally prepare a letter that begins: "Dear security team, you suck!"

Sadly this circumstance plays out time and again at organizations in all industries. It even happens to internal security teams that report to other security teams. It engenders bad feelings and a desire to avoid the security team and shirk security-related communications. It leads to an overall breakdown of the security function, and it's entirely the security team's fault. What's going on here? Why does this happen, and how can we work to fix it?

Security Policy

Security testing should follow a number of clear principles in order to be effective. The first of these is a concise security policy. I don't mean the type of policy that says you can't browse porn at work; I mean the kind of policy that lays out priorities and goals. Your security policy should define the types of problems the security team is attempting to address and a list of current priorities. Those priorities should be based on data showing that the issues are serious, that they represent a present danger, and that they're addressable.

Your security team should never choose issues to tackle by lottery or personal interest. Security should expect the very reasonable response to any trouble ticket - "why should I care about this?" - and have a clear, concise answer at the ready. There are nearly unlimited vulnerabilities in any organization, but very real limits on resources. How do you choose which vulnerabilities to address and which to ignore? Can you demonstrate that the vulnerabilities you choose to address represent real threats to the organization? If so, how? Anecdotal evidence, cases you heard about in the news or from peers, and your personal gut feelings are insufficient justification for a threat. If your team relies on these kinds of justifications, expect push-back and resentment from your customers. If your team "feels" something is an issue, your customers disagree, and your team is forced to fall back on the line "we're the security guys, we know better," then you're doing your job all wrong.

How can we choose priorities in a way that we can quantifiably justify to the organization? Any number of metrics can be used. How often has a particular vulnerability been involved in an incident? Would a compromise of a certain service carry a regulatory or other cost? How often are probes for the service observed on your network? Is the service externally accessible? Are there publicly available exploits for the vulnerability? If you can point to these numbers, then you can express a high degree of confidence that a vulnerability is important, above and beyond your team's subject matter expertise.
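To make that concrete, here's a minimal sketch of what quantifiable prioritization might look like. The weights, field names, and example findings are all hypothetical - the point is simply that every input is a number you measured rather than a feeling you had.

```python
# A minimal sketch of scoring findings from measurable signals, assuming you
# already collect these counts. Weights and fields are hypothetical.

def priority_score(finding):
    """Combine measurable signals into a single comparable score."""
    score = 5 * finding["past_incidents"]               # incidents it actually caused
    score += 4 if finding["regulatory_cost"] else 0     # compromise carries compliance cost
    score += min(finding["probes_per_week"], 10)        # observed probe volume, capped
    score += 3 if finding["externally_accessible"] else 0
    score += 4 if finding["public_exploit"] else 0
    return score

findings = [
    {"name": "SQL injection in internet-facing web app", "past_incidents": 2,
     "regulatory_cost": True, "probes_per_week": 40,
     "externally_accessible": True, "public_exploit": True},
    {"name": "default password on internal test printer", "past_incidents": 0,
     "regulatory_cost": False, "probes_per_week": 0,
     "externally_accessible": False, "public_exploit": False},
]

for f in sorted(findings, key=priority_score, reverse=True):
    print(f"{priority_score(f):3d}  {f['name']}")
```

The exact model matters far less than the fact that anyone who asks "why should I care?" can be shown the inputs.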

Communication is Key

After you choose priorities, you must communicate them to your organization. You should ensure that everyone understands the security team's motivations and goals; this keeps the channels of communication open between the team and its clients. Furthermore, you should seek feedback on your priorities. You may have overlooked data or circumstances that invalidate certain chosen priorities. Giving your organization a voice helps to avoid such problems.

Make it Scientific

Once you choose a priority you must then choose a way to test for the conditions you wish to address. Testing shouldn't be ad hoc; it needs to be scientific. It needs to be automated and repeatable. Without automation your tests won't scale, and they'll be error prone. Furthermore, ad hoc tests are transparently out of step with priorities. If a test supports a stated security priority, put in the time to make the test itself a priority. If testing is a program that someone runs by hand to populate a spreadsheet on their desktop, you've got a problem. Your tests should run on a schedule, populate a database, and integrate with a ticketing and tracking system.
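As a rough illustration, here's a minimal sketch of a scheduled, repeatable check, assuming a stated priority of eliminating reachable telnet services. The target list and the ticket-filing step are placeholders: a real deployment would pull targets from inventory, run from a scheduler such as cron, and file tickets through your tracking system's API.

```python
# A minimal sketch of an automated, repeatable check that records results in
# a database instead of a spreadsheet. Targets and the ticket step are
# hypothetical placeholders.
import socket
import sqlite3
from datetime import datetime, timezone

TARGETS = ["10.0.0.10", "10.0.0.11"]  # hypothetical inventory
PORT = 23  # telnet, assuming that's a stated priority

def port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def record(conn, run_ts, host, port, state):
    """Append one result to the findings table so runs can be compared."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS findings "
        "(run_ts TEXT, host TEXT, port INTEGER, is_open INTEGER)"
    )
    conn.execute(
        "INSERT INTO findings VALUES (?, ?, ?, ?)",
        (run_ts, host, port, int(state)),
    )
    conn.commit()

def main():
    run_ts = datetime.now(timezone.utc).isoformat()  # one timestamp per run
    conn = sqlite3.connect("findings.db")
    for host in TARGETS:
        state = port_open(host, PORT)
        record(conn, run_ts, host, PORT, state)
        if state:
            # Placeholder: a real run would file a ticket with full context.
            print(f"would file ticket: {host}:{PORT} reachable")
    conn.close()

if __name__ == "__main__":
    main()
```

Because every run lands in the same database, the same data can later answer "are we making progress?" - something a spreadsheet on someone's desktop never will.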

Be Specific

Speaking of tickets: when you send off a security ticket, you must include complete information. The moment the security team files a ticket, it is claiming another team's time and resources. Take the time to be courteous and respectful of the impact a ticket will have. Vague references to a problem or a vulnerability are unhelpful. Explain the exact nature of the problem that was detected. Explain how the detection works, and why it was run (back to priorities). Explain the potential impact of the vulnerability as well as the steps to remediate it. Also include some sense of the urgency of the issue. Not every security issue is worth disrupting someone's whole day. Without context, ticket recipients have no idea whether they can conveniently schedule a fix into the week's work or have to drop everything and fix the problem right away.
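One way to keep yourself honest is to treat the ticket's contents as a required structure rather than free-form text. The sketch below is hypothetical - the field names and example values aren't from any particular tracking system - but if you can't fill in every field, the ticket probably isn't ready to send.

```python
# A hypothetical skeleton of the fields a useful security ticket carries.
from dataclasses import dataclass

@dataclass
class SecurityTicket:
    summary: str      # one-line statement of the exact problem detected
    detection: str    # how the detection works and how to reproduce it
    rationale: str    # which stated priority this supports, and why
    impact: str       # what a compromise would actually cost
    remediation: str  # concrete steps to fix the issue
    urgency: str      # drop everything, or schedule it into the week?

ticket = SecurityTicket(
    summary="Default credentials accepted on printer-12 (telnet, port 23)",
    detection="Nightly automated credential check; reproducible on demand",
    rationale="Supports the stated priority of eliminating default "
              "credentials on network-reachable embedded devices",
    impact="Reachable from the staff VLAN; usable as a pivot point and to "
           "capture print jobs",
    remediation="Set a unique password, or disable telnet in favor of SSH",
    urgency="Medium: schedule within two weeks; no active probing observed",
)
print(ticket.summary)
```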

How to do it Wrong

Someone in your security team discovers a telnet brute forcer and figures they'll try and brute force every telnet server in your organization. They successfully identify a dozen multifunction printers and other embedded devices with easily guessed passwords and send off e-mails to IT support to fix the problems. Job well done.

This type of testing has all the hallmarks of security gone wrong. Firstly, the test lacks priority. What is the justification for spending time testing telnet ports? Just because the service exists doesn't mean it deserves attention. What if the data shows that unpatched machines are the leading cause of malware infection and lost staff time in the organization? Does the security team even collect this sort of data? If not, it can't possibly justify testing anything. Without aligning resource expenditure to some metric and goal, security efforts are just flailing in the dark. These types of programs are bound to engender ill will from customers and demonstrate a lack of professionalism and maturity in a security team.

Secondly, this type of test isn't scientific. It isn't automated or repeatable, and there is no stated measure of progress. Does the team wish to reduce the number of easily guessed telnet credentials? If so, how? If the test isn't repeated, how can progress be measured? How often will testing be repeated? What if there is a very real justification for a service having an easily guessed password? Is there a way to indicate that, although the service is vulnerable, other mitigations are in place? All of these questions need to be addressed or the test is essentially worthless.

Finally, communications will fail with this test, largely because the test isn't justified and lacks metrics. There is no way a ticket about the issue discovered by the test can be crafted in a helpful way. Without addressing the first two problems with this approach, the final problem of communication can never be overcome.

The Customer is Always Right

Far too many security teams see their role as enforcement. This is exactly the wrong way to approach security. Security is a service, like any other IT service, and the organization is full of customers, not perpetrators or suspects. Security goals need to align with that service role. How can security best serve the organization (priorities)? How can tickets be crafted to demonstrate that security is attempting to add value, not punish (communication)? If someone questions a security resource expense, how can we turn to metrics to show that security adds value (scientific process)? Start thinking of your organization as a group of customers and you'll build a stronger, more valuable security team.

Doing Security Correctly

Before you ever send off a security ticket, stop and ask yourself whether the ticket aligns with priorities. Better yet, don't test for vulnerabilities unless you can justify that the testing aligns with priorities. Although it's tempting to run off and "do something," every security team should start with observation. Collect data about what is happening, how, and why. Far too often security teams fall into a completely reactive state: chasing problems, putting out fires, and generally fighting the last war. To be proactive, security teams have to step off the hamster wheel, take a holistic view of the environment, collect data, and develop a scientific process.

Computer security isn't voodoo; it's part of computer science. The cornerstone of the scientific process is a repeatable experiment with verifiable results, and security should adopt this approach. First, measure the environment and establish goals. Next, test for cases where you can address issues to meet those goals. Develop a process for systematically addressing a priority, and a separate way to measure progress. Establish a periodic review so that you can evaluate your success (or lack thereof). If you can do this, you're well on your way to building a mature, respected security team that can demonstrably add value to any organization.
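Closing the loop might look something like the sketch below, which reuses the hypothetical findings.db schema from the earlier scan example to compare the latest run against the previous one. A real review would segment results by priority and track trends over a longer window, but even this much turns "are we improving?" into a number.

```python
# A minimal sketch of the periodic-review step, assuming the hypothetical
# findings.db schema from the scan example above.
import sqlite3

conn = sqlite3.connect("findings.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS findings "
    "(run_ts TEXT, host TEXT, port INTEGER, is_open INTEGER)"
)
runs = conn.execute(
    "SELECT run_ts, SUM(is_open) FROM findings "
    "GROUP BY run_ts ORDER BY run_ts"
).fetchall()
conn.close()

if len(runs) >= 2:
    (_, previous), (_, latest) = runs[-2], runs[-1]
    print(f"open findings: {previous} -> {latest} "
          f"({latest - previous:+d} since last run)")
else:
    print("not enough runs yet to measure progress")
```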

Or you can keep fighting the vulnerability du jour - but don't be surprised by the contempt of your co-workers...