SEI Advanced Incident Handling - Day 3

The Software Engineering Institute (SEI), part of Carnegie Mellon University and home to CERT, offers an Advanced Incident Handling (AIH) course that I am currently attending. The course is offered in several locations; I'm taking it in Pittsburgh at the SEI building, which sits between the CMU and University of Pittsburgh campuses.

Day 3 of the Advanced Incident Handling class continued at the same breakneck pace. I would again emphasize the quality and diversity of the other participants. There are 11 students in the class, including folks from Allegheny Digital, CERT, the Japanese Army and Navy, the FBI training center in Quantico, VA, the British Ministry of Defence, and others. It's a very high-quality group of peers, which helps the class flow and sparks some very interesting sidebar discussions.

Day three began with a discussion of distributed denial of service (DDoS) attacks and defensive measures. We covered the technical definition and underpinnings of DDoS. Denial of service means the lack of availability of resources to legitimate users, and it may or may not be a technical endeavor. A DoS might not target a service directly; it might instead be designed to disrupt supporting infrastructure, knocking the target service offline. Generally this involves searching for a "choke point" such as DNS or a service provider. The module pointed out that the defining characteristics of DDoS are distribution and coordination.

Next we discussed some typical configurations of DDoS botnets and attack platforms. We also worked through some of the impacts of DDoS: financial costs, collateral damage, and effects on backups and logging (IDS), among others. We also covered some typical motivations for DDoS, including financial ones.

Then we went over the configuration of a typical IRC botnet. We observed the commands used to initiate DDoS attacks and, in particular, the ease with which a DDoS can be launched from a typical platform.

We then turned to mitigation strategies for DDoS attacks. The primary mitigation seemed to be preparation: over-allocating resources, maintaining a backup netblock, and active filtering are good proactive strategies for mitigating DDoS attacks.

We also went over strategies for detecting DDoS attacks, and the instructor pointed us to a CERT document on the topic.
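One detection signal the class touched on is anomalous traffic volume. As a minimal sketch (not from the course materials, and with a made-up threshold), flagging sources whose request count in a sampling window far exceeds the norm might look like:

```python
from collections import Counter

def flag_heavy_sources(source_ips, threshold=100):
    """Return the set of source IPs seen more than `threshold` times
    in the sampled window -- one crude signal of a volumetric DDoS."""
    counts = Counter(source_ips)
    return {ip for ip, n in counts.items() if n > threshold}

# Fabricated sample window: one chatty source among background noise.
window = ["10.0.0.5"] * 150 + ["192.168.1.%d" % i for i in range(50)]
print(flag_heavy_sources(window))  # {'10.0.0.5'}
```

A real deployment would of course look at rates over time and many more features, but the counting idea is the same.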

Responding to a DDoS attack is a tricky process. It is important to proceed carefully lest you overreact and block critical resources or exacerbate the effects of the attack. The class stressed working with your ISP to best address DDoS attacks. The use of rate limiting can also be effective.
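Rate limiting is commonly implemented with a token bucket: requests spend tokens, tokens refill at a fixed rate, and bursts are capped by the bucket's capacity. A minimal illustrative sketch (parameters invented for the example):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter (illustrative sketch only)."""
    def __init__(self, rate, capacity):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=10)  # 5 req/s, burst of 10
results = [bucket.allow() for _ in range(15)]
# the first 10 requests pass; the rest are dropped until tokens refill
```

In a DDoS response this logic usually lives in a router, firewall, or ISP device rather than application code, but the mechanics are the same.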

We then continued our "war game" style exercises. During these exercises teams participate as members of a fictitious CSIRT. We get an initial briefing and can request extra material. The exercise is designed to teach investigative and response skills.

After the exercise we discussed handling major events. We started by defining a major event as an event "sufficiently large in scope that it significantly stresses your ability to respond." CSIRT response usually covers preparation, protection, detection, triage, and response. However, during a major event, response changes: additional people become involved and communication becomes critical. One key to successful major event response is having a plan in place prior to the event. It is also important to address escalating scopes of major events, for instance outlining scenarios in which you might have multiple concurrent major events.

The day concluded with a hefty module on artifact analysis. An artifact is defined as any evidence left behind by an intruder. These could be logs, text files, binaries, toolkits, or almost anything else. We discussed the difference between forensic analysis and artifact analysis: artifact analysis does not concern itself with procedures for presenting evidence in court, chain of custody, or rules of evidence.

The goal of artifact analysis is to understand functionality, see the artifact in operation, or enumerate the path of attackers. The class emphasized that artifact analysis should be done by trained programmers and computer architects.

We discussed strategies for artifact analysis. We covered "safe" procedures for collecting artifacts and strategies for storing, sorting, and organizing them. We also discussed taking hashes of known artifacts for easy comparative analysis against future samples.
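The hashing idea is straightforward in practice: compute a cryptographic digest of each collected artifact and look it up against a store of known samples. A minimal sketch (the known-artifact store and labels here are fabricated for illustration):

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """SHA-256 hex digest of an artifact's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical known-artifact store: digest -> analyst label.
known = {sha256_hex(b"evil payload v1"): "toolkit-A dropper"}

sample = b"evil payload v1"   # stand-in for a collected binary
digest = sha256_hex(sample)
match = known.get(digest, "no match")
print(digest, "->", match)
```

Identical digests mean identical bytes, so this catches exact re-use of a tool; even a one-byte change produces a different digest, which is why fuzzy or comparative techniques complement it.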

We discussed that not every CSIRT has the capabilities or resources to perform artifact analysis. However, there are several different levels of artifact analysis: surface analysis, runtime analysis, and static analysis. Surface analysis examines the artifact using strings, hashes, logs, or configuration details (i.e., easily retrievable text); comparative analysis can also be used to identify characteristics the malware might share with other samples. Runtime analysis involves observing the artifact while it executes and monitoring the system, network, and other changes it induces. Static analysis is the actual reading of code, which might include disassembling or reverse engineering the artifact.
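The "strings" step of surface analysis just pulls printable runs out of a binary. A minimal sketch in the spirit of the Unix `strings` tool, run over fabricated sample bytes:

```python
import re

def extract_strings(data: bytes, min_len: int = 4):
    """Return printable ASCII runs of at least min_len bytes."""
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    return [m.decode("ascii") for m in re.findall(pattern, data)]

# Fabricated blob standing in for a collected binary artifact.
blob = b"\x00\x01GET /cmd HTTP/1.0\x00\xffc:\\temp\\bot.exe\x07\x08ab"
print(extract_strings(blob))
# ['GET /cmd HTTP/1.0', 'c:\\temp\\bot.exe']
```

Embedded URLs, file paths, and command strings like these often give a quick first read on an artifact's purpose before any runtime or static work begins.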

We also went over the reverse engineering skills necessary for static artifact analysis. This covered using unpackers, building monitoring services, and performing host, filesystem, process, and network monitoring. It also included static analysis of shellcode.