Journey to the Centre of the Breach
Ben Downton
June 2, 2010
Abstract

Computer forensics is no longer exclusively the domain of law enforcement investigators. The same techniques applied to gathering evidence for use in court can also be applied to investigating a security incident in order to provide the victim with information and assurance. In this report, a case study is presented that details the tools and techniques used in the investigation of a breach of an FTP server, from the initial log file analysis through to reverse engineering the discovered malware.
Acknowledgements

I would like to thank the University of Bedfordshire and Dr. Paul Sant for providing support for this project, MWR InfoSecurity for giving me a platform to perform and publish this work, and finally Rhiannon for diligently proof-reading and supporting me throughout.
Contents

1 Introduction
  1.1 Problem Statement
  1.2 Aims and Objectives
  1.3 Literature and Tool Review
    1.3.1 Forensic Investigations
    1.3.2 Malware Analysis
    1.3.3 Threats and Exploitation
    1.3.4 Tool Review

2 Background
  2.1 Key Players
  2.2 Victim's Actions
  2.3 Recommendations for Victim
  2.4 Account Compromise

3 Log File Analysis
  3.1 Identifying the Attacker
    3.1.1 Logon Failures
    3.1.2 Suspect Accounts
  3.2 Identifying Attacker Activity
    3.2.1 First Login
    3.2.2 Enumerating Permissions
    3.2.3 File Uploads
  3.3 Conclusions

4 Malware Analysis
  4.1 Antivirus Considerations
  4.2 Static Analysis
  4.3 Live Analysis - Playing in the Sandbox
    4.3.1 Dropping the Payload
    4.3.2 Vulnerability Exploitation
  4.4 Conclusions

5 Reverse Engineering and Unpacking
  5.1 Packers, Wrappers and Binders
  5.2 Unpacking the Virus
    5.2.1 Anti-Debugging
    5.2.2 Manual Coding Artefacts
    5.2.3 Bypassing Invalid Instructions
    5.2.4 The Unpacking Loop
    5.2.5 Identification
  5.3 Conclusions

6 Remedial Actions

7 Conclusions

A Code
  A.1 FTPCHK3.php Removal
Chapter 1
Introduction

1.1 Problem Statement
A client has reported that the FTP server (supplied by the victim) that is used to store the client's data has been misconfigured. A full investigation has been demanded by the client to ensure that their data has not been compromised. As a result of the initial investigation, suspicious activity by a user has been noted and further investigation by a forensic examiner has been requested.
1.2 Aims and Objectives
The report will present a case study for an investigation into a security incident. The investigation will be described from the point of view of an external examiner and will address the issues of the client-supplier relationship as well as the detailed techniques used. As the case study comprises evidence gathered from a fully operational environment, techniques of evaluating the relevance of suspect information will be discussed. The goal of the case study is to illustrate the tools and techniques used to conduct an investigation into a security breach to establish the full cause and extent of such an incident. In all stages detailed technical output will be given to illustrate the findings and results, including:

• Client/Supplier relationship and potential issues
• Detailed log file analysis
• Static file analysis
• Live malware analysis
• Reverse engineering and unpacking malware
Finally, the conclusion of the work will be to illustrate the impact caused by the security incident and detail some of the failings that led to the incident. Countermeasures that would prevent a similar incident from occurring will also be discussed. The outcome of this work will be:

• Presentation by example of a methodology that evolves as an investigation continues
• Highlighting the advantages and disadvantages of forensic tools and techniques
• Raising awareness of how security weaknesses are exploited in the real world
• Encouraging manual approaches to forensic analysis where automated tools may fail
• Furthering the field of computer forensics through documenting methods of defeating a known malware packer
1.3 Literature and Tool Review
The field of computer forensic analysis is not as rapidly changing as other security fields, such as penetration testing, as the core tenets defined by the ACPO guidelines will apply in any situation. However, when using forensic techniques for incident response, it is important that the investigator has knowledge of current threats, as without this knowledge a new attack may go undiscovered. This is even more crucial when dealing with malware; anti-virus companies have large amounts of resources to put towards defeating malware, so attackers must invest comparable effort to be successful.
1.3.1 Forensic Investigations
The core tenets of a forensic investigation are defined by the ACPO guidelines, which apply to all forensic investigations. These guidelines are available from either the ACPO or 7Safe website and contain details on the proper procedure for dealing with the acquisition of data and subsequent analysis from various electronic devices. Specific analysis techniques are not detailed; however, an investigator that conducts an investigation within the frameworks presented can be confident of the integrity of any evidence and results gathered. A commonly referenced book is File System Forensic Analysis by Carrier (2005). The book covers FAT, NTFS, Ext2, Ext3, UFS1 and UFS2 file systems in great detail and is an excellent reference guide for a forensic investigator. Though commonly used digital forensic tools can understand a range of file systems and recover large quantities of information from them automatically, an understanding
of the underlying structure allows an investigator to take a manual approach. The book also presents examples on using the Sleuth Kit and Autopsy tools (which are included in the Helix live distribution). As will be shown in this report, a manual approach supported by tools has the best chance of success. It is vital that an investigator has a good understanding of the software they are investigating. To that end, the supporting documentation for Ipswitch WS FTP server was a particularly useful resource as it described in detail the commands used and the format of the log file entries. This then allowed searches to be constructed with a greater degree of accuracy.
1.3.2 Malware Analysis
As malware analysis is one of the fastest changing fields, the majority of information on malware infections (and how to defeat them) is anecdotal evidence on the Internet. This typically falls into two categories; blogs, where a user presents details of an infection, and forums, where a user requests help in dealing with an infection. Whilst the anecdotal evidence presented in these mediums has a higher chance of being incorrect, especially with malware infections having many variants, it is also more easily verifiable. Any research that may be appropriate can be quickly tested on the malware sample in question and the results compared - this approach is seen in Section 5.2.4 when comparing this analysis to Martyanov's (2008) analysis of the Lighty Compressor. The article written by Martyanov also raises the point that research on more obscure malware may not necessarily be conducted in English, as this article had to be translated from Russian. The meaning of the post was clear in this translation, but any hypotheses in other languages should be treated with extreme caution, as their meaning or intent could be lost in translation. Some of the other blogs with articles relevant to this research included 'RemoteDesktop' and 'the Digital Me'. There are not many books available on malware threats, as the field changes rapidly. Articles and presentations on malware analysis are more common but typically revolve around new detection techniques or tools, such as Preda et al. (2008), Kolbitsch et al. (2009), and Cha et al. (2010). For analysis details of specific infections, the best sources of information are knowledge bases maintained by anti-virus vendors, such as the TrendLabs blog, McAfee Threat Resources and BitDefender Defense Center.
1.3.3 Threats and Exploitation
For information on particular vulnerabilities the CVE (Common Vulnerabilities and Exposures) resource, which indexes publicly known information security vulnerabilities, is particularly useful. This site was used to identify a known vulnerability in the system, which is discussed in Section 4.3.1. A similar resource
that relates specifically to Microsoft products is the Microsoft Security Bulletin, detailing vulnerabilities and supplying patches to resolve them. A vital source of information on current threats is research presented at security conferences. Holt et al. (2009) presented research on predicting threats in the Russian hacker community based on information gathered from LiveJournal accounts. Similarly, Granick and Opsahl (2009) presented research on high profile computer crime cases. Richards and Ligh (2009) and Percoco and Ilyas (2009) both presented at Defcon 17 on real world malware samples extracted from attempted attacks.
1.3.4 Tool Review
A key part of a forensic analysis is in using the right tools and techniques to reach the appropriate conclusions. Particularly with incident response, where timing may be more critical, it is vital that the investigator has a toolkit that allows them to deal with any possible situation. The tools used in this investigation are:

• grep - a command line tool written for Unix systems for searching for text
• whois - a tool for querying Regional Internet Registry databases (such as RIPE) to determine registration information for IP addresses and domains
• hexedit - a tool for viewing and editing the raw data of a file in hex format
• BackTrack - a live Linux distribution containing a number of security testing and forensics tools
• Helix - a live Linux distribution containing tools and features geared towards forensic analysis
• VMWare - virtualisation software used to create a virtual machine to be used as a sandbox environment for malware analysis
• AVG Antivirus - anti-virus software for detecting and removing malware
• FileAlyzer - file analysis tool developed by SaferNetworking
• Sysinternals - a suite of tools designed to help manage, troubleshoot and diagnose Windows systems and software
• Immunity Debugger - a debugger specifically designed for the security industry, supporting malware analysis, reverse engineering and exploit development
• IDA Pro - commercial disassembling and debugging software
• PEiD - a tool for identifying packers used in malware samples
Chapter 2
Background

The server is known to be running a vulnerable version of Ipswitch WS FTP software (3.1.4) with an Internet facing web interface. This version of software contains a number of known vulnerabilities, including buffer overflows (allowing for remote code execution) and Denial of Service (DoS) conditions. These vulnerabilities were known to Victim at the time as a result of previous penetration tests highlighting the issue; however, the resources to resolve the issue had not been made available. No file integrity monitoring or anti-virus software is in place on the FTP server. Whilst it can be argued that file integrity monitoring software for an FTP server may produce too many alerts to be valuable, software can be configured to monitor critical system, archive and log files for alterations (a simple illustration is sketched below). The log files provided were assumed to be complete and safe from corruption, however due to the nature of such a security incident their integrity cannot be guaranteed. Whilst no evidence was found that the log files had been compromised, it should be noted that a sophisticated attacker that gains access to the server may alter the log files. The log files provided were approximately 30Gb in size and dated back over 18 months.
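As an illustration of the kind of targeted monitoring described above, the following Ruby sketch records hashes of a small set of critical files and reports any that change between runs. It is not software that was in place at Victim; the watched paths and baseline filename are illustrative assumptions.

require 'digest'
require 'json'

# Files to watch and where to keep the baseline - illustrative values only
WATCHED  = ['C:/WINDOWS/system32/drivers/etc/hosts', 'D:/FTP-config/settings.ini']
BASELINE = 'baseline.json'

# Hash every watched file that currently exists
current = {}
WATCHED.each do |path|
  current[path] = Digest::SHA256.file(path).hexdigest if File.exist?(path)
end

# Compare against the previous run and report any changes
if File.exist?(BASELINE)
  previous = JSON.parse(File.read(BASELINE))
  current.each do |path, digest|
    puts "CHANGED: #{path}" if previous[path] && previous[path] != digest
  end
end

File.write(BASELINE, JSON.generate(current))

A scheduled check of this kind, pointed at the served directories, could have flagged the modified executables described in Chapter 3 far earlier.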
2.1 Key Players
The key players in this incident are named below, with a brief summary of their involvement:

• Investigator - The third-party investigator for this particular investigation
• Victim - The owner of the FTP server that has been reported compromised. The investigation was commissioned by Victim
• Client - A client of Victim and the reporter of the security incident
• Attacker - The person identified as causing the security incident. Note that the compromised account is named 'attacker'
• Suspect 1 - An additional suspicious account identified by Victim
• Suspect 2 - An additional suspicious account identified by Victim
2.2 Victim's Actions
In many environments, particularly commercially driven ones, it is typical that the organisation will wish to contain the incident as soon as possible. It is important therefore that the processes undertaken preserve any evidence. This can include quarantining and sealing any infected removable media (such as USB devices or DVD media), shutting down systems so that no further changes are made, or preventing mobile devices from being remotely erased. The investigator must be prepared for the possibility of forensic evidence being contaminated by prior investigations conducted by a non-expert. The investigator should also understand that commercial considerations may mean that forensic acquisition is not possible. Investigations can encounter delays when third parties are involved, particularly when on-site access is required to secure facilities. In addition, delays in obtaining information (such as log files, backups, information regarding personnel etc.) can lengthen the investigation. It is known that Victim conducted a limited investigation prior to engaging a specialist - artefacts of this investigation will be shown in Chapter 3. In particular, Victim has disabled the 'attacker' account, which was confirmed by the most recent log entries recording attacker ERR:logon disabled when attempts were made to log in as that account. The IP addresses identified as attempting to log in to the account after it had been disabled were noted and added to the suspect IP list. At the time of the initial investigation the log files were stored on the FTP server itself. To limit the risk that the log files could be tampered with they have been stored in a secure location away from the targeted server. The log files were too large to be stored on read-only media (such as a DVD-ROM), which would have ensured their integrity for future investigation. The log files were instead sent via email in a PGP signed and encrypted zip file to ensure that the confidentiality and integrity of the files were preserved in transit.
2.3 Recommendations for Victim
It was recommended that Victim continue to monitor server logs for failed access attempts and ensure that further breaches do not take place. Monitoring of the log files after the account had been disabled has already revealed IP addresses that can be added to the suspect list. Any other activity might reveal more information about the attacker. Firewall and IDS/IPS logs were also closely monitored for suspicious IP addresses (identified as a result of this investigation) that may signify further
attacks on the network. An opportunistic attacker may not continue the attack once the vulnerabilities have been resolved; however, a determined attacker that is targeting Victim specifically will likely seek out other avenues of attack. In order to maintain a good working relationship with Client it is also recommended that Victim engage in discussions about how best to maintain the service offered. It may not be the place of the investigator to determine how Victim should proceed, however that does not preclude them from offering advice from a trusted position. In this case, Client has demanded that provision is made for all file transfers to be conducted over SFTP. This is a more secure alternative to FTP offering encryption, message integrity checking and identity verification by, for example, public keys. The use of SFTP offers additional protection against traffic sniffing or man-in-the-middle (MitM) attacks to recover Client's credentials; it will not, however, provide protection against the use of weak or easily guessable passwords. As a general recommendation, Victim should ensure that sufficient technical resource is available to the investigation. Any delays in acquiring information can cost Victim in wasted time on the part of the investigator. Furthermore, failure to properly contain the incident in a timely manner can expose the network to a greater risk of attack.
2.4 Account Compromise
Before the analysis begins, it is important to consider the methods by which an account could become compromised. This allows the investigator to identify avenues of further investigation based on the likelihood of occurrence. In this particular case, there are four major scenarios to consider:

• Malicious ex-employee
• Exploitation of software vulnerability
• Leaked credentials
• Brute Force/Dictionary attack

The first case is that of a malicious ex-employee with remote access to their account. This scenario is likely, particularly for a company that does not have a formalised joiners and leavers process for staff. A survey by Net Security (2009) revealed that 41% of employees have taken sensitive data with them to a new position, indicating the value that employees place on their (ex-)company's data. At this point it becomes useful to discuss the circumstances in which the employee left and any other relevant information (such as position, access levels and technical ability) with the HR department. For this case study it was noted that the employee left on reasonably good terms, though it was likely that they would have continued working in the same industry. The employee was not particularly
technically proficient, and would have known what areas of the server they would and would not have been able to access. The exploitation of a vulnerability in the FTP server is also a possible scenario, given that the version in use contains a number of known vulnerabilities (for which exploits are well documented). Again this is a situation where useful information can be gathered through discussions with the Victim. The account that has been compromised did not have a high level of access; it is likely that exploitation of the vulnerabilities in the software would have resulted in root access to the system rather than access to an individual lower privilege account. Even had the software been up to date, the possibility of a zero-day (0-day) attack being executed against the server should be considered. It is possible that an account could become compromised through leaked credentials. This becomes even more of an issue when there are limited password policy controls in place (such as no password expiration or password history). Account credentials could be leaked unintentionally through a number of methods. They could become compromised in transit, as the FTP protocol does not support encryption (unlike the more secure alternative SFTP), meaning credentials are sent over clear-text channels. Account credentials could also become compromised by targeting the owner of the credentials directly, as they may be cached on the owner's system or even stored in a text file or written on a piece of paper. The owner of the credentials could also be the target of a phishing attack where, if successful, they would unwittingly submit credentials to an attacker. It was noted by Victim that the original owner of the account was particularly 'mobile', attending conferences and working from public locations, which would have increased the risk that the owner would be the target of an attack. A common method through which accounts can become compromised is by an automated brute force or dictionary attack. This type of attack involves an attacker attempting to log in to accounts with multiple different passwords until a successful login occurs. This attack is not particularly stealthy, as large volumes of traffic and log file evidence can be identified, but can have a high chance of success where a weak password policy is in place. Account controls, such as applying temporary lockouts for multiple logon failures, regular forced password changes and a password history, can significantly reduce the threat of brute force attacks. The success of a brute force attack can be made more likely through the use of specially crafted dictionaries. Where a brute force attack would attempt all passwords between, for example, a and ZZZZZZZZ, a dictionary attack uses a list of commonly used passwords. This increases the likelihood of finding the password quickly at the cost of potential success should the password used not be in the dictionary. Typically a brute force attack will show up in logs as multiple logon failures, followed by a successful login, followed by more failures. This is due to the automated nature of such attacks, where an attacker will commonly leave an attack tool or script running in the background whilst performing other tasks.
Chapter 3
Log File Analysis

3.1 Identifying the Attacker
The first stage of analysis is to find methods of identifying the attacker. This could be through behavioural patterns or common links such as IP address or geographical location. Since the account was known to be in use by a legitimate user up until a certain date, it is important to clarify the difference between legitimate activity and attacker activity.
3.1.1 Logon Failures
Brute force and dictionary attacks can be easily identified in log files as they typically have a number of key features (discussed further in Section 2.4), such as:

• Multiple logon failures
• Logon failures in rapid succession, indicating automation
• A logon success normally followed by more logon failures

Evidence of a brute force attack would identify the earliest time that the account was compromised, as well as identify whether the attack was successful (through evidence of a logon success). A search through the log files for a high number of logon failures would highlight evidence of a brute force attack. Figure 3.1 shows an example log entry for failed logon attempts. A number of logon attempts for the 3rd of March at approximately 18:13 onwards were identified from the same IP address (marked below as 71.62.X.X). Once an IP address has been identified as suspicious it is always advisable to try and pinpoint the owner of that IP address. Whilst it is common for an organised attacker to utilise compromised machines in many geographical locations (to evade detection), it is less common for malicious insiders to have these resources available. It is also useful to identify not just the owner of an IP address but the geographical
location, which may point to areas for further investigation or allow a conviction to be pursued.

0303 18:13:09 (00001580) 71.62.X.X:64298 connected to 10.250.50.6:21
0303 18:13:09 (00001580) ftp.victim.com D(0) 71.62.X.X UNK attacker
0303 18:13:09 (00001580) ftp.victim.com U(Logon_Fail) 71.62.X.X attacker ERR:logon failure (A2)
0303 18:13:41 (000012ec) 71.62.X.X:55101 connected to 10.250.50.6:21
0303 18:13:41 (000012ec) ftp.victim.com D(0) 71.62.X.X UNK attacker
0303 18:13:41 (000012ec) ftp.victim.com U(Logon_Fail) 71.62.X.X attacker ERR:logon failure (A2)
Figure 3.1: Logon Failures by the 'attacker' account

The whois tool can be used to query Regional Internet Registries (RIRs), which hold ownership and status information for domains and IP addresses. An example of part of the whois output for the IP address above is given in Figure 3.2, identifying the IP address as belonging to Comcast (an ISP) in New Jersey. This IP address was queried with Victim; after some research it transpired that this IP address belonged to the US office that had conducted some of the initial investigation. The logon failures identified were a result of the IT department attempting to guess the password for the 'attacker' account in order to investigate further. Whilst this did not significantly hinder the investigation, it highlights some of the issues involved with non-experts conducting investigations as part of the initial response. Where organisations do not have a specific incident response team, it is critical that the consequences of any actions taken prior to engaging an expert are fully understood and documented.

whois 71.62.X.X
OrgName:    Comcast Cable Communications Holdings, Inc
OrgID:      CCCH-3
Address:    1800 Bishops Gate Blvd
City:       Mt Laurel
StateProv:  NJ
PostalCode: 08054
Country:    US
Figure 3.2: whois entry for a suspect IP address

Further searches for logon failures by the attacker account were conducted using a simple grep command: grep "attacker ERR:logon failure" *. Whilst a number of logon failures were identified, there was not a significant number (or in succession) that would indicate a brute force attack.
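Where the volume of failures is larger, a short script can make any clustering easier to see than raw grep output. The following Ruby sketch is illustrative only (the input filename and the address-matching expression are assumptions, not part of the original investigation); it tallies logon failures per source address so that rapid-fire attempts from a single address stand out.

# Count logon failures per source address; assumes failure lines were first
# extracted with, for example: grep "ERR:logon failure" *.log > failures.txt
counts = Hash.new(0)
File.foreach('failures.txt') do |line|
  # Take the first dotted address-like token (the logs shown here mask the last octets as X.X)
  ip = line[/\d{1,3}\.\d{1,3}\.\w{1,3}\.\w{1,3}/]
  counts[ip] += 1 if ip
end
counts.sort_by { |_ip, count| -count }.each do |ip, count|
  puts "#{count}\t#{ip}"
end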
3.1.2 Suspect Accounts
Whilst the investigation was being carried out, Victim had been continuing their own investigation of the log files. In addition to the investigation into the attacker account, it was also requested that the logon attempts in Figure 3.3 be investigated.
0827 21:40:50 (000012a0) 192.28.x.x:47310 connected to 10.250.50.6:21
0827 21:40:50 (000012a0) ftp.victim.com D(0) 192.28.x.x UNK XAUT 2 :8C<667C4B2:?56D=>4<7:?@6C?;862D7>983:5@2
0827 21:40:50 (000012a0) ftp.victim.com S(0) 192.28.x.x suspect1 logon success (B1)
0905 20:12:24 (00000e48) 12.192.x.x:50176 connected to 10.250.50.6:21
0905 20:12:24 (00000e48) ftp.victim.com D(0) 12.192.x.x UNK XAUT 2 88>5C?5;85>:C>828:=@@>4:;1>7D=5?85>5B36:7=>5@76>;;
0905 20:12:24 (00000e48) ftp.victim.com S(0) 12.192.x.x suspect2 logon success (B1)
Figure 3.3: Suspicious logon attempts

The messages in Figure 3.3 were flagged by Victim as suspicious due to the apparent 'randomness' of the data in the message. As this type of data was not commonly seen by Victim it was not known whether the data was indicative of abnormal behaviour (such as an attempt to launch exploit code or fuzz the service) or legitimate operations. The seemingly random code in the messages above is typical behaviour for an XAUT logon attempt, where the username and password combination is encrypted. Examination of other similar attempts revealed that this activity was normal not only for the suspect1 and suspect2 accounts but also for a number of other accounts in normal use. Furthermore, a search for any activity by the accounts suspect1 and suspect2 matching the list of suspect IP addresses revealed no matches. This information, in addition to the fact that no upload or download attempts were made to Client's folders, was enough to exclude the suspect1 and suspect2 accounts from further investigation for the time being.
3.2 Identifying Attacker Activity
In order to narrow down the time window under investigation it is important to establish at what point an attacker gains access to an account. Since there was no evidence of a brute force attack taking place it may be more difficult to establish when the attacker gained control of this account. Analysis of behavioural patterns, however, can indicate whether activity was conducted by a legitimate user or an attacker.
3.2.1 First Login
A critical part of the investigation was to identify exactly when the account was first used fraudulently. The possibility of the attacker being the original owner of the account was not ruled out entirely, and so identifying patterns of use could both narrow down the window under investigation and provide clues about the identity of the attacker. Given that the log files provided cover over two years of traffic (and are over
30Gb in size), narrowing down the window of investigation in the early stages can significantly speed up future searches. A search for all successful logons to the attacker account revealed no activity between March and November. It was confirmed by Victim that the original owner of the account left in March, which corresponds to the time that the activity stopped. This information narrows down the window of intrusion (the time between the first attacker login and incident containment) to five months, significantly smaller than the period covered by the log files provided. The IP addresses used to access the account from November onwards were logged and added to the suspicious IP list. It should be noted that, for the results given in this section, the IP address of the attacker is not consistent. This is typical for remote attacks, where an attacker will 'pivot' attacks through other compromised machines. Launching attacks from various different locations makes it more difficult to identify the attacker, both at a physical level (for example to pursue a conviction) and at a logical level (such as blocking access from specific 'at risk' IP addresses). The majority of attacks appeared to originate from countries such as Ukraine, China and Russia, with one instance from a library in the US. The wide range of attack origins indicates that it is likely that compromised machines have been used as the final hop before the attack on Victim's servers. Public machines such as those in libraries are often not subject to strict maintenance and security controls, and are commonly found to be compromised by malware or used by attackers for anonymity. Furthermore, launching attacks from systems located in countries that may not be politically cooperative with the target country (in this case the UK) provides another level of protection, as it is unlikely that local law enforcement will be able to negotiate access to the compromised machines. A report by 7Safe (2010) noted that approximately 10% of the attacks under their investigation were launched from these countries, though it is acknowledged that those attacks could themselves have been launched from compromised machines.
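The same log extraction approach can be used to build the suspect IP list and to make the gap in account activity visible. The sketch below is illustrative only; the pre-filtered input file and the grep expression used to produce it are assumptions rather than the commands run in this investigation.

# Record the first and last date on which each source address logged on
# successfully as 'attacker'; assumes input pre-filtered with something like:
#   grep "attacker logon success" *.log > attacker_logons.txt
seen = {}
File.foreach('attacker_logons.txt') do |line|
  date = line.split.first   # the MMDD token at the start of each entry (no year is recorded)
  ip   = line[/\d{1,3}\.\d{1,3}\.\w{1,3}\.\w{1,3}/]
  next unless date && ip
  seen[ip] ||= [date, date]
  seen[ip][1] = date
end
seen.each do |ip, (first, last)|
  puts "#{ip}\tfirst seen #{first}\tlast seen #{last}"
end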
3.2.2 Enumerating Permissions
Once an attacker has gained access to the account it is likely that they will attempt to identify what level of access they have obtained. This could be through a number of methods, such as accessing group or user policy information, through built-in controls (such as whoami), or by simply attempting to perform actions and noting success or failure. It was clear that the attacker was attempting to enumerate the permissions of the account through automated attacks in two distinct ways. The process of automatically enumerating permissions is not consistent with the theory that the account had been accessed by a malicious ex-employee. This was based on information provided by Victim. Firstly, the employee would have known what directories were accessible (as these would be the clients of this particular employee). Secondly, as the employee was not known to be technically proficient, it is unlikely that automated tools or scripts would be used.
The first method that the attacker used to enumerate permissions was to identify what directories the account had access to by attempting to systematically read the contents of subdirectories. The attack was performed by changing the current directory to each subdirectory in alphabetical order (identified by the CWD command) followed by an attempt to list the contents with the NLST (name list) command. Figure 3.4 shows a sample of the activity, indicating an automated attack to enumerate the permissions of the attacker account. Figure 3.5 shows an example of where the automated script was performing recursive queries, as the attacker attempted to change to a directory that was actually a file. Information such as this may not provide a definite answer for what caused the incident but can nevertheless contribute towards building a profile of the attacker, such as the level of sophistication and resources available. An attack such as this is not particularly stealthy or sophisticated.

0228 16:23:59 (00001334) ftp.victim.com D(0) 69.73.X.X attacker PASV DATA connection to 69.73.X.X:32982
0228 16:23:59 (000012c4) ftp.victim.com D(0) 69.73.X.X attacker CWD /Arnaco
0228 16:23:59 (000012c4) ftp.victim.com D(0) 69.73.X.X attacker PASV
0228 16:23:59 (000012c4) 69.73.X.X attacker:ftp.victim.com forced close listener socket
0228 16:23:59 (000012c4) ftp.victim.com D(0) 69.73.X.X attacker NLST
0228 16:23:59 (00001370) ftp.victim.com D(0) 69.73.X.X attacker PASV DATA connection to 69.73.X.X:33737
0228 16:24:00 (000012c4) ftp.victim.com D(0) 69.73.X.X attacker CWD /Arselis Tech
0228 16:24:00 (000012c4) ftp.victim.com D(0) 69.73.X.X attacker PASV
0228 16:24:00 (000012c4) 69.73.X.X attacker:ftp.victim.com forced close listener socket
0228 16:24:00 (000012c4) ftp.victim.com D(0) 69.73.X.X attacker NLST
0228 16:24:00 (00001370) ftp.victim.com D(0) 69.73.X.X attacker PASV DATA connection to 69.73.X.X:36827
0228 16:24:00 (000012c4) ftp.victim.com D(0) 69.73.X.X attacker CWD /ASOT LTD
0228 16:24:00 (000012c4) ftp.victim.com D(0) 69.73.X.X attacker PASV
0228 16:24:00 (000012c4) 69.73.X.X attacker:ftp.victim.com forced close listener socket
Figure 3.4: Successful CWD and NLST commands

0228 16:24:13 (000012c4) ftp.victim.com D(0) 69.73.X.X attacker CWD /Centaur/db_backup.zip
0228 16:24:13 (000012c4) ftp.victim.com U(NoFolder) 69.73.X.X attacker ERR:CWD /Centaur/db_backup.zip (Centaur)
0228 16:24:13 (000012c4) ftp.victim.com D(0) 69.73.X.X attacker CWD /Centaur/forecast.xlsx
0228 16:24:13 (000012c4) ftp.victim.com U(NoFolder) 69.73.X.X attacker ERR:CWD /Centaur/forecast.xlsx (Centaur)
Figure 3.5: Failed CWD and NLST commands

The second method that the attacker used to enumerate permissions was to identify which of these directories could be written to. This was achieved by attempting to systematically upload, and then immediately delete, a file named tmp5842258422.html in various directories. It is thought that the attacker deleted this file immediately as a method of evading detection, as copies of the file left in various folders may arouse suspicion. It should be noted however that the
attacker did not make particular efforts to conceal their presence elsewhere, and it is possible that this behaviour was typical of an automated tool that a lower skilled attacker could use. The file named above (and shown in Figure 3.6) could not be recovered for further analysis. The file may have been a dummy file (with minimal size) to allow the uploads to occur as quickly as possible. The file may also have contained malicious code to be executed by either the attacker or an unsuspecting victim at a later date. In this instance it can be seen that the file was approximately 18Kb in size - enough to contain a malicious web shell that could potentially be accessed through the web front end.
Figure 3.6: /Delete of sample file
3.2.3 File Uploads
Downloads to Client Folder

As Victim was interested primarily in any activity that might have affected their client, a key part of the investigation was identifying whether the client's information had been compromised. It was clear from the activity of enumerating permissions that the account was capable of uploading or downloading information from the client folder, and so for Victim to provide a statement any download activity must be rigorously identified. A search for any uses of the RETR command related to the client folder revealed no matches, indicating that the client's intellectual property remained uncompromised. Whilst it is possible that the attacker was able to compromise the log entries and modify them (a scenario that should also be considered when performing an investigation), no evidence had been found to indicate this had occurred. Typically, where an attacker has removed log file entries it is done in an obvious manner, with large chunks (or even the entirety) of the logs missing. Where an attacker has used more sophisticated means, such as only removing certain incriminating log entries, it is less likely still that they will create legitimate entries in their place. With a server such as this that has high amounts of activity, a gap in the timestamps of the log entries of even a minute would be suspicious. Finally, the reason that the log files were not assessed as compromised was that the
amount of information about the attacker's activity still remaining would suggest that they have not been modified at all, particularly as the account would not have been able to modify the logs without elevating privileges. With no successful download attempts noted for the client folder, it remained to determine whether any files had been removed or uploaded. Figure 3.7 shows an attempted upload and removal of the ftpchk3.php file to the client folder.

1203 19:08:28 (000017b4) ftp.victim.com D(0) 194.186.X.X attacker STOR ftpchk3.php
1203 19:08:28 (000017b4) ftp.victim.com U(NoPermission) 194.186.X.X attacker ERR:STOR ftpchk3.php (D:/FTP-data/client/ftpchk3.php)
1203 19:08:28 (000017b4) ftp.victim.com D(0) 194.186.X.X attacker DELE ftpchk3.php
1203 19:08:28 (000017b4) ftp.victim.com U(NoFile) 194.186.X.X attacker ERR:DELE ftpchk3.php (D:/FTP-data/client/ftpchk3.php)
Figure 3.7: Attempted Upload/Delete of ftpchk3.php file

References at the RemoteDesktop.com blog noted the ftpchk3.php file as being one of those dropped by a machine infected with a variant of the Bagle virus. The ftpchk3.php file reportedly tests the upload functionality of sites before paving the way for further files containing malware to be uploaded (such as the Downloader.Tibs.9.V trojan). Prabhakar (2009) reported in more detail on the effects of the virus, noting in particular that the code attacks vulnerable web servers but that the most common infection vector is through vulnerable FTP servers. Note that a removal script is supplied in Appendix A.1, and was also supplied to Victim as a precautionary measure. The attacks reported by Prabhakar indicate a similar scenario, with the file being uploaded to a vulnerable FTP server. In this case, however, it was noted that the upload of the file was not successful.

Successful Uploads

Following on from this discovery, it became necessary to identify exactly what files the attacker had uploaded. This was achieved simply by chaining grep commands, i.e. grep -C 3 "attacker" * | grep -C 3 "STOR" > attacker_stor.txt, which would return any instances of the STOR command being used by the attacker account. This also returned three lines of context surrounding any instances so that further information about an attack could be obtained and stored in the attacker_stor.txt file. In this instance a number of issues were found with the above command. Firstly, the results could be repeated a number of times due to the use of the -C flag denoting the surrounding context lines to be captured. Secondly, a large number of failed file upload attempts were noted and any successful uploads may have been lost in the noise, as the resulting file was approximately 3Gb in size. Rather than trawl through 30Gb of log files again it is more effective to refine the output using another scripting language. Figure 3.8 shows a script
written in Ruby that can be used to check for successful upload attempts. The script reads the file line by line and prints any line where the STOR command is used and the following line does not contain an error (ERR). The space in the condition lines[count].include?(" STOR") is intentional, acting as an additional measure to extract only genuine storage attempts (as a failure is denoted by ERR:STOR).

f = File.open('attacker_stor.txt', 'r')
lines = f.readlines
f.close
count = 0
while (count < lines.size - 1)
  if lines[count].include?(" STOR") and not lines[count + 1].include?("ERR") then
    print lines[count]
  end
  count += 1
end

Figure 3.8: Ruby Script to identify successful uploads

The result of this script revealed a number of instances of successful uploads by the attacker. Ordering these uploads by date revealed a trend, as shown in Figure 3.9. The first entry for each attempt shows when the transfer began. The second entry shows a successful upload, displaying the location of the file on the FTP server and the number of bytes and time taken for the transfer.

1207 00:39:03 (00001a6c) ftp.victim.com D(0) 92.62.X.X attacker STOR AMOVIE.EXE
1207 00:39:10 (00001a6c) ftp.victim.com S(0) 92.62.X.X attacker STOR AMOVIE.EXE (D:/FTP-Data/WindowsServices/lotus/notes/AMOVIE.EXE) (1207808 bytes, 6625 ms)
1207 00:39:13 (00001a6c) ftp.victim.com D(0) 92.62.X.X attacker STOR kvoop.exe
1207 00:39:14 (00001a6c) ftp.victim.com S(0) 92.62.X.X attacker STOR kvoop.exe (D:/FTP-Data/WindowsServices/lotus/notes/kvoop.exe) (193024 bytes, 1250 ms)
1207 00:39:17 (00001a6c) ftp.victim.com D(0) 92.62.X.X attacker STOR ldapsearch.exe
1207 00:39:18 (00001a6c) ftp.victim.com S(0) 92.62.X.X attacker STOR ldapsearch.exe (D:/FTP-Data/WindowsServices/lotus/notes/ldapsearch.exe) (225844 bytes, 1313 ms)
1207 00:39:26 (00001a6c) ftp.victim.com D(0) 92.62.X.X attacker STOR np.exe
1207 00:39:30 (00001a6c) ftp.victim.com S(0) 92.62.X.X attacker STOR np.exe (D:/FTP-Data/WindowsServices/lotus/notes/np.exe) (647729 bytes, 3578 ms)
1207 00:39:41 (00001a6c) ftp.victim.com D(0) 92.62.X.X attacker STOR nca.exe
1207 00:39:45 (00001a6c) ftp.victim.com S(0) 92.62.X.X attacker STOR nca.exe (D:/FTP-Data/WindowsServices/lotus/notes/nca.exe) (913965 bytes, 4750 ms)
1207 00:39:48 (00001a6c) ftp.victim.com D(0) 92.62.X.X attacker STOR nchronos.exe
1207 00:39:49 (00001a6c) ftp.victim.com S(0) 92.62.X.X attacker STOR nchronos.exe (D:/FTP-Data/WindowsServices/lotus/notes/nchronos.exe) (176690 bytes, 1157 ms)
Figure 3.9: Successful uploads by the attacker
As the only information available at this stage was the log files, some assumptions had to be made in determining the next stage of the investigation. Identifying the uploads by filename alone, the majority of the files appeared to be common Lotus Notes components. It was not immediately clear why an attacker would wish to upload files for running Lotus Notes, a client for managing business emails, calendars and applications. It is possible that the attacker was using this client to pivot further attacks (such as interfacing with a Lotus Domino server) or to trick a user into installing malicious software. To fully understand the motives behind uploading these files, the log files detailing these uploads were examined manually for further information. By examining the successful uploads within the context of the surrounding activity, it became clear that the attacker was actually uploading files that had been downloaded previously. Figure 3.10 shows an example of the activity related to the nnotesmm file. First the attacker retrieves the file from the FTP server, identified through the RETR command showing a successful download, followed by deletion with DELE. Seconds later a connection is initiated again, this time uploading the file with the STOR command (as seen previously).
Figure 3.10: Retrieval and by the attacker This activity constitutes a high risk for victim, and warranted further investigation. The fact that files have been ed and then replaced within seconds is not normal activity, and suggests that the files have been modified. Examining the data transferred showed that (for the file of the same name) 20530 bytes were ed and 172594 bytes were ed, again suggesting that the files have been modified. This modification was noted as highly likely to be malicious.
3.3 Conclusions
As a result of the initial investigation, Victim was happy with the assessment that Client's information had not been downloaded by a malicious attacker. Despite the fact that the suspicious file uploads had been raised, Victim no longer wished to continue the investigation and issued a statement to Client detailing the findings of the investigation as they relate to Client's property. The following day, Victim examined the files that had been uploaded and immediately received an alert from their anti-virus software, reporting an infection of TrojanHorse.Generic (shown in Figure 3.11). As the anti-virus engine could not provide any further information, Victim supplied the infected files for inspection and the investigation continued.
Figure 3.11: Generic Trojan horse detected
Chapter 4
Malware Analysis

The alert in Figure 3.11 shows how anti-virus software can use detection techniques beyond normal 'signature' based detection to alert the user to a threat. The anti-virus software in use detected that the virus had been packed (a form of compressing or encrypting a virus to hinder detection) but was unable to obtain any identifiable information from either unpacking the virus or identifying the packer in use. In this situation the user has been alerted to an infection, but actually identifying the contents will require a manual approach. Detecting malware behaviour can take place in various environments, typically falling into one of four stages: Static, Mounted, Live or Network. Figure 4.1 shows the natural progression of malware investigation in the four stages.
Figure 4.1: Malware Investigation Methodology

Static analysis takes place when the infected file is placed into a non-functioning environment and analysed as raw data. The benefit of this is that the virus cannot utilise any advanced techniques to evade detection and any unencrypted strings or headers can be easily identified. However, if the virus has been created using a packer (which is then decrypted at run time) there will be little information about its behaviour in the static analysis phase. Mounted analysis involves mounting the filesystem on which the infected files are stored as a logical drive within the investigation machine. This has the advantage that the file can be viewed in its native environment, allowing for file and folder permissions and metadata to be more easily examined. It is also easier to run the file through anti-virus engines to determine if the infection conforms to any known signatures. Whilst in the mounted stage, anti-virus scanning is typically limited to matching signatures (or definitions) from a known database and behavioural analysis is not as easily performed. This does however offer the
advantage that malware also cannot utilise evasion techniques such as hiding files or injecting itself into processes in memory. Live analysis should occur within a sandboxed environment where the resources available can be strictly controlled. At this stage the infection can be set loose on a system and its effects monitored or controlled. Live analysis can also make use of anti-virus engines to detect malware-like behaviour that may indicate the type of infection. Typically this stage makes use of virtual machines such as VMWare, as we will see in Section 4.3. Finally, the network analysis stage looks at any network traffic associated with the infection. When viruses are created for profit (rather than to annoy), they typically need to transfer information to be successful. This can be to infect further machines, to 'phone home' to a botnet controller or to send sensitive information, such as web browsing habits and keystrokes, to a remote machine. Consequently, monitoring network traffic from an infected machine with a tool such as Wireshark can narrow down the type of infection by looking for identifiable network traffic. In this situation, the limitations of the anti-virus software in use have been made clear by the fact that positive identification of the virus and its behaviour was not possible. This chapter discusses the benefits and drawbacks of automated anti-malware services before detailing a manual approach towards identifying virus behaviour.
4.1 Antivirus Considerations
Whilst there are clear benefits from having robust anti-malware provisions it is important to acknowledge the potential risk also associated with this software. Anti-malware services run with a high level of privilege and are usually intrinsically linked to the operation of the underlying operating system. Therefore, a vulnerability in the technology itself (and vulnerabilities have been identified previously in a wide range of anti-malware products) can potentially represent a significant threat to the system it is running on. It is therefore important that the use of malware protection on any system be based on a risk-driven evaluation of its benefits against any potential vulnerability it introduces. In practice this means that, in the case of any system handling or processing user-supplied data (for example, file servers or email servers), the use of an anti-malware product is recommended. However, where systems are subject to strict software installation and management procedures a decision not to install anti-malware software may be the lower risk approach. Ultimately this decision can only be made with knowledge of operating procedures and the amount of control that can be practically exercised over systems within the company. Where these cannot be guaranteed the use of anti-malware solutions is often the lowest risk option. The use of a managed service for anti-malware control can offer significant benefits; however, this does also introduce an element of risk. In these
circumstances all of a company's mail or web traffic will be passing through the managed service provider. This would result in them being a single point whereby a malicious attacker could gain access to all business data not protected using other means (for example PGP or HTTPS). The company should therefore acknowledge that the potential compromise of the third party's systems could expose a large amount of business data to a malicious attacker or insider. Whilst this risk is likely to be less than that exposed by not utilising such services, it is important that this dependency be acknowledged and documented on the appropriate risk register.
4.2 Static Analysis
Remote or physical access to the FTP server was limited due to difficulties in obtaining permissions from the hosting company - a common occurrence in commercial environments that highlights the importance of being fully prepared for an investigation. The files were instead extracted by Victim directly and couriered on an encrypted USB stick, which was recommended for two reasons. Firstly, it would prevent an unwitting user infecting further machines should they come across the device. Secondly, it was not known at this stage whether the malware was custom written to target Victim, and thus could contain information sensitive to the organisation. The infected files were copied from the USB media onto a DVD-ROM to preserve their integrity before analysis could begin. The encryption program on the USB device was also not supported across all major operating systems and, as will be shown in this section, use of both Linux and Windows systems is required. A useful tool for analysing malware is a live environment, such as BackTrack or Helix. Live environments can be run entirely in volatile memory and, typically, leave no footprint on the host machine. This is particularly appealing for analysing malware as dummy or 'goat' machines can be used temporarily and any infections easily removed by simply shutting down the machine. As a number of live distributions are based on the Linux operating system they are inherently more resilient against malware attacks, including common exploitation of Windows features (such as infected autorun.inf files and executable Windows binaries). A common factor in files infected with a Trojan (as detected in Figure 3.11) is that they contain multiple PE headers. PE stands for Portable Executable and is a file format for executables and DLLs (Dynamic Link Libraries) used in Windows operating systems. Ordinary executables have a PE header at the start of the file, which can be identified by the file signature \x4D\x5A or MZ. Files infected with a Trojan however have multiple PE headers, as shown in Figure 4.2. A Trojan, named after the Trojan Horse of Greek mythology, is a delivery method for a malicious payload. The payload is hidden within another (usually legitimate) file, execution of which will drop the payload whilst executing the original program. From the point of view of the user, the file behaves normally; they have no knowledge of an infection until abnormal behaviour or an anti-virus alert informs them.
Figure 4.2: PE Headers in infected file

In Figure 4.2 the multiple file headers show the sections of the program designed to drop the payload, the original program and the malicious payload respectively. The first section of the program (from the start of the file to the PE header at 0x1200) contains code that has been designed to execute the original program as well as the malicious payload at 0x6232. Examination of a number of the infected files revealed that the first section was identical across all of them - something that would not be typical of a number of different programs. The PADDINGXX string (seen just above the third and final PE header) appeared to be suspicious, particularly as this file had already been flagged as high risk. Use of repeated sequences is typically seen as padding in exploits for buffer overflow vulnerabilities (notably \x41), and so PADDINGXX could be part of an exploit in the malicious payload. Sashazur (2004) explains that PADDINGXX is seen as a side effect of using the UpdateResource function to correctly align sections within an executable. Whilst this provides a lead as to how this section of the executable was created and discredits the idea that this is an intentional buffer overflow exploit, it alone does not determine whether this section is the malicious payload or not. In actual fact, PADDINGXX is part of the original Lotus Notes files; examining copies of known safe versions of Lotus Notes executables revealed that PADDINGXX is commonly seen. Finally, the last section of the executable is the malicious payload. There is
little information available by examining this section statically, as there are no clear text strings containing information about the payload. This could be by design or, more likely, because the payload is encrypted with a packer (described further in Chapter 5).
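Returning to the multiple-header observation above, a quick way of flagging candidate files for this kind of inspection is to scan for the MZ signature at offsets beyond zero. The following Ruby sketch is illustrative only; the filename is taken from the uploads examined earlier, and a thorough check would go on to validate the PE header that each MZ stub points to rather than trusting the two-byte signature alone.

# Report each offset at which the two-byte 'MZ' signature appears; multiple
# hits beyond offset zero suggest embedded executables like those in Figure 4.2.
data = File.binread('nnotesmm.exe')
offset = 0
while (index = data.index('MZ', offset))
  puts format('possible executable header at offset 0x%X', index)
  offset = index + 1
end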
4.3 Live Analysis - Playing in the Sandbox
With the static analysis clearly showing that the files contain some malicious content, it becomes necessary to advance to the next stage of analysis to gain further information about the effects of an infection. In this situation there is no benefit in performing a mounted analysis (which would allow anti-virus scans to be performed) as the presence of malicious content has already been identified; analysis could move straight to a live environment. Arguably the best method of determining the behaviour of a virus infection is to deliberately infect a machine and monitor its effects. This carries a high level of risk, however, as an unidentified infection could cause havoc on a network and the investigation machine could be used to infect other machines. The most appropriate method of monitoring an infection is therefore to use a sandbox - a safe environment with limited connectivity that can be closely controlled. In this case, a virtual machine using VMWare was created for the purposes of analysing the infection. VMWare has a number of features that are useful in analysing a virus, such as the ability to control the resources available to the machine and the ability to create and revert to snapshots of the machine.

Another set of tools that are useful in analysing malware is the Sysinternals suite. In particular, RegMon, FileMon and ProcMon allow accesses to registry keys, files and processes to be monitored and logged. As there are large amounts of registry and file accesses during the normal operation of a Windows system, it is important when using these tools that the filters are set appropriately. Filters that are too permissive lead to information overload, with any malicious activity lost in the noise, whereas filters that are too restrictive may not capture vital information about the behaviour of the infection.
4.3.1 Dropping the Payload
Having let the uninfected system run for a short time and filtered out any standard system processes, a malicious file was selected and executed. A number of locations were scanned by the process and directory contents listed (a sign of the virus establishing the system's directory structure). Of particular note was a file created at the following location:

C:\Documents and Settings\%username%\Local Settings\Temp\1.tmp
Figure 4.3 shows the output from FileMon, one of the Sysinternals tools, capturing the creation of a file 12.tmp in the location listed above. The file 12.tmp is identical to the other files dropped by the infection, as discussed later in this section.
Figure 4.3: FileMon output showing payload creation

The file could easily be overlooked as a normal temporary file, given its location and filename. However, the results from the Sysinternals tools show that this file warrants further scrutiny. FileAlyzer is a useful tool for Windows, developed by Safer Networking, that allows files to be analysed in detail. It also allows the raw data to be viewed and, as Figure 4.4 shows, the 1.tmp file has a PE header; the file is an executable and not an ordinary temporary file.
Figure 4.4: FileAlyzer output showing PE header

Infecting the machine multiple times (by executing the infected file repeatedly) is a useful method for determining virus behaviour. Comparing the results of a reinfection can reveal whether any randomness is present in the code, location, filename or other attributes. In this case the filename was found to change, as copies named 2.tmp, 3.tmp, 4.tmp, 16.tmp, C.tmp and D.tmp were all seen as a result of reinfection where 1.tmp already existed.

One method of identifying files is by a unique signature or hash; a hash of a file is an (effectively) unique value that is generated from the raw data of the file. The hash will be calculated exactly the same whenever identical input data is supplied, as the algorithm of a hashing function is mathematically well-defined. A hashing function is also defined by the property that a change in the input (no matter how insignificant) will result in a very different output. Hashing files is therefore a useful way of determining whether two files are identical, as identical files will have a matching hash value.
The MD5 hash (a commonly used algorithm) of all the files examined in the Temp folder was c5ad1457dba612bbd7751aa5354075b1. It is acknowledged that MD5 contains known flaws which can, under special conditions, be exploited to make malicious modifications to files that will still pass integrity checks; however, for the purposes of swiftly comparing file contents it is suitable.
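As a rough illustration of this comparison step, the following sketch hashes every *.tmp file in a directory and groups identical files by digest; the directory path is an assumption, not the path used in the investigation.

import hashlib
import pathlib
from collections import defaultdict

def md5sum(path):
    """Compute the MD5 digest of a file, reading it in chunks."""
    digest = hashlib.md5()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical location of the dropped files on the sandbox machine.
temp_dir = pathlib.Path(r"C:\Documents and Settings\analyst\Local Settings\Temp")

groups = defaultdict(list)
for tmp in sorted(temp_dir.glob("*.tmp")):
    groups[md5sum(tmp)].append(tmp.name)

for digest, names in groups.items():
    print(digest, ", ".join(names))

Identical dropped payloads would all collapse into a single digest, mirroring the single MD5 value reported above.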
4.3.2 Vulnerability Exploitation
The process 1.tmp attempted to access a number of registry keys. Some of these keys are core to the Windows operating system (such as Winlogon, Terminal Server and Diagnostics) and are typically accessed by malware to determine information such as the current privileges associated with the process, the platform it has been executed on and other information that may be relevant to exploitation. In particular, access attempts to the following key were noted:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\GRE_Initialize
This entry is related to a buffer overflow vulnerability, exploitation of which could allow escalation of privileges or remote code execution. The presence of this key indicates a system that is vulnerable to CVE-2008-1087, and it would be a likely candidate for malware to target in an attempt to gain SYSTEM-level access. The operating system in use as the sandbox was Windows XP SP3 with all recent security updates applied, and as such this registry key did not exist. When manually spawning the process (by executing 1.tmp or similar), access attempts for this key were noted, followed shortly by the process being killed, potentially indicating a failed exploitation attempt. Figure 4.5 shows the access attempts by the process 2.tmp - the reference to nvaux32 should also be noted, as it becomes relevant in Chapter 5.
Figure 4.5: Access attempts to the GRE_Initialize registry key

In order to investigate further, a vulnerable machine was needed. An unpatched version of Windows 2003 was installed as a virtual machine and the infected files transferred. Examination of the registry revealed that the GRE_Initialize key existed and that the system was vulnerable. The various Sysinternals tools were again used to monitor activity and the infection was launched. The process terminated upon accessing the GRE_Initialize key in the same manner as before.
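A quick way to triage other machines for the same condition is to test for the presence of that key directly. The following is a minimal sketch assuming Python is available on the Windows host being checked; it is not a patch-level audit, merely a check for the indicator the malware itself appears to look for.

import winreg

# Key whose presence the malware tested for before its exploitation attempt.
GRE_KEY = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\GRE_Initialize"

def gre_initialize_present():
    """Return True if the GRE_Initialize key exists under HKEY_LOCAL_MACHINE."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, GRE_KEY):
            return True
    except FileNotFoundError:
        return False

if __name__ == "__main__":
    if gre_initialize_present():
        print("GRE_Initialize present - host may be exposed to CVE-2008-1087")
    else:
        print("GRE_Initialize not found")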
4.4 Conclusions
Some of the behaviour of the infection has been identified, and a likely target for exploitation has been seen (though not actively exploited in the tests performed). Some viruses have the capability of detecting virtualisation technology and will modify their behaviour accordingly. Whether this failure to infect the sandbox further is a method of preventing analysis, a bug in the malware or simply a result of virtualisation not emulating a live machine exactly, the full extent of the infection cannot be established solely by monitoring its behaviour. The following chapter explores the techniques used to identify the behaviour of the virus through reverse engineering and debugging.
Chapter 5
Reverse Engineering and Unpacking

As malware often goes through many stages of modification and update, it is common to see malware in the wild that has similar behaviour and functions to those that came previously. When limited time is available, it is not always necessary to identify every aspect of the malware's behaviour fully, but rather to establish what family it belongs to. From there it can be determined whether the observed behaviour is consistent with descriptions provided by virus databases and other malware researchers, providing a higher level of assurance as to the nature of the infection.

In this example it became apparent that a full analysis was not necessary. After defeating anti-debugging defences and obtaining an unpacked copy of the infection it was possible to obtain clues that led to the identification of the virus, with final confirmation provided by anti-virus. The tools used in this section are Immunity Debugger and IDA Pro, widely used debugging and disassembly tools that are extremely useful for identifying virus behaviour.
5.1 Packers, Wrappers and Binders
The terms packer, wrapper and binder are sometimes used interchangeably to describe a method of creating Trojan infections, whereby two (or more) executables are wrapped together. In this context the term packer has a more specific meaning, as compression and decompression (packing and unpacking) are used to evade detection by anti-virus engines that look for known signatures. Packing can also slow down reverse engineering attempts: unpacking the data at runtime, sometimes only in small increments, ensures that the function of the malicious code is exposed as little as possible.

Parker (2007) discusses packers as part of an introduction to reverse engineering malware; in particular, the example shows the UPX packer. The UPX
packer is relatively unsophisticated, as it was not designed to evade detection and it can be fully unpacked using the same tool that created it. In addition, its use is disclosed in plain text within the file. Nevertheless, the methods used to extract further information about the contents of the packed file are still relevant to other malware investigations. Tools such as PEiD can be used to detect the presence of common packers through both known signatures and common behaviours (such as known entry points or file offsets). The majority of modern anti-virus programs will also attempt to positively identify the packer used when encountering packed malware and, where possible, unpack the data. As seen in Figure 3.11, the anti-virus program in use was not able to do this. Thus the goal of the investigation at this stage was to capture any unpacked data and examine it for any identifying information.
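Because UPX advertises itself in plain text, its presence can be spotted with nothing more than a marker scan; the following sketch is an assumption of how such a signature check might look, rather than a description of how PEiD works internally, and simply searches a file for the well-known UPX markers.

import sys

# Plain-text markers that UPX leaves in packed files (section names and magic).
UPX_MARKERS = (b"UPX0", b"UPX1", b"UPX!")

def looks_upx_packed(path):
    """Return the UPX markers found in the file, if any."""
    data = open(path, "rb").read()
    return [m.decode() for m in UPX_MARKERS if m in data]

if __name__ == "__main__":
    found = looks_upx_packed(sys.argv[1])
    print("UPX markers found:", ", ".join(found) if found else "none")

A packer built to resist analysis, such as the one encountered here, leaves no such giveaway, which is why the manual approach in the rest of this chapter was required.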
5.2 Unpacking the Virus

5.2.1 Anti-Debugging
An important consideration when reverse engineering programs, particularly when dealing with malware, is the presence of anti-debugging defences. Typically these defences centre on methods of detecting whether the process is running inside a debugger but, like other malware anti-detection mechanisms, many more sophisticated approaches exist. Falliere (2007) provides an overview of some of the common anti-debugging defences and ways of defeating them, broadly categorised as:

• Memory discrepancies
• System discrepancies
• CPU anti-debug

Memory discrepancies are exploited through checks of flags and return values, such as the kernel32!IsDebuggerPresent Windows API returning 1 or the PEB!IsDebugged flag being set. System discrepancies describe behaviour that differs when running inside a debugger; for example, SetUnhandledExceptionFilter() will call the exception filter unless the program is being debugged (at which point the process will terminate). CPU anti-debugging is dependent on the architecture in use but exploits CPU instructions to fool debuggers; for example, INT3 (an interrupt instruction) is often recognised by debuggers as a breakpoint and can alter the course of the program in weaker debuggers.

During the investigation, some deviations from common coding practices were noted, which are discussed in Section 5.2.2. These deviations resulted in the process
taking a path that was difficult to follow, and it is thought that these artefacts were intentionally placed to make debugging more difficult. An example of one of these methods is seen in Figure 5.1: the function at 0042290D is called, followed shortly by a RETN, and the jump is then taken. This section could be reordered to achieve effectively the same result, but as written it makes understanding the process slightly more difficult when reverse engineering. Whilst not a particularly sophisticated protection method, defying coding conventions in this way can slow down the investigation of a piece of malware.
Figure 5.1: Function call and jump ordering

Whilst debugging, the process would randomly terminate, forcing it to be restarted. This may be a result of poor coding or of intentional anti-debugging defences, such as monitoring CPU cycles to detect when the process has been significantly slowed down for manual examination. The jumps that had to be bypassed (see Section 5.2.3) to avoid invalid code may also be symptomatic of anti-debugging techniques.
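The simplest of the memory-discrepancy checks mentioned above can be reproduced in a few lines; the sketch below is a Windows-only illustration rather than anything taken from the malware itself, and asks the same question via the documented API.

import ctypes

def debugger_attached():
    """Call kernel32!IsDebuggerPresent, the check described by Falliere (2007)."""
    return bool(ctypes.windll.kernel32.IsDebuggerPresent())

if __name__ == "__main__":
    print("Debugger detected" if debugger_attached() else "No debugger detected")

Malware rarely stops at this single call, but seeing it (or the equivalent PEB flag read) in a disassembly is a strong hint that anti-debugging is in play.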
5.2.2 Manual Coding Artefacts
Whilst debugging the virus, a number of deviations from common coding conventions were identified. The presence of these artefacts suggests that areas of the code had been written manually, as they would not typically be produced by common compilers. These artefacts give an insight into the coding practices of the creator of the packer (who in this case could be a different individual to the attacker), and can also help in understanding the level of sophistication of the packer.

Use of NOP

The NOP instruction (short for 'No OPeration' or 'No Operation Performed') effectively performs no action. It can be used legitimately, particularly during development to set placeholders or for timing purposes, however the instruction is not commonly seen in production software. Optimisation performed by common compilers will typically detect and remove such 'redundant code', i.e. code that has no effect. NOP instructions may also be used in sequence to create a 'NOP sled' or 'NOP slide' as a technique for improving the success of exploiting vulnerable software.
Solar Eclipse (2002) gives an example from the Honeynet Project of a NOP slide in use against a Solaris machine on SPARC architecture. It should be noted that, as SPARC differs from x86, the same NOP instruction does not exist (other instructions are used as NOPs), though the concept of a NOP slide is still valid. As the location of shell code may be unknown (and may not even be static), a string of NOP instructions placed before the estimated location of the shell code will suffice. Overwriting a return address (for example via a buffer overflow) therefore does not require the exact location of the shell code - the jump can be taken to the location of the NOP codes and will then 'slide' through the NOPs until the shell code is reached. As a number of modern intrusion detection systems will attempt to detect long sequences of NOP codes on the stack, other instructions can be used as long as they do not jeopardise the correct running of the shell code.

The infection had a number of areas where NOP instructions were used, however none were long enough to be classed as a NOP slide. In particular it was noted that where conditional jump instructions were used (such as JNZ - jump if the zero flag is not set, or JE - jump if the zero flag is set) the jump was typically to a NOP instruction. This could have been used as a placeholder for other code during development, or may simply be a silent marker used in debugging to indicate that the program is on the right path. Nevertheless it gives an insight into the coding practices used in creating this packer.

Misuse of Instructions

In a number of areas it is apparent that code has been custom written, as typical conventions are not followed. This also includes areas of code that common compilers would attempt to optimise, again suggesting a manual approach has been taken. Figure 5.2 shows an example where the INC (increment) instruction was used on the ECX register to increase its value by 1 a total of four times. The purpose of this particular action is explained further in Section 5.2.4, however it is clearly not optimal: a single instruction such as 'ADD ECX, 4' would suffice.
Figure 5.2: Use of the INC function multiple times

Within the unpacking routine (described further in Section 5.2.4) the code is unpacked four bytes (a DWORD) at a time. Whilst this is not a misuse of instructions in itself, it is more common to handle such a routine one byte at a time.
Reference to kERNeL32.Dll

A pointer pushed onto the stack was discovered to reference kernel32.dll using an uncommon naming convention. References to both kernel32.dll and KERNEL32.DLL (a base Windows API library) are common, however the peculiar capitalisation shown in Figure 5.3 is not. As a number of signature-based virus detection tools may look for references to kernel32.dll (or KERNEL32.DLL), this could be a method used to evade detection. In any case, this is again not a common coding practice.
Figure 5.3: A pointer on the stack referencing kERNeL32.Dll
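This kind of evasion only works against tools that match case-sensitively. As a small illustration (not a reconstruction of any particular product's signature engine), the sketch below contrasts a strict search with a case-insensitive one over a binary.

import re
import sys

def kernel32_references(path):
    """Count strict matches and list every case-insensitive match of 'kernel32.dll'."""
    data = open(path, "rb").read()
    strict = data.count(b"kernel32.dll")
    # re.IGNORECASE also matches oddly capitalised variants such as kERNeL32.Dll
    relaxed = [(m.start(), m.group().decode()) for m in
               re.finditer(rb"kernel32\.dll", data, re.IGNORECASE)]
    return strict, relaxed

if __name__ == "__main__":
    strict, relaxed = kernel32_references(sys.argv[1])
    print("case-sensitive matches:", strict)
    for offset, text in relaxed:
        print(hex(offset), text)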
5.2.3 Bypassing Invalid Instructions
As discussed at the beginning of this chapter, the use of a debugger can cause programs and processes to behave in unpredictable ways. Using a debugger on an infected machine to identify the behaviour of a program with anti-debugging defences can introduce even more unpredictable behaviour. The first instance of this can be seen in Figure 5.4 where, shortly after executing the program, the process terminates as it comes across invalid code.
Figure 5.4: Invalid opcode
As the goal of the exercise is to obtain identifiable information from the program, any invalid instructions can be bypassed or circumvented. Whilst this may appear to discard valuable information, since the behaviour is not fully understood, it is useful as a type of brute-force approach to get the program into a state where the unpacking begins. This particular invalid opcode can be bypassed by setting the EIP (the instruction pointer) to the next line of code, 00401050. In Immunity this can be done simply by right-clicking on the line of code and selecting 'New origin here'.

Bypassing this section of invalid code allows the program to continue until a jump is reached. Figure 5.5 shows a JE instruction to the address 00401164, the code for which can be seen in Figure 5.6. Again this code is invalid and causes the process to terminate, therefore the jump at 004011DD must be bypassed. In Figure 5.5 the top-right corner shows that the Zero Flag is set - the Zero Flag is set to 1 if the result of an instruction was zero or false. As the Zero Flag is set, the condition is satisfied and the jump will be taken (this is also shown in Immunity by the message 'Jump is taken'). To bypass this jump a breakpoint should be set, instructing the debugger to await further instruction, and the Zero Flag set to 0. The condition for the JE instruction will then not be met and the jump will not be taken.
Figure 5.5: Jump to invalid opcode
Figure 5.6: Invalid opcode

On bypassing this jump the process continues until a similar condition is met; in this case the Zero Flag is not set and a JNZ instruction exists. The Graphs function in Immunity provides a visualisation of the branches that the process can take depending on whether the jump is taken or not. Figure 5.7 shows the path taken in red if the jump is not taken (the current option) and in green if the jump is taken. Immediately it is obvious that not taking the jump will result in termination of the process, as the function kernel32.ExitProcess is called. Again the Zero Flag must be modified and, this time, the jump is taken.

Figure 5.7: Branch structure showing jump results

The final set of invalid instructions forms an infinite loop unless bypassed by setting a new origin, as described above. The instruction at 00422891 calls a function at address 0042298C (see Figure 5.8). Rather than ending with a RETN
(return) instruction and continuing, the function reaches a JMP (unconditional jump) which then calls the same function again, looping continuously. In this case the jump at 0042289A is never reached, but it must exist for a reason. Examining the location of the jump reveals that it leads to a NOP instruction at 004228A8 - from the knowledge gathered so far about the coder's use of NOP instructions to denote useful code blocks, it is likely that taking this jump will result in the process continuing. Again, setting the origin either to the NOP location or to the jump itself continues the process and in this case leads to a critical part of the program - the unpacker.
Figure 5.8: Infinite function loop
5.2.4 The Unpacking Loop
The section of code responsible for unpacking the data can be seen in Figure 5.9.
Figure 5.9: The unpacking loop

One point to note is that the command INC ECX appears four times, incrementing ECX by 1 each time. It is not clear why this would be performed instead of a more efficient instruction, such as 'ADD ECX, 4'. It is possible that this is intended to deal with timing issues or may simply be a sign of the coder's thought pattern. The result of this action is clear, however, as the value of ECX is used to set the memory address of the data to unpack (not shown). Thus with each iteration, four bytes (a DWORD) are decrypted at a time.

The code is clearly changing with each iteration, and at first it may not be obvious what this unpacked data represents. After several iterations a text string begins to appear, referencing "Lighty Compressor" (as seen in Figure 5.10). Lighty Compressor is the name of the packer used to create this virus - a vital clue, as a number of unpacking tools have been developed for common packers. Unfortunately an unpacker for Lighty Compressor does not exist in the public domain, necessitating this manual approach. Martyanov (2008) discusses Lighty Compressor on multiple occasions in a LiveJournal blog, in reference to virus infections. In reply to a comment placed by another poster, Martyanov explains that an unpacker could not be found in the public domain but that a manual approach should be successful once the line "Lighty Compressor" is stored in memory. This is in line with the results seen so far.
Figure 5.10: Unpacked code revealing “Lighty Compressor”
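For readers unfamiliar with this structure, the sketch below shows the general shape of a DWORD-at-a-time unpacking loop. The XOR transform is purely an assumed stand-in for illustration - the actual transform used by Lighty Compressor was not reconstructed during this investigation.

import struct

def unpack_dwords(packed, key):
    """Illustrative loop that transforms the packed data four bytes at a time."""
    out = bytearray()
    # ECX advances by four on each pass (the four INC ECX instructions);
    # the range step plays the same role here.
    for offset in range(0, len(packed) - len(packed) % 4, 4):
        dword = struct.unpack_from("<I", packed, offset)[0]
        out += struct.pack("<I", dword ^ key)  # assumed transform, not the real one
    return bytes(out)

# Watching the output buffer grow after each iteration is, in essence, how the
# "Lighty Compressor" string gradually became readable in the debugger.
print(unpack_dwords(b"\x01\x02\x03\x04\x05\x06\x07\x08", 0xDEADBEEF))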
5.2.5 Identification
Allowing the unpacking routine to continue allows more data to be viewed in its unpacked form. As this unpacked data resides in memory, taking a snapshot or dump of the current memory state allows it to be examined further. Loading this memory dump in IDA Pro provides a number of options for further investigation, including the ability to generate a list of all the strings identified in the process. These strings often provide further information about the functions being called and the files referenced, and sometimes identifiable information such as IP addresses, server names, or usernames and passwords.
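The same string harvesting can be performed outside IDA Pro with a crude extractor over the saved dump; the sketch below is a minimal stand-in for the usual strings utility, with the minimum length chosen arbitrarily.

import re
import sys

def ascii_strings(dump_path, min_len=6):
    """Yield printable ASCII runs of at least min_len bytes from a raw dump."""
    data = open(dump_path, "rb").read()
    pattern = rb"[\x20-\x7e]{%d,}" % min_len
    for match in re.finditer(pattern, data):
        yield match.group().decode("ascii")

if __name__ == "__main__":
    for text in ascii_strings(sys.argv[1]):
        print(text)

Filenames such as nvaux32.dll stand out quickly in output like this, which is exactly the kind of lead used in the next step.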
Figure 5.11: Strings within the unpacked data

Figure 5.11 shows some of the strings that are particularly interesting, as they reference files such as nvaux32.dll, aston.mt and dllcache\\32.dll (a non-standard location for the 32.dll file). A logical next step is to research the files named by these strings and determine whether they are referenced by any known and previously identified viruses. Whilst viruses are typically seen in multiple different mutations and may behave differently in different situations, this technique can be used to narrow down the search significantly. Once common infections have been identified, the behaviour
of different variants can be explored to see whether it conforms to the behaviour seen in Chapter 4. The virus databases of common anti-virus vendors are a useful place to find information on infections; the McAfee 'Threat Resources' site matches a number of the identified strings to the W32/Mariofev worm. Furthermore, a report from Prevx on the NVAUX32.DLL file identifies behaviour consistent with that noted in Chapter 4, whereby *.tmp files are dropped in the Temp folder.

Having identified a strong candidate for the virus, it is important to confirm the identification. The common locations for the known malicious files are listed within the virus reports noted above, and so it remains to examine these locations for the expected artefacts. Figure 5.12 shows that nvaux32.dll existed within the system32 folder as expected; a virus scan of this file was finally able to detect and identify it as an infection of the Mariofev worm.
Figure 5.12: nvaux32.dll within the system32 folder

Whilst the exact methods the worm uses to spread may differ slightly in this particular incarnation, the following attack vectors are common to a Mariofev infection:

• Dictionary attacks against network file shares
• Attempts to establish network connections to HTTP servers based in Russia
• Attempts to disable common anti-virus and virtualisation software (such as VMWare)
5.3 Conclusions
The techniques presented in this chapter can be reused in future when dealing with viruses that cannot be unpacked and identified by anti-virus software. It may not be known that Lighty Compressor has been used to pack a piece of malware until its protections have been bypassed, however the presence of the artefacts presented in Section 5.2.2 could suggest that Lighty Compressor has been used. Whether Lighty Compressor has been used or not, the 'brute force' approach to reverse engineering malware - forcing the code towards a state where the unpacking routine begins - can lead to quick (if not complete) results.

The worm in this case has been packed with another file as a Trojan and therefore requires user interaction to unleash it. Once unleashed, however, the worm can spread without requiring any interaction, through the methods outlined above. To determine whether the worm had been unleashed, a final scan of the log files for any instances of successful downloads of the infected files was performed. Figure 5.13 shows how a list of all uses of the RETR command can be created with grep and then examined for any matches with a list of known infected files. This was performed on the subset of the log files covering the period after the infected files were first uploaded to the server.

:~$ grep RETR -R ./ > retr
:~$ for FILE in `cat infected_files.txt`; do
>     grep $FILE retr
> done
:~$
Figure 5.13: Identification of any downloads of infected files

In this case Victim has been fortunate - no successful downloads of the infected files can be seen in the log files, and the risk that this incident poses to Victim is now significantly lower.
Chapter 6
Remedial Actions

A number of activities, had they been performed by Victim, could have prevented this incident from occurring. A formal joiners and leavers process should be established to fully identify the level of access required and granted. In addition, any further access requests should be formalised and maintained on record. This will ensure that a complete list of all accounts and access levels is available when a staff member leaves, so that the accounts can be disabled appropriately. This type of process typically requires coordination with HR, as they must regularly inform the IT department of joiners and leavers.

Regular monitoring of log files may have alerted Victim to this incident earlier, as some of the identified activity suggesting automated attacks was clearly not in line with legitimate use. Manual monitoring of such detailed log files may be time consuming, however a daily review of reports generated by the WS_FTP server would allow Victim to spot behavioural trends that could point to malicious activity. The use of file integrity monitoring software would also have alerted Victim to the modification of files that was the main impact of the reported incident. File integrity monitoring on an FTP server (where constant file changes are normal activity) may not be as effective as it is for other applications, however a file integrity alert could be used as a trigger to submit the changed files for automated virus scanning; a minimal sketch of such a trigger is given at the end of this chapter. In this way a file that has been modified to contain malware would be identified.

To fully ensure that this incident has been contained there are a number of activities that Victim must perform. Despite the fact that no log entries indicated that the infected files had been downloaded via FTP, it is still important that any at-risk machines are identified. This should follow an umbrella approach where any person or machine with physical or network access to the server is considered, and the scope of the risk is methodically narrowed down. These at-risk machines should then be subject to an anti-virus scan with up-to-date anti-virus definitions. It is also essential that these machines are manually examined for the existence of the files detailed in this report; where possible this should be performed by
booting the machine into safe mode so that any infections cannot utilise advanced anti-detection techniques.

Furthermore, the FTP server should be treated as compromised until it can be securely wiped and rebuilt using the latest stable and secure software. This will ensure that the known vulnerabilities associated with the server are addressed and that any traces of malicious software that may have been installed on the server are removed.
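As flagged above, the file integrity trigger could be as simple as a periodic baseline-and-compare job. The following is a minimal sketch only; the FTP root path, the JSON baseline store and the idea of printing a scan queue are all illustrative assumptions rather than a prescribed design.

import hashlib
import json
import pathlib

FTP_ROOT = pathlib.Path("/srv/ftp")       # assumed location of the served files
BASELINE = pathlib.Path("baseline.json")  # assumed location of the stored baseline

def snapshot(root):
    """Map every file under root to its SHA-256 digest."""
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in root.rglob("*") if p.is_file()}

current = snapshot(FTP_ROOT)
previous = json.loads(BASELINE.read_text()) if BASELINE.exists() else {}

# New or modified files are the candidates for an on-demand anti-virus scan.
for path, digest in current.items():
    if previous.get(path) != digest:
        print("queue for scanning:", path)

BASELINE.write_text(json.dumps(current))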
Chapter 7
Conclusions

A log file analysis is a good place to start an investigation, as with carefully constructed queries a large amount of information can be gathered from the logs. Being such a valuable source of information, it is common to find log files corrupted or destroyed, and in this case it is fortunate that they had not been compromised further. Log files should not be used to assess the current state of a system without a direct examination, but they are the first step in narrowing down the window of investigation. In this case the log files provided the alert to the possibility of malicious files existing on the server.

It is not known how the attacker came to acquire credentials for a valid account. Whilst a number of theories have been discussed, ultimately this is irrelevant as long as Victim engages in a program of remediation. By mitigating likely attacks and educating staff in secure working practices, Victim can ensure that the possibility of further compromise is minimised.

A malware analysis that encompasses all four stages of the analysis methodology - static, mounted, live and network - has a good chance of identifying the behaviour of malware. In cases where the malware cannot be positively identified by anti-virus, the techniques presented can build up a picture of its behaviour such that it can at the very least be narrowed down to a particular virus family. The top anti-virus vendors have large research teams dedicated to capturing and identifying malware, and it is likely that the malware discovered is not being seen in the wild for the first time. A combination of tools and techniques to identify this behaviour is normally enough to find a match against previous analysis.

The use of reverse engineering techniques allowed both the virus and the packer used to evade detection to be positively identified. The effort required to gain this information will be returned many times over, as Victim is able to focus attention on the specific infection vectors used by the virus, by monitoring access to specific web addresses and brute-force attacks against file shares. Even experienced breach investigators should expect to learn something new from each investigation and, until a successful unpacking tool is released for Lighty Compressor, the ability to manually bypass the protections used will significantly benefit any future
investigations. Future work in this area could focus on investigating Lighty Compressor further and developing an automatic unpacker that can both detect the use of Lighty Compressor and provide an investigator with the unpacked content.

On this occasion, Victim has been fortunate that further infection did not occur. Furthermore, the breach revealed a number of changes that need to be made to Victim's security program, albeit in a harsh way of highlighting current weaknesses. Security weaknesses in commercial organisations often stem from a lack of support at board level and, for all the recommendations that can be made by external security assessors, nothing raises awareness quite like a breach. Whilst the cost of a breach can be high, both in terms of system repair and staff time, the assurance gained from knowing the attacker has not penetrated further and the lessons learned from the incident are invaluable.
Appendix A
Code

A.1 FTPCHK3.php Removal
#!/usr/bin/perl
# http://digitalpbk.blogspot.com/2009/10/ftpchk3-virus-php-pl-hacked-website.html
use strict;

`grep -Rn aWYoIWlzc2V0KCRiMHNyMSkpe2Z1bmN0aW9u * | cut -d ':' -f 1 > listofinfected`;

open FP, "listofinfected";
my $file;
while ($file = <FP>) {
    print "Testing $file ... ";
    chomp($file);
    if (-e $file) {
        open VI, $file;
        my @filecon = <VI>;
        close VI;
        if ($filecon[0] =~ m/aWYoIWlzc2V0KCRiMHNyMSkpe2Z1bmN0aW9u/) {
            $filecon[0] =~ s/(<\?.*?\";
            # ... (remainder of this script and the opening of the second
            # script are truncated in the original listing) ...

my $file;
while ($file = <FP>) {
    print "Testing $file ... ";
    chomp($file);
    if (-e $file) {
        open VI, $file;
        my @filecon = <VI>;
        close VI;
        my $fc = join('', @filecon);
        $fc =~ s|document.write('<script(.*?)ftpchk3.php(.*)script>');||sig;
        $fc =~ s|<script[\s]+src="?http(.*?)ftpchk3.php(.*?)script>||sig;

        rename($file, $file . ".infected");
        open VI, ">$file";
        print VI $fc;
        close VI;
        print $file . " Fixed !!";
    }
    print "\n";
}
close(FP);
References

[1] 7Safe. UK Security Breach Investigations Report. 2010.
[2] 7Safe and ACPO. Good Practice Guide for Computer-Based Electronic Evidence. 2007.
[3] BitDefender. BitDefender Defense Center, 2010. http://www.bitdefender.co.uk/site/VirusInfo/.
[4] B. Carrier. File System Forensic Analysis. Addison-Wesley Professional, 2005.
[5] S.K. Cha, I. Moraru, J. Jang, J. Truelove, D. Brumley, and D.G. Andersen. SplitScreen: Enabling efficient, distributed malware detection. In Proc. 7th USENIX NSDI, San Jose, CA, 2010.
[6] CVE. Vulnerabilities in GDI Allow Code Execution (MS08-021), 2008. http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2008-1087.
[7] e-fense. Helix, 2010. http://www.e-fense.com/helix/.
[8] Solar Eclipse. Honeynet Project Scan of the Month for April 2002, 2002. http://www.phreedom.org/solar/honeynet/scan20/scan20.html.
[9] N. Falliere. Windows Anti-Debug Reference. Retrieved October 1, 2007.
[10] Ipswitch FTP. Ipswitch FTP Server, 2010. http://www.ipswitchft.com/Business//WsFtpServer/index.aspx.
[11] J. Granick and K. Opsahl. Computer Crime Year in Review. In BlackHat Briefings, 2009.
[12] T. Holt, M. Kilger, D. Strumsky, and O. Smirnova. Identifying, Exploring, and Predicting Threats in the Russian Hacker Community. In Defcon 17, 2009.
[13] Jibz, Qwerton, snaker, and xineohP. PEiD, 2007. http://www.peid.info.
[14] C. Kolbitsch, P.M. Comparetti, C. Kruegel, E. Kirda, X. Zhou, X.F. Wang, and UC Santa Barbara. Effective and efficient malware detection at the end host. In 18th USENIX Security Symposium, 2009.
[15] V. Martyanov. Vladimir Martyanov's LiveJournal, 2008. http://v-martyanov.livejournal.com/1738.html.
[16] McAfee. McAfee Threat Resources, 2008. http://vil.nai.com/vil/content/v_144571.htm.
[17] Trend Micro. TrendLabs Malware Blog, 2010. http://blog.trendmicro.com/.
[18] Microsoft. Microsoft Security Bulletin, 2010. http://www.microsoft.com/technet/security/current.aspx.
[19] Safer Networking. FileAlyzer, 2008. http://www.safer-networking.org/en/filealyzer/index.html.
[20] A. One. Smashing the stack for fun and profit. Phrack Magazine, 7(49), 1996.
[21] D. Parker. Reverse engineering malware, 2007. http://www.windowsecurity.com/articles/Reverse-Engineering-Malware-Part4.html.
[22] N. Percoco and J. Ilyas. Malware Freak Show. In Defcon 17, 2009.
[23] A. Prabhakar. Ftpchk3: Virus that adds malicious scripts to your website, 2009. http://digitalpbk.blogspot.com.
[24] M.D. Preda, M. Christodorescu, S. Jha, and S. Debray. A semantics-based approach to malware detection. ACM Transactions on Programming Languages and Systems (TOPLAS), 30(5):25, 2008.
[25] Prevx. Prevx file investigation report, 2008. http://www.prevx.com/filenames/20878752371790299-X1/NVAUX32.DLL.html.
[26] RemoteDesktop.com. Bagle's back, 2010. http://www.remotedesktop.com/?p=30.
[27] M. Richard and M. Ligh. Making Fun of your Malware. In Defcon 17, 2009.
[28] R. Russinovich. Microsoft Sysinternals, 2010. http://technet.microsoft.com/en-us/sysinternals/default.aspx.
[29] Sashazur. UpdateResource to change a string resource, 2004. http://www.codeproject.com/kb/string/updatestringresource.aspx.
[30] Net Security. Workers stealing data for competitive edge, 2009. http://www.net-security.org/secworld.php?id=8534.
[31] Offensive Security. BackTrack-Linux.org, 2010. http://www.backtrack-linux.org/.