Computer Forensics


1. INTRODUCTION
The proliferation of computer use in today's networked society is creating complex side effects in the application of age-old motives such as greed, jealousy, and revenge.
Criminals are becoming much more sophisticated in committing crimes, and computers are now encountered in almost every type of criminal activity. Gangs use computers to clone mobile telephones and to re-encode credit cards. Drug dealers use computers to store their transaction ledgers. Child pornography distributors use the Internet to peddle and trade their wares.
Fraud schemes have been advertised on the Internet. Counterfeiters and forgers use computers to make passable copies of paper currency or counterfeit cashier's checks, and to create realistic-looking false identification. In addition, information stored in computers has become the target of criminal activity. Information such as social security and credit card numbers, intellectual property, contract information, proprietary information, classified documents, etc., has been targeted. Further, the threat of malicious destruction of software, employee sabotage, identity theft, blackmail, sexual harassment, and commercial and government espionage is on the rise. Personnel problems are manifesting themselves in the automated environment, with inappropriate or unauthorized use complaints resulting in lawsuits against employers as well as losses of proprietary information costing millions of dollars. All of this has led to an explosion in the number and complexity of computers and computer systems encountered in the course of criminal or internal investigations and the subsequent seizure of computer systems and stored electronic communications.
Computer evidence has become a 'fact of life' for essentially all law enforcement agencies and many are just beginning to explore their options in dealing with this new venue. Almost overnight, personal computers have changed the way the world does business. They have also changed the world's view of evidence because computers are used more and more as tools in the commission of 'traditional' crimes. Evidence relative to embezzlement, theft, extortion and even murder has been discovered on personal computers. This new technology twist in crime patterns has brought computer evidence to the forefront in law enforcement circles.

2. WHAT IS COMPUTER FORENSICS?
Computer forensics is simply the application of disciplined investigative techniques in the automated environment: the search for, discovery of, and analysis of potential evidence. It is the method used to investigate and analyze data maintained on or retrieved from electronic data storage media for the purposes of presentation in a court of law or in a civil or administrative proceeding. Evidence may be sought in a wide range of computer crime or misuse cases.
Computer forensics is rapidly becoming a science recognized on a par with other forensic sciences by the legal and law enforcement communities. As this trend continues, it will become even more important to handle and examine computer evidence properly.
Not every department or organization has the resources to have trained computer forensic specialists on staff.
Definitions:
"The process of identifying, preserving, analyzing and presenting digital evidence in a manner that is legally acceptable." (McKemmish, 1999)
"Gathering and analyzing data in a manner as free from distortion or bias as possible, to reconstruct data or discover what has happened in the past on a system." (Farmer & Venema, 1999)

3. THE ORIGINS AND HISTORY OF COMPUTER FORENSICS
As Carrie Morgan Whitcomb, director of the National Center for Forensic Science in the United States, puts it: "Computer forensic science is largely a response to a demand for service from the law enforcement community."
The first known employment of computer forensic techniques was, however, by the U.S. military and intelligence agencies in the 1970s. Very little is known about these activities due to their occurrence in classified environments (Michael R. Anderson, private communication, March 23, 2002). However, it is logical to assume that they had a counterintelligence focus using mainframe computer systems.
Some of the first government agencies with an overt and publicly visible requirement to carry out forensics on external systems relating to criminal offences were taxation and revenue collection agencies including the U.S. Internal Revenue Service
Criminal Investigations Division (IRS-CID) and Revenue Canada.
In looking at the state of computer forensics in law enforcement today, or, as it should now more correctly be termed, digital evidence recovery, it is useful to examine its beginnings and its progression. It was not until the 1980s that the advent of the IBM PC and its many variants introduced new problems into the world of investigation: the volume of data, the ability to alter data without trace, and the ability to hide or delete data. Computing was made available to the masses, which naturally included the criminal fraternity. It became apparent that a level of specialist knowledge was needed to investigate this new technology, and thus was born the science of "forensic computer examination."
As previously mentioned, the North American organizations that were initially most active in the computer forensic field from the mid-1980s to the early 1990s were the IRS-CID and Revenue Canada. In 1984, the FBI established the Computer Analysis and Response Team (CART), based at FBI Headquarters in Washington, District of Columbia, to provide computer forensic support; however, CART did not become fully operational until 1991.
No specific forensic tools existed in the 1980s, so existing data protection and recovery utility suites, such as Peter Norton Inc.'s Norton Utilities, Central Point Software's PC Tools, and Paul Mace Software's Mace Utilities, were used. As of January 1990 there were 100,000 registered users of Mace Utilities and, as most people would know, Norton Utilities has become probably one of the most popular PC utility suites available. Due to the lack of specific forensic software, personnel from the IRS, including Michael R. Anderson, Andrew Fried, and Dan Mares, together with Stephen Choy from Revenue Canada, later developed their own suites of MS-DOS-based (Microsoft Disk Operating System-based) forensic utilities, many of which have been refined and updated, and persist in use to this day.
Initially, the only method available to the forensic examiner to preserve evidence was to take a logical backup of files from the evidence disk to magnetic tape, hopefully preserving appropriate file attributes, restore these files to another disk, and then examine them manually using command-line file management software, such as Executive Systems Inc.'s XTree Gold and The Norton Commander, together with appropriate file-viewing software.
Many early mainframe and minicomputer backup packages used the "sector imaging" method. By the mid-to-late 1980s, however, the image backup had been replaced by the logical backup, which copied the file and directory structure of a disk to the backup media and allowed the user or administrator to selectively back up and restore files from the system. This was a leap forward as far as the user was concerned, but was less useful from an evidentiary perspective.
In the United States, the requirement for forensically sound bit-stream image duplication of hard drives was identified by a small, informal group of like-minded U.S. federal, state, and local computer forensic practitioners in late 1989, during the development of the first computer forensic science training courses at the Federal Law Enforcement Training Center (FLETC). The first specific forensic program created to perform this task was named IMDUMP, developed by Michael White, who was employed by Paul Mace Software at the time.
Forensic imaging requirements in the United Kingdom developed during research work on computer viruses in the mid-to-late 1980s. Bit-stream cloning of hard drives infected by viruses allowed the exact effect of a virus to be examined through the actual execution of the virus code. These requirements led to the development of the original Disk Image Backup System (DIBS), a forensic hardware and software solution using a parallel-port-connected magneto-optical drive (MOD), which was first sold commercially in 1991.
Law enforcement in Australia, which had always had a close working relationship with U.S. and Canadian law enforcement, heard of and acquired forensic tools including Safeback, the Mares and Fried utilities, and similar tools. Rod McKemmish, one of the coauthors of the textbook cited in the references, has the distinction of being the primary developer of forensic tools in Australia, and his Fixed Disk Image (FDI) software provided functionality almost identical to Safeback but free to Australian law enforcement.
Besides the lack of specific forensic tools, the second major deficiency was the lack of specific training for computer search, seizure, and forensic analysis. The same people in the United States and Canada who identified the deficiencies with respect to tools also began to identify the training requirements. Michael R. Anderson, then a special agent with the IRS-CID, developed the Seized Computer Evidence Recovery Specialist (SCERS) curriculum for the FLETC and was a cofounder of the International Association of Computer Investigative Specialists (IACIS).
IACIS, the oldest and probably best-known computer forensic organization in the world, was formed in 1990 in Portland, Oregon to provide training and certification for law enforcement computer forensic examiners. IACIS training was the forum in which computer forensic specialists from the United States, Canada, Australia, and many other countries first became acquainted with the principles, techniques, and tools of computer forensics, many of which are still valid to this day.

4. COMPUTER FORENSIC PROCESS
As in any investigation, establishing that an incident has occurred is the first key step. Secondly, the incident needs to be evaluated to determine if computer forensics may be required. Generally, if the computer incident resulted in a loss of time or money, or the destruction or compromise of information, it will require the application of computer forensic investigative techniques. When applied, the preservation of evidence is the first rule in the process. Failure to preserve evidence in its original state could jeopardize the entire investigation. Knowledge of how the crime was initiated and committed may be lost for good. Assignment of responsibility may not be possible if evidence is not meticulously and diligently preserved. The level of training and expertise required to execute a forensics task will largely depend on the level of evidence required in the case.
If the result of the investigation is limited to administrative action against an employee, the level of rigor required is lower than if the case is taken to court for civil or criminal litigation.
There are five basic steps in the computer forensic process:
1. Preparation (of the investigator, not the data)
2. Collection (of the data)
3. Examination
4. Analysis
5. Reporting
The investigator must be properly trained to perform the specific kind of investigation that is at hand. Tools that are used to generate reports for court should be validated. There are many tools to be used in the process. One should determine the proper tool to be used based on the case.

5. WHO CAN USE COMPUTER FORENSIC EVIDENCE?
Many types of criminal and civil proceedings can and do make use of evidence revealed by computer forensics specialists:
Criminal Prosecutors use computer evidence in a variety of crimes where incriminating documents can be found: homicides, financial fraud, drug and embezzlement recordkeeping, and child pornography.
Civil litigations can readily make use of personal and business records found on computer systems that bear on: fraud, divorce, discrimination, and harassment cases.
Insurance Companies may be able to mitigate costs by using discovered computer evidence of possible fraud in accident, arson, and workman's compensation cases.
Corporations often hire computer forensics specialists to ascertain evidence relating to: sexual harassment, embezzlement, theft or misappropriation of trade secrets and other internal/confidential information.
Law Enforcement Officials frequently require assistance in pre-search warrant preparations and post-seizure handling of the computer equipment.
Individuals sometimes hire computer forensics specialists in support of possible claims of: wrongful termination, sexual harassment, or age discrimination.

6. FORENSICS MODEL FOR LAW ENFORCEMENT
It is useful to draw the evidentiary requirements, legal considerations, and principles together into a framework or model that provides coherency and consistency for all aspects of conducting computer forensics.
In developing such a framework, it is important to focus on the challenges that may be presented to the examiner in applying the model to carry out examinations, and in the presentation of the resulting evidence in such a way that it is subsequently accepted in court.
1. Expertise test: A key test will be to challenge the expertise and credibility of the computer forensic examiner who conducts the forensic analysis and presents the resulting evidence. This test essentially seeks to establish the strength and reliability of the expert's knowledge as applied to the IT environment from which the electronic evidence is extracted.
2. Methodology test: The methodology test probes the processes and procedures adopted by the computer forensic examiner during the computer forensic examination. The adoption of poorly constructed methodologies can lead to erroneous analysis results, and may even lead to the destruction of, or alterations to, potential evidentiary data.
3. Technology test: The technology test examines the technology used during the forensic examination process, and aims to test the accuracy, reliability, and relevance of the technology as applied in the computer forensic analysis.

6.1 Computer forensic—secure, analyze, present (CFSAP) model
The CFSAP (computer forensic—secure, analyze, present) model essentially combines the four key elements of computer forensics (identification, preservation, analysis and presentation) into three distinct steps. Each step combines a number of processes to achieve three key objectives:
1. The securing of potential evidence;
2. The analysis of secured data;
3. The presentation of the analysis results.
The CFSAP model (Figure 1) provides a framework within which detailed individual forensic processes and procedures may be developed. It is of a sufficiently high level that it can be used to develop procedures for any of the different types of computer forensics.

6.1.1 Secure—securing potential evidence
The securing of evidence encompasses both the identification of potential sources of evidence and the preservation of the data residing within each source. The development of a suitable methodology to secure electronic evidence will depend upon the rules of evidence and the technology available at the time. The primary focus of this stage is to ensure that all available evidence is identified and captured in such a way that its integrity and value are not diminished.

Figure (1): The CFSAP model

Identification The identification of data requires a comprehensive understanding of both the nature of the IT environment as well as the underlying technology. Failure to understand both of these can result in the key evidence being missed. Once potential evidence is located, and before it is preserved, the forensic examiner must ensure that it is relevant to the facts under investigation. Depending on the circumstances and grounds on which the evidence is being acquired, failure to determine relevance could see it ruled inadmissible in any future legal examination.
Preservation Once potential evidence has been identified, it will be necessary either to preserve the original data in the state in which it is found or to make an exact duplicate of the data. Essentially, computer forensic rule 1 (minimal handling of the original) and rule 3 (comply with the rules of evidence) are critical in the securing stage. The preservation of data under these circumstances involves two distinct steps:
1. Duplication;
2. Authentication.
While it is preferred that the original source of evidence be preserved, in reality this may not be possible. Electronic evidence may reside on a computer system that is critical to the ongoing operations of a business, or alternatively it may reside on a computer geographically removed, yet remotely accessible. In either case, securing of the original is not realistic. In such instances, it is desirable to duplicate the data by making an exact copy through the use of forensically sound duplication techniques. Similarly, where data of evidentiary value is being collected in real time, as in the case of live monitoring of system logs during unauthorized network activity, it would be unrealistic to take the receiving system off-line for the purposes of preserving data captured.
Interestingly in some instances, such as some criminal investigations, retention of the original data by the systems owner may constitute a continuation of an offence, thereby necessitating the seizure of the original computer system(s).
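To make the idea of forensically sound duplication concrete, here is a minimal sketch in Python; the device and image paths are hypothetical, and a real acquisition would be performed with a validated imaging tool behind a hardware write blocker. The source is copied block by block while a fingerprint of the data is computed as it is read.

    import hashlib

    def duplicate_image(source_path, dest_path, block_size=1024 * 1024):
        """Copy source_path to dest_path block by block, hashing the data as it is read."""
        digest = hashlib.sha256()
        with open(source_path, "rb") as src, open(dest_path, "wb") as dst:
            while True:
                block = src.read(block_size)
                if not block:
                    break
                dst.write(block)
                digest.update(block)
        return digest.hexdigest()

    # Hypothetical paths: a write-blocked device node and an image file on the
    # examiner's own storage.
    fingerprint = duplicate_image("/dev/sdb", "evidence/suspect_disk.dd")
    print("Acquisition fingerprint:", fingerprint)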
After duplicating the data it is necessary to authenticate the copy by applying some means of comparison with the original. This is particularly a problem if the original data resides on a live system that is constantly subject to change. This raises the question: why would you need to authenticate a copy some time after the duplication process has occurred? The simple answer is that in some instances it could be alleged that the copy has been altered, either deliberately or inadvertently, and as such is not reliable. The best way to authenticate data is to fingerprint the files by generating a one-way hash function (OWHF) value for both the original and the copy at the time of duplication. If the duplication process is accurate, the fingerprints will match. Additionally, if it is alleged that the data has been tampered with, retaking the mathematical fingerprint of the copy should yield the same result as that derived at the time of duplication.
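The authentication step just described can be sketched in the same way (Python; the recorded hash value and file name below are hypothetical). The fingerprint taken at the time of duplication is recomputed from the copy and compared, so any later allegation of tampering can be answered by repeating the calculation.

    import hashlib

    def hash_file(path, block_size=1024 * 1024):
        """Recompute the SHA-256 fingerprint of an image file."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(block_size), b""):
                digest.update(block)
        return digest.hexdigest()

    # Fingerprint recorded at the time of duplication (hypothetical value).
    recorded = "9f2c...e41a"
    current = hash_file("evidence/suspect_disk.dd")
    print("Copy authenticated" if current == recorded else "Copy does NOT match the original")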
6.1.2 Analyzing data
The analysis of potential digital evidence essentially encompasses three steps:
1. The preparation of data;
2. The processing of extracted data;
3. The interpretation of data.

Preparation This is the preparatory process in which captured data is made ready for processing. Whether the original data is seized, or an authenticated copy of the original is obtained, it is essential that the forensic examiner possess a master copy of the data to be examined. The master copy is simply an authenticated copy of the original that is preserved for future reference. To guard against possible changes, it is not uncommon for the master copy to be stored on some form of permanent storage media (e.g., CD-ROM or DVD). The master copy forms the benchmark upon which the forensic process may proceed. To this end, it is regarded as standard practice to work from a secondary copy of the master copy. If changes to the data occur during the examination process, or some form of research and development on the data is required to overcome a problem, the computer forensic examiner still has, by way of the master copy, an authenticated duplicate from which to recommence the examination.

Processing The processing of data essentially encapsulates the application of computer technology, in the form of data recovery and analysis tools, to the retrieval of relevant electronic evidence. Simply put, it is the finding of the proverbial needle in the haystack.
The processing of data entails two key steps:
1. The search for relevant data;
2. The extraction of relevant data.
The search for relevant data involves scanning through all preserved data, searching for information that matches a predetermined criterion. The predetermined criteria can encompass things such as key words, recorded events or activities, system changes or anomalies, or disguised or encrypted data. In searching for relevant data, the forensic examiner will not only examine current files, but also consider searching for deleted material or residual data. Additionally, the computer forensic examiner may apply various pattern matching or data analysis techniques in an effort to identify relationships between data that may afford valuable evidence of an event or course of conduct. The extraction of data can only take place when relevant data has been located.
The extraction process simply involves the isolation and duplication of the relevant items of data from the copy undergoing examination. These extracted copies form the basis of the electronic evidence for the particular matter under investigation.
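A minimal sketch of the search step is shown below (Python; the image name and keyword list are hypothetical). The preserved image is scanned for predetermined keywords and the byte offset of every hit is recorded, so the examiner can later map hits back to files or unallocated clusters.

    import mmap
    import re

    def keyword_search(image_path, keywords):
        """Scan a raw image for keywords; return (keyword, byte offset) pairs."""
        pattern = re.compile(b"|".join(re.escape(k.encode()) for k in keywords),
                             re.IGNORECASE)
        hits = []
        with open(image_path, "rb") as img, \
             mmap.mmap(img.fileno(), 0, access=mmap.ACCESS_READ) as data:
            for match in pattern.finditer(data):
                hits.append((match.group().decode(errors="replace"), match.start()))
        return hits

    # Hypothetical working copy and search terms.
    for keyword, offset in keyword_search("evidence/suspect_disk.dd",
                                          ["invoice", "transfer", "offshore"]):
        print("%-10s found at byte offset %d" % (keyword, offset))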
Interpretation: The interpretation stage relies heavily on the knowledge and skill of the computer forensic examiner, rather than on the capabilities of the forensic technology relied upon in the processing step. Once the computer forensic examiner has isolated electronic evidence, he or she must be able to interpret it to establish its meaning and, therefore, its bearing on any investigation or inquiry. The interpretation of data is undertaken to establish key issues such as relevance, context, ownership, and identity. It is in the interpretation stage that the computer forensic examiner may express an opinion or belief regarding things such as the following:

How the data came to be on the computer system;
The accuracy and reliability of the data;
The possible identity of the owner;
The purpose of the data.

In expressing an opinion that may be used in subsequent legal proceedings, the computer forensic examiner must possess sufficient knowledge regarding the IT environment from which the data is derived to satisfy the expertise test.
6.1.3 Presentation of results

The presentation of the results of a computer forensic examination is the final step in the computer forensic process. It is at this point that all relevant data should have been identified, preserved, and extracted. In presenting the results of an examination, it is critical for the computer forensic examiner to be able to clearly and concisely convey both the results obtained and the meaning of those results. To this end, it is essential that the computer forensic examiner be able to explain complex technological concepts and techniques in easy-to-understand terms.
This is important given that in some instances the results of the computer forensic examination may end up being tendered in evidence before a court of law. Consequently, the computer forensic examiner must be able to convey the significance of any results to persons who may have little or no understanding of the technology employed.
To assist the computer forensic examiner in the presentation stage, it may be necessary to employ various visualization tools, such as flow charts and link analysis charts, in an effort to explain underlying concepts and relationships. While such visualization techniques may assist, it should be remembered that they are merely an aid to, and not a substitute for the actual evidence.

In presenting the results, the computer forensic examiner is faced with the possibility of being challenged on his/her findings based on the following:
The tools used;
The methodology employed;
The examiner's expertise.

A failure to satisfy any challenge can result in the electronic evidence being regarded with suspicion, and may ultimately result in the computer forensic examiner's credibility being challenged.

7. FORENSIC TOOLS
The Computer Forensic Investigative Toolkit (CFIT1) software, described later in this section, provides facilities for the analysis of data streams (such as disk drives, network data, disks, and telecommunications call records), the ability to add and integrate a variety of specialized interactive forensic tools into a common, easy-to-use visual framework, and the ability to capture the history of an investigation in a simple visual manner. The last of these, the integration of the various aspects of a forensic investigation into a case-based portfolio, is perhaps the most noteworthy recent development in the functionality of some of the commonly used tools. We can, in general, identify three categories of forensic functionality: imaging, analysis, and visualization. These categories can naturally be subdivided further and, as noted earlier, an increasing number of tools integrate these functionalities within one toolkit or workbench:
1. Imaging:
a. Imaging volatile memory (including on PDAs and mobile phones);
b. Disk and file imaging;
c. Write blockers;
d. Integrity code generators and checkers.
2. Analysis:
a. Ambient data recovery and the searching of raw disk data for text strings, by sector (typically including unused areas);
b. Data and file recovery;
c. Disk and file system integrity checking tools;
d. File conversion (i.e., conversion of proprietary files into text files or vice versa, or between proprietary formats, to facilitate further processing);
e. Data filtering by date last modified and other file properties, such as file or application type, for example e-mail, graphics, word processing, spreadsheet, or presentation files (a short sketch of such filtering follows this list);
f. Search tools, including sophisticated search engines with fuzzy logic capability;
g. Data mining tools.
3. Visualization:
a. Time-lining;
b. Link analysis tools.
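As a rough illustration of item 2(e) above, the following sketch (Python; the mount point, cut-off date, and extensions are hypothetical) walks a read-only mounted working copy and lists files of selected types that were modified after a given date.

    import os
    from datetime import datetime, timezone

    def filter_files(root, extensions, modified_after):
        """Yield (path, mtime) for files of the given types changed after a date."""
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if os.path.splitext(name)[1].lower() not in extensions:
                    continue
                path = os.path.join(dirpath, name)
                mtime = datetime.fromtimestamp(os.path.getmtime(path), tz=timezone.utc)
                if mtime > modified_after:
                    yield path, mtime

    # Hypothetical read-only mount of the working copy.
    cutoff = datetime(2002, 1, 1, tzinfo=timezone.utc)
    for path, mtime in filter_files("/mnt/evidence", {".doc", ".xls", ".eml"}, cutoff):
        print(mtime.isoformat(), path)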

SC Magazine has reviewed a number of forensic tools, first in September 2000: Byte Back (Tech Assist Inc.), Drive Image Pro 3.0 (PowerQuest Corporation), EnCase 2.08 (Guidance Software Inc.), Linux dd 6.1 (Red Hat Inc.), Norton Ghost 2000 Personal Edition (Symantec Corporation), SafeBack 2.0 (New Technologies Inc.), and SnapBack DatArrest 4.12 (Columbia Data Products Inc.); and then in April 2001: Byte Back (Tech Assist Inc.), DriveSpy (Digital Intelligence Inc.), EnCase (Guidance Software Inc.), Forensic Toolkit (AccessData Corporation), and MaresWare Suite (Mares and Company).
Automated Computer Examination System (ACES) is worthy of special mention. It was developed by the FBI to provide a forensic tool for LE which, inter alia, supports the identification of known files (e.g., executables) and thus their exclusion from further investigation, considerably facilitating the work of the investigator. Common estimates are that a typical personal computer or workstation will contain on the order of tens of thousands of standard files, which can safely be excluded from analysis.
Further development of ACES functionality appears to have been subsumed into the NIST projects referred to later. Another tool used by U.S. government agencies, including the Department of the Treasury and the IRS, and also by the Australian Federal Police, is ILook Investigator, examined in detail later. FBI CART has announced that it has suspended further training in and development of ACES in favor of ILook Investigator. A number of tools, including both ILook Investigator and EnCase, support the import and use of hash sets from the Hashkeeper database of the U.S. DOJ National Drug Intelligence Center. Another highly regarded tool with a built-in Known File Filter (KFF) capability is Forensic Toolkit from AccessData Corporation, which also provides Password Recovery Toolkit, one of the leading password recovery packages. A survey of 151 U.S. LE agencies and other federal organizations (including the FBI, OIG/NASA, and NIPC) found that some 69% of investigations use EnCase, 55% use Safeback, and 27% use ILook. Noteworthy was that 41% of the agencies/organizations surveyed were dissatisfied with the tools at their disposal.
We present an overview of three different forensic tools currently available: EnCase, ILook, and CFIT.

7.1 EnCase
EnCase (EnCase 3, June 2001), distributed by Guidance Software, is a computer forensics software product used by many LE and information security professionals. Under ongoing development since 1998, it is one of the few fully integrated Microsoft Windows-based products for forensic investigations. EnCase is a direct descendant of the Expert Witness software previously distributed by ASR Data (pre-1998), and early versions of EnCase were very similar to Expert Witness.
The EnCase integrated environment means that the EnCase software acquires the evidence as a verifiable, proprietary bit-stream image (called an Evidence File, EF), mounts the image EF as a read-only virtual drive, and reconstructs the file system structure using the logical data in the image. This integrated procedure eliminates the time-consuming sequence of steps normally associated with traditional command-line-based imaging and ensures that all the evidence and meta-evidence (such as timestamps) remains forensically unaltered. The acquired EF is available as a lossless compressed image, and includes cyclic redundancy checks and an MD5 hash value to ensure data integrity. EnCase can image different forms of media, such as SCSI/IDE drives and
Zip/Jaz drives, as well as RAID disk sets. The investigator can also bypass the acquisition of an EF by prescanning an evidence drive, using a parallel port or 10BASE-T network cable between the investigator's computer and the target computer and invoking the remote preview feature. This makes it easy for an investigator to quickly undertake a perfunctory forensic analysis of the drive without incurring the overheads of EF creation. Previewing is useful when a preliminary look at the evidence storage media is warranted by time constraints, such as during onsite inspections. Unfortunately, in preview mode the investigator is unable to save any of his or her findings, such as search results, as all of these will be lost once the computers are disconnected.
Once the EF has been created, the investigator can apply one of several integrated multitasked tools within a common graphical user interface to analyze the file system. File systems such as Microsoft Windows FAT and NTFS, as well as UNIX file systems, can be reconstructed. The user interface displays several EnCase views, such as the Case View, Bookmarks View, and Keywords View, together with associated supporting views, such as the Table View, Gallery View, Timeline View, and the Case Report.
The Case View displays all the evidence included in a case for analysis in a convenient tree of folders, as found in a Microsoft Windows Explorer view. It can also display recovered folders for an EF, that is, subfolders and files found in unallocated disk clusters that have been overwritten, as well as perform a signature analysis of every file in the EF. Signature analysis is useful for (1) identifying any discrepancies between a file's extension and the file's header, and (2) building hash sets for file filtering. Hash sets are used in the context of search operations for eliminating well-known files, such as operating system files, or for including selected files and bringing these to the investigator's attention, such as pornographic files or noncompliant software.
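The signature analysis described above can be illustrated with a small sketch (Python; the signature table holds only a handful of common types and the file name is hypothetical). The first bytes of a file are compared with known magic numbers and a mismatch with the claimed extension is flagged.

    import os

    # A few well-known file signatures (magic numbers); real tools carry thousands.
    SIGNATURES = {
        b"\xff\xd8\xff": ".jpg",
        b"\x89PNG\r\n\x1a\n": ".png",
        b"%PDF": ".pdf",
        b"PK\x03\x04": ".zip",
    }

    def check_signature(path):
        """Return (claimed_ext, detected_ext) for a file, or None if the header is unknown."""
        with open(path, "rb") as f:
            header = f.read(8)
        for magic, ext in SIGNATURES.items():
            if header.startswith(magic):
                return os.path.splitext(path)[1].lower(), ext
        return None

    result = check_signature("evidence/holiday.txt")   # hypothetical file
    if result and result[0] != result[1]:
        print("Extension mismatch: claims %s, header says %s" % result)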
E-mail attachments are extracted, zip files are automatically unzipped, and compound documents (such as Microsoft Word documents) can be recovered. An associated Table View displays the subtree of folders and files of a Case View tree node, together with the file type (e.g., deleted file, unallocated space, or deleted and overwritten file) and a set of sortable attributes (e.g., file name, file extension, and file timestamps). File contents can always be viewed in text or hex format in the bottom pane of the EnCase GUI. The associated file clusters are displayed in the Disk Surface View, together with their disk geometry location values.
The Keywords View enables the investigator to build a set of search terms that can be placed in a set of keyword folders (a keyword folder is a type of user dictionary).
Keyword search can be case sensitive, grep-based (i.e., regular expression), or Unicode.
Images can also be searched and displayed in a thumbnail picture viewer called the
Gallery View.
The Bookmark View displays the bookmarks, such as EF, text fragments and images, that the investigator has previously bookmarked. The Bookmark View also displays keyword search results as the search hits automatically become bookmarks.
Bookmarks are a convenient way of identifying, for example, particular clues and files and writing comments in each bookmark entry. Selected bookmarks can then be incorporated in the case report.
A useful feature of EnCase is the inclusion of a scripting language, the EScript macro language, which allows the more adventurous investigator to construct his or her own custom forensic tools and filters for execution within EnCase. This requires some knowledge of object-oriented programming, as it is based on the C++ language paradigm.

EnCase is a comprehensive (in terms of file system and media type support) and integrated forensic tool that allows investigators to perform useful basic forensic analyses. Its user interface is simple and easy to use and provides some useful functionality (such as the instant decoding of non-text data for meaningful interpretation and integrated reporting). The interface, however, could become more cluttered as more forensic tools are added in the future.

7.2 ILook Investigator
The ILook Investigator, or simply ILook, forensic software is developed and owned by Elliot Spencer and the Criminal Investigation Division of the United States Internal Revenue Service (U.S. IRS), U.S. Treasury Department. It can be downloaded from their Web site (http://www.ilookforensics.org), though it is only available to "LE personnel, forensic personnel working for LE agencies with a statutory role, national security, and military police agency staff." To use the software, a password is needed, which can be obtained by registering with the authority. It is claimed on the Web site to be used by "thousands of LE labs and investigators around the world", including some Australian agencies, and it has been adopted by the U.S. IRS and FBI as a forensic analysis platform. ILook is designed to allow an investigator to access the partition file system(s) imaged during the evidence-gathering process and undertake an extensive forensic analysis. Currently, ILook (version 7, July 2002) is only supported if installed on a Microsoft Windows NT, Windows 2000, or Windows XP operating system.
Investigators using ILook need a relatively high degree of technical knowledge to drive it effectively. ILook can be used to image any attached media device; however, it relies on an alternative write-blocking mechanism. In addition to its own imaging tool, ILook can identify and reconstruct raw bit-stream images, ISO and CIF CD images, and VMware virtual disks, as well as image files generated by other forensic software (e.g., EnCase and Safeback). Investigators can investigate the image map by traversing the image to examine the partition structures, and can probe the image for specific meta-structures, such as boot records and partition tables, that could be used to recover a (broken) file system. ILook can reconstruct Microsoft's FAT, VFAT, and NTFS, Macintosh's HFS and HFS+, Linux's Ext2FS and Ext3FS, Novell's NWFS, and CDFS file systems.
The ILook software (see Figure 2) provides a Microsoft Windows Explorer-like interface that consists of various window frames, for example:

Figure (2). ILook Investigator.

1. An EvidenceWindow frame allows the investigator to view and navigate the partitions and file system structure of a suspect disk. It also displays a set of additional Virtual Folders that contain pointers to undeleted file streams, files that have been eliminated from the investigation, files or unallocated sectors that have been tagged for a specific purpose, files identified in previous searches, and files with user-defined specifications or categories (e.g., deconstructed files).
2. A FileWindow frame lists all the files and file properties stored in the selected folders in the EvidenceWindow.
3. An InfoWindow frame gives the investigator access to groups of information related to the objects selected in the EvidenceWindow and FileWindow frames. These groups of information are arranged as a set of tab window panes and include:
a. A Disk View pane that displays the disk partition layout together with a
Norton-like two-dimensional partition cluster map and cluster content;
b. A File View tab pane that displays file contents in their intended manner or in raw text and/or hex view;
c. A tab pane displaying the audit log of an investigator‘s activity during a session (e.g., date/time and actions undertaken);
d. A search results window tab pane;
e. A tab pane incorporating an editor and execution engine for processing data in specific ways and for undertaking repetitive tasks using a BASIC-like scripting language;
f. A tab pane for displaying thumbnail graphical images.

Investigators can undertake string term searches with the help of one of three search engines—a standard search engine for a small number of search terms (with
Boolean combinations), a bulk search engine for a large number (up to 1,000 search terms stored in a file) of simultaneous searches, and an indexed search engine for fast repetitive searches (requiring the investigator to generate an index of the case data prior to invoking any indexed searches). Searches can be undertaken on all the data associated with a case (e.g., files, slack, and free space), as well as compressed archive files and file signatures (file magic numbers stored in the first few bytes of a file). Magic numbers can also be used for salvaging (or "carving" in ILook parlance) files in free space (e.g., deleted files that cannot be undeleted using a Norton-like recovery method). ILook also allows the investigator to search for files based on specific attributes (e.g., name, date, and MAC times). Date/time-based searches use a basic calendar as the basis for date/timestamp selection and viewing. A simple frequency analysis of the file MAC times is also displayed with the calendar representation. (Note that ILook allows the investigator to manipulate date/times on a partition-wide basis.) An interesting search facility is the search bot, an autonomous search engine that runs in the background, enabling the investigator to continue his or her examination while the search is being performed.
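The carving of files from free space mentioned above can be sketched roughly as follows (Python; restricting the search to a single JPEG header/footer pair and using a hypothetical image name are deliberate simplifications, and real carvers handle fragmentation and many more formats).

    import mmap

    JPEG_HEADER, JPEG_FOOTER = b"\xff\xd8\xff", b"\xff\xd9"

    def carve_jpegs(image_path, out_prefix="carved", max_size=10 * 1024 * 1024):
        """Carve candidate JPEG files from a raw image by header/footer matching."""
        count = 0
        with open(image_path, "rb") as img, \
             mmap.mmap(img.fileno(), 0, access=mmap.ACCESS_READ) as data:
            start = data.find(JPEG_HEADER)
            while start != -1:
                end = data.find(JPEG_FOOTER, start, start + max_size)
                if end != -1:
                    with open("%s_%04d.jpg" % (out_prefix, count), "wb") as out:
                        out.write(data[start:end + 2])
                    count += 1
                start = data.find(JPEG_HEADER, start + 1)
        return count

    print(carve_jpegs("evidence/suspect_disk.dd"), "candidate files carved")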

As indicated in the introduction to this section, a useful file filtering facility offered by ILook is based on the hash analysis of file content. Each file can be identified by a unique message digest (one-way hash), which is used to either include the file in, or exclude it from, the investigation by performing a match of the file content hash. Hashes can be generated in two ways, namely internally from files selected by the investigator, or from known files such as operating system files. In the first case, the investigator can generate and export his or her own hash data set using standard CRC32, MD5, and SHA-N (N = 1 or 2) formats, depending on the false positive rate required. Specifying a small number of false positives necessitates the use of hash algorithms with larger message digest sizes (such as SHA). In the second case, ILook will perform the hash match analysis against known files (including cryptography and steganography programs). These hash sets are available as standard hashing toolsets, such as the U.S. DOJ NDIC's Hashkeeper; see also the NIST NSRL Reference Data Set. The investigator can choose to perform either positive hash analysis (searching for files that do match) or negative hash analysis (eliminating matching files, thereby reducing the number of files that require further investigation). File deconstruction, that is, the interpretation of a limited set of compound file formats (such as Microsoft Outlook Express files, Netscape cache files, and AOL mailboxes) and the extraction of data therein, can also be performed using the ILook forensic software.
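A negative hash analysis of this kind can be sketched as follows (Python; the hash-set file, its one-MD5-per-line format, and the mount point are hypothetical). Every file under the working copy is hashed, and files whose MD5 appears in the known-file set are eliminated from further review.

    import hashlib
    import os

    def md5_of(path, block_size=1024 * 1024):
        """MD5 of a file, read in blocks."""
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(block_size), b""):
                digest.update(block)
        return digest.hexdigest()

    def filter_known(root, known_hashes):
        """Return files under root whose MD5 is NOT in the known-file hash set."""
        remaining = []
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                if md5_of(path) not in known_hashes:
                    remaining.append(path)
        return remaining

    # Hypothetical hash set: one lowercase MD5 value per line.
    with open("hashsets/known_os_files.txt") as f:
        known = {line.strip().lower() for line in f if line.strip()}
    print(len(filter_known("/mnt/evidence", known)), "files left to examine")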
Once data extraction has been undertaken by the ILook "deconstruction engine," the contents of the extracted data structures can then be investigated. Content analysis can subsequently be undertaken using one of the three search engines mentioned earlier.

7.3 CFIT1
CFIT1 is an integrated computer forensics tool developed by the DSTO, Department of Defence, Australia. CFIT1 provides efficient and flexible automated forensic methods for analyzing the content of data streams such as disk drives, network data, disks, and telecommunications call data, thereby enabling investigators to discard data that are peripheral to their investigation. CFIT1 provides a forensic problem-solving environment that integrates tools in a visual framework for investigating the unauthorized use of computer and network facilities. The main advantages of CFIT1 are the ability to integrate multiple interactive forensic tools into a common, easy-to-use visual framework; the facility for adding new specialized forensic tools to the framework; and the ability to capture the history of an investigation in a simple visual manner.
The basic investigative environment in CFIT1 is the case, in which investigators can work individually or as a team to solve one or more criminal cases. Multiple networked investigators can work on a case at the same time using CFIT1. The CFIT1 platform includes case management, forensic data stream access and manipulation, data visualization, and forensic processing. CFIT1 incorporates a two-dimensional visual language environment, called Picasso, for graphically expressing a forensic case on a visual framework or workbench. Forensic tools that analyze the case data can be dragged and dropped onto the workbench, interconnected, and executed. Investigators use the interactive visual workbench to undertake an investigation and share their results with other investigators working on the same case.
Forensic tools included in CFIT1 include a hard disk analyzer, file system analyzer (currently ext2 and FAT), log extractor, ontological search engine, unallocated space extractor, time event resolver, and time-lining tool. Investigators can interconnect these tools using flows within Picasso, though the interconnections are not always universal since some tools cannot interconnect with other tools due to semantically incompatible data types. CFIT1 also ensures the consistency in the interpretation of time differences from computers running different operating systems (which may interpret time in different ways), located in different countries and possibly covering multiple time zones. It does this by associating each piece of case evidence, or metadata generated by forensic tools, with a time reference defined by the investigator. These time references are then automatically mapped into the common UTC timeframe.
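The mapping of investigator-defined time references into a common UTC timeframe can be illustrated with a brief sketch (Python; the event records and zone names are made up). Each event keeps the time zone assigned by the investigator and is converted to UTC before events from different machines are merged onto a single timeline.

    from datetime import datetime, timezone
    from zoneinfo import ZoneInfo

    # Hypothetical events: (local timestamp, time zone assigned by the investigator).
    events = [
        ("2002-07-01 09:15:00", "Australia/Adelaide"),
        ("2002-06-30 18:45:00", "America/New_York"),
        ("2002-07-01 01:05:00", "Europe/London"),
    ]

    def to_utc(timestamp, zone):
        """Interpret a local timestamp in the given zone and convert it to UTC."""
        local = datetime.strptime(timestamp, "%Y-%m-%d %H:%M:%S")
        return local.replace(tzinfo=ZoneInfo(zone)).astimezone(timezone.utc)

    # Merge everything onto one UTC timeline.
    for moment, zone in sorted(events, key=lambda e: to_utc(*e)):
        print(to_utc(moment, zone).isoformat(), "originally", moment, zone)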
An example computer forensic tool available in CFIT1 is the Ferret Discovery Engine, a tool for textual concept ontology generation, navigation, and searching. It can be used for searching files or documents for particular concepts and identifying those documents that might have forensic significance. It is particularly useful for searching text-based files, though it can also be used for searching text in nonprintable files such as binaries (executable files) and even network packets. Ferret allows the investigator to:
1. Discover suspicious text byte streams, such as files/documents from one or more file systems;
2. Establish the inherent relationships between the streams based on a set of concepts.
An ontology is a domain of discourse in which one or more keywords (or terms, in Ferret terminology) are organized as a domain-specific, graph-based concept structure that best describes knowledge or information about a given domain. A concept is a set of one or more terms and their sets of relationships with other concepts (relationships are defined below). In Ferret, a concept is initially restricted to containing a single term (e.g., money, transfer, or account) and its set of relationships with other concepts.
The most basic function available in Ferret is to perform searches for a set of terms on some input data streams defined by upstream forensic tools in CFIT1. This can be most useful when searching unallocated space and hidden space on disks.
Investigators can select a set of search options (such as stop terms, allowable errors in each term, and case sensitivity), as well as being able to subselect data streams and run multiple concurrent searches. Figure 3 shows the results of a search operation for a single term on eight input data streams (in this case, the streams are Linux log files). The terms kernel and apollo have been found in four of the data streams and can be viewed in the messages log file in the lower panel.

Figure (3): Search results using Ferret

The term nodes in the graph-based concept structure are usually related to each other by one or more relationships (represented as the arcs of the graph), such as generality, specificity, synonymy, and meronymy. Semantic relationships may also arise in the context of the text language model employed. The language model captures and characterizes the regularities in the natural language used in the text stream. For example, short- and long-distance textual information such as N-grams and triggers describe the underlying associative relationships used in a text document. An N-gram is a sequence of contiguous words in a text stream, with its significance in that text stream defined by the conditional probability of one word given the preceding sequence of words. Consequently, N-grams capture short-term and local dependencies in the text stream well. N-grams can, unfortunately, capture nonsensical text frames that are unrelated to their linguistic role. A trigger is a pair of terms that co-occur, usually within a fixed word window size, in the text stream. Triggers are effectively long-distance bi-grams (2-grams) capable of extracting relationships from a large-window document history. Triggers have been shown to be effective in capturing semantic information over small-to-medium text stream window sizes (distances up to 5). Ferret uses triggers to extract semantic relationships within text documents.
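A minimal sketch of N-gram and trigger extraction is shown below (Python; the sample sentence and window size are illustrative only). Contiguous word sequences give the N-grams, while pairs of words co-occurring within a fixed window give the trigger pairs that Ferret-style tools use to suggest semantic relationships.

    from collections import Counter

    def ngrams(words, n):
        """Contiguous n-word sequences."""
        return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

    def triggers(words, window=5):
        """Pairs of distinct words co-occurring within a fixed window."""
        pairs = Counter()
        for i in range(len(words)):
            for j in range(i + 1, min(i + 1 + window, len(words))):
                if words[i] != words[j]:
                    pairs[tuple(sorted((words[i], words[j])))] += 1
        return pairs

    text = "transfer the money to the offshore account then delete the account record"
    words = text.lower().split()
    print(Counter(ngrams(words, 2)).most_common(3))   # most frequent bi-grams
    print(triggers(words).most_common(3))             # most frequent trigger pairs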
The ability to describe semantic relationships using a concept graph allows the investigator to visualize the concept domain of the case under investigation much more succinctly. The graph combines both language semantic relationships and data-driven semantic relationships (e.g., triggers). This allows the investigator to navigate the concept domain and possibly discover new relationships. Figure 4 displays the concept graph within the Ferret concept browser window, showing the different concepts derived from the input data streams selected in Figure 3 and related to the central concept apollo. Some of these concepts are semantically related by generality and specificity (e.g., agency and supernatural), while others are derived from the data streams.

Figure (4): Concept graph using Ferret
In summary, CFIT1 is an easy-to-use forensic investigation environment that provides the ability to integrate multiple interactive forensic tools into a common, visual framework. It is an ongoing development and requires the addition of more file system support (e.g., NTFS is currently being included) and the inclusion of an improved reporting facility.

8. CONCLUSION
Practical investigations tend to rely on multiple streams of evidence which corroborate each other - each stream may have its weaknesses, but taken together may point to a single conclusion.
Disk forensics may remain for some time the single most important form of digital evidence. The increasing number of computer crimes means an increasing demand for computer forensics services. In a computer forensics investigation, choosing the right disk imaging tool is very important. There is no single standard for computer forensic imaging methodology or tools; this paper only provides guidance and suggestions regarding imaging tools and should not be construed as a mandatory requirement.
Today, everyone is exposed to potential attacks and has a responsibility to their network neighbors to minimize their own vulnerabilities in an effort to provide a more secure and stable network. As the enormity of the problem unfolds, we will better comprehend how vital it is to work towards dramatic changes in research, prevention, detection and reporting, and computer crime investigation. Security can no longer be thought of as an impediment to accomplishing the mission, but rather as a basic requirement that is properly resourced. Our focus has been on implementing the newest and most advanced technology, but little has prepared us for the gaping security holes we have neglected to mend along the way. From the ranks of management to every employee who works behind each terminal, the policies that protect and mitigate risks must be current, understood, and aggressively enforced.
Reporting must be standard operating procedure so that everyone can realize the total impact and define what is required for a secure cyber environment. The responsibility belongs to everyone and it is with that effort we will be able to harness the security of this new technological age. An enormous challenge lies before us and we must attack it with the same enthusiasm and determination that brought us to this new frontier.

9. REFERENCES

Textbook:
George Mohay, Alison Anderson, Byron Collie, Olivier de Vel, and Rodney McKemmish, "Computer and Intrusion Forensics", Artech House, 2003.

Websites:
1) www.computerforensicsworld.com
2) www.computer-forensic.com
3) www.cio.com/article/30022/Computer_Forensics_IT_Autopsy
