Tool mark evaluation arose from the need to assess marks left at crime scenes, but the practice developed without an academic basis. Tool mark identification lacks a scientific foundation: because the comparison method yields no quantifiable data, examiners cannot demonstrate the uniqueness of tool marks. The literature explains that the NAS report critiqued the reliability of tool mark examination in court. The examiner's subjectivity during analysis discredits the use of pattern evidence, and these subjective observations lead to errors when determining uniqueness. The judicial system must discredit the validity of tool mark examinations until a quantifiable method is established, and researchers need to apply new research to promote the acceptance of pattern evidence.
Introduction
Pattern evidence consists of indentations or impressions created by objects when they encounter a surface. One particular type of pattern evidence is tool marks. A crime scene examiner can locate marks made by certain tools at a crime scene. Tool mark examiners then use ACE-V to identify the impression made at the scene. ACE-V consists of analyzing the mark, comparing the unknown mark to a mark from a known source, evaluating the comparison, and finally having another examiner verify the results. During the comparison step, the examiner uses a comparison microscope to view the unknown and known marks side by side. The examiner can conclude the source of the mark if the known mark and the mark from the crime scene are sufficiently similar. Furthermore, examiners draw conclusions from the premise that each tool is different and creates a unique impression. Tools leave individualized marks because of striations on the tool blade; the wear and tear of a tool creates these distinct striations.
Tool mark analysis became admissible in court in the 1900s, and the field expanded with the establishment of the Federal Bureau of Investigation Laboratory in 1932. Even though the justice system accepted tool mark analysis, researchers believed the techniques of pattern evidence identification needed to progress further. In 1969, researchers established the Association of Firearm and Tool Mark Examiners to encourage future research in the field (Hamby, 1999). Courts used tool mark evidence to sway juries toward verdicts until the rise of DNA analysis, whose success led researchers to question the reliability of pattern evidence in court (Petraco et al., 2012).
The standing of tool mark evaluation diminished with the ruling in Daubert v. Merrell Dow Pharmaceuticals, which held that the admissibility of expert evidence depends in part on the known error rate of the technique. Pattern evidence, including tool mark evaluation, lacks a scientific foundation, so error rates and probabilities cannot be calculated for it (US Legal, 2015). In 2009, the National Academy of Sciences reviewed the foundation of pattern evidence, and its critiques caused the forensic community to discredit pattern evidence. The judicial system must discredit the validity of tool mark examinations until a quantifiable method is established.
Literature Review
The 2009 National Academy of Sciences critique of the foundation of pattern evidence is a prominent topic in the tool mark literature. The premise of the review was to raise the standards for the analysis of pattern evidence (Petraco et al., 2012). According to Petraco et al. (2012), the critique resulted from the forensic community's acceptance of DNA evidence: with DNA analysis, a laboratory can use analytical data, including probabilities and error rates, to identify a suspect or victim. Examiners cannot identify tool marks, unlike DNA and unlike other quantifiable evidence, using measurable data; instead, they rely on visual comparison techniques, and the foundation of tool mark identification includes no mathematical proof (Petraco et al., 2012). Acquiring measurable data would validate the use of tool mark analysis (Bachrach et al., 2010), because statistical data can confirm that an examiner reached a correct conclusion. The critique also arose from the claim that examiners cannot establish the uniqueness of pattern evidence without dependable scientific experiments (Page et al., 2011). The small sample sizes in the studies conducted so far do not support the conclusion that all tools make unique marks, and examiners lack sufficient data to extend that conclusion to a whole population. Page et al. (2011) explain that tool mark research is further limited because techniques for recreating a mark do not entirely match a real crime scene: if an examiner does not use the same surface as the one from the scene, he or she cannot accurately say the tool matches the mark. The authors explain that this limitation lets researchers determine how many characteristics a tool can have, but not the probability of re-creating an exact mark (2011). This conclusion weakens the reliability of studies claiming that tool marks are unique and reproducible.
According to Petraco et al. (2012), few experiments have studied the classification of tool marks based on quantitative values, which makes tool mark research unreliable; this unreliability, in turn, undermines the scientific foundation of tool mark identification. Determining the identity of a tool primarily from its presumed uniqueness, rather than from numerical data, decreases reliability. Reliability also depends on the surface a tool encounters and on the type of tool. Kumar et al. (2013) conducted a study demonstrating the difficulty of identifying a cutting tool from marks in telephone cables. The researchers examined 100 cut telephone cables that the police obtained from a scene, along with a sickle the suspect was carrying, and reported that identifying the tool from a mark on a telephone cable was harder than from a single wire because each cable contained multiple smaller wires (2013). A tool can make different marks depending on the surface it encounters, which can undermine the conclusion that a tool mark matches a known mark or tool. Although the surface was a limitation of the case study, the researchers found that the marks matched the sickle (2013): while the surfaces decreased reliability, the marks were still distinctive overall. The authors also note a further limitation, that not all laboratories would have the equipment needed to repeat the research (2013), so results may differ depending on the facility where an examiner works. Petraco et al. (2012) indicate that the forensic community needs to implement a standard protocol; such rules would strengthen the reliability and repeatability of tool mark research.
Baiker et al. (2015) argued that the foundation of tool mark identification needs defined numerical criteria, and they tested this approach in a controlled study targeting the "variability, reliability, and repeatability of methods used in forensic investigations" (p. 42). The examiners used a motorized apparatus to make marks in wax with a screwdriver at different angles; the apparatus provided the researchers with reliable, repeatable data. The study supplies criteria for other researchers to follow when evaluating the reliability of tool mark analysis. The authors also concluded that the angle, surface, and penetration depth of a tool affect its mark (Baiker et al., 2015). Without such criteria, reliability, and repeatability, statistical standards cannot be made universal for all tools and their marks, and the absence of this foundation discredits the identification of tool marks by comparison techniques alone.
Another predominant topic in the literature is the verification of tool mark identity through uniqueness. A study conducted by the FBI and Intelligent Automation depicted this concept by examining tool marks made under different conditions; for example, the researchers observed what occurred when a screwdriver made multiple marks on the same surface under the same conditions. They determined that examiners could accurately match a tool mark to a particular tool when using 3D imaging devices (2010). As Bachrach et al. (2010) describe, "the results of this study provide support for the concept that tool marks contain measurable features that exhibit a high degree of individuality" (p. 348-349). The researchers used an algorithm to score the tools' uniqueness (2010), which indicates that examiners can use methods containing analytical data to determine the unique nature of a tool.
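The kind of algorithmic, data-driven comparison described above can be illustrated with a minimal sketch. This is not the algorithm from the FBI/Intelligent Automation study; the profile values and the match threshold below are invented for illustration. The idea is simply that two striation depth profiles are reduced to a single similarity score, and the match decision follows from a numerical cutoff rather than a visual judgment.

```python
# Hypothetical sketch: scoring two striation depth profiles with a
# Pearson correlation and declaring a match only above a threshold.
# The profiles and the 0.8 threshold are invented for illustration;
# this is NOT the algorithm used in Bachrach et al. (2010).

from math import sqrt

def correlation(profile_a, profile_b):
    """Pearson correlation between two equal-length depth profiles."""
    n = len(profile_a)
    mean_a = sum(profile_a) / n
    mean_b = sum(profile_b) / n
    cov = sum((a - mean_a) * (b - mean_b)
              for a, b in zip(profile_a, profile_b))
    var_a = sum((a - mean_a) ** 2 for a in profile_a)
    var_b = sum((b - mean_b) ** 2 for b in profile_b)
    return cov / sqrt(var_a * var_b)

def same_source(profile_a, profile_b, threshold=0.8):
    """Declare a match only when the score clears the threshold,
    so the conclusion rests on a number, not a visual impression."""
    return correlation(profile_a, profile_b) >= threshold

# Two marks from the same (hypothetical) screwdriver should score
# highly; a mark from a different tool should not.
mark_1 = [0.1, 0.4, 0.9, 0.3, 0.7, 0.2]
mark_2 = [0.2, 0.5, 1.0, 0.4, 0.8, 0.3]   # same tool, slight noise
other  = [0.9, 0.1, 0.2, 0.8, 0.1, 0.9]   # different tool

print(same_source(mark_1, mark_2))  # True
print(same_source(mark_1, other))   # False
```

A published method would of course use aligned 3D surface scans and a validated statistical model rather than a fixed cutoff, but even this toy version shows how a quantitative score could replace a purely subjective side-by-side judgment.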
Subjectivity is another predominant issue with pattern evidence. Authors criticize pattern evidence identification because an examiner's conclusions can be subjective, and subjectivity correlates with the amount of experience an examiner has: more experience can cause an examiner to subconsciously overlook the small details of a mark (Nichols, 2007), which weakens the dependability of the examiner's explanation of his or her conclusions in court. Similarly, the identification of tool marks relies on assumptions. Examiners assume that they will continue to remember every characteristic of the marks they have studied (Page et al., 2011), an assumption tied to the idea that all tools create unique marks. Page et al. (2011) also explain that examiners endorse the uniqueness of pattern evidence because researchers have claimed that no two tool marks match, a claim made only because no one has yet observed identical marks made by different tools (2011). An examiner would need to compare many sets of tools to confirm this claim accurately. Overall, authors discredit tool mark identification on the ground that uniqueness has never been demonstrated.
Creating a more objective foundation for tool mark evaluation will improve the acceptance of pattern evidence in court. Nichols (2007) explains that the use of comparison microscopes reduces subjectivity; still, even with comparison microscopes, an examiner's bias remains. Nichols (2007) therefore proposes proficiency tests, describing that "proficiency tests can offer the court a reliable, practical indicator of how often the profession, using accepted procedures, practices, and controls, makes a false identification" (p. 592). Proficiency tests would help determine the error rate of tool mark examination, the very requirement for evidence stated in the Daubert ruling (2007). With the addition of proficiency tests, the subjectivity of examiners will decline and tool mark evaluation will become more reliable.
Conclusion
The foundation of tool mark examination is problematic because it is not scientific, and a scientific foundation strengthens the reliability of pattern evidence used in court. The judicial system should not admit tool mark analysis until a more scientific identification technique is established. The forensic community should start by conducting more research, which could produce a mathematical algorithm for comparing tool marks. Examiners should also determine error rates for tool mark evaluation, which will increase the acceptability of pattern evidence in court. The current foundation rests on subjectivity; quantitative data, including probabilities and error rates, will help tool mark evaluation become more objective. Until then, the forensic community should not use ACE-V to identify tool marks.
References
Bachrach, B., Jain, S., & Koons, R.D. (2010). A statistical validation of the individuality and repeatability of striated tool marks: Screwdrivers and tongue and groove pliers. Journal of Forensic Sciences, 55(2), 348-357.
Baiker, M., Pieterman, R., & Zoon, P. (2015). Tool mark variability and quality depending on the fundamental parameters: Angle of attack, tool mark depth, and substrate material. Forensic Science International, 251(1), 40-49.
Hamby, J. E. (1999). The history of firearm and tool mark identification. The Association of Firearm and Tool Mark Examiners Journal, 31(3).
Kumar, R., Patial, N., & Singh, S. (2013). Identification of tool marks of a sickle on a telephone cable. Journal of Forensic Sciences, 58(1), 217-219.
Page, M., Taylor, J., & Blenkin, M. (2011). Uniqueness in the forensic identification sciences: Fact or fiction? Forensic Science International, 206(1), 12-18.
Nichols, R. G. (2007). Defending the scientific foundations of the firearms and tool mark identification discipline: Responding to recent challenges. Journal of Forensic Sciences, 52(3), 586-594.
Petraco, N. D. K., Shenkin, P., Speir, J., Diaczuk, P., Pizzola, P. A., Gambino, C., & Petraco, N. (2012). Addressing the National Academy of Sciences' challenge: A method for statistical pattern comparison of striated tool marks. Journal of Forensic Sciences, 57(4), 900-911.
US Legal (2015). The Daubert Decision and the Supreme Court’s Construction of Rule 702. Retrieved from http://forensiclaw.uslegal.com/litigation-history-of-forensic-evidence/the-daubert-decision-and-the-supreme-court%E2%80%99s-construction-of-rule-70