Studies have shown that invalid or inaccurate forensic science is a factor in almost half of wrongful convictions. In the approximately 354 cases where DNA later exonerated an Innocence Project client, poor forensic science contributed to nearly half of the underlying convictions.
Poor forensic science gets into criminal cases in several ways. Analysts may commit errors when collecting the evidence or performing tests. Witnesses may present the findings in a misleading or exaggerated way. Or, the technique itself may be scientifically invalid.
The forensic science industry and the justice system have been grappling with this issue for a decade or more, especially since the National Academy of Sciences published a comprehensive 2009 study of the forensic disciplines and found that nearly all of them lacked scientific validation.
The NAS report included comprehensive suggestions to test the validity of each forensic discipline and to standardize the interpretation of unknown samples, reducing the subjectivity of the examiner. The report also recommended that crime labs be made independent of law enforcement and that they employ "blind testing," as is standard in genuine science, to reduce the potential for bias affecting results. The law enforcement community has fought or ignored virtually all of these recommendations.
Take fingerprint analyses, for example. We’ve all been told that every set of fingerprints is unique. Did you know that no actual studies have been done to prove that? The NAS study noted that fingerprint analysis is a subjective field, with no clear standards for how many points of similarity are required to call a match or when the analyst may ignore “points of dissimilarity” between two prints.
Studies show that results are not repeatable from one examiner to another, and that examiners may even disagree with their own past conclusions when presented with the same prints again at a later time. Moreover, there are no population statistics on fingerprints, and the fingerprint analysis field resists developing them despite the NAS's conclusion that such studies are feasible.
Similarly, there is no valid science backing up bite mark analysis. And the FBI has recently admitted that microscopic hair comparison analysis is not scientifically valid.
About a year ago, Attorney General Jeff Sessions and his Department of Justice disbanded the National Commission on Forensic Science, an independent panel of scholars and forensic science experts that had been advising the agency on how to improve scientific standards in the field. Instead, decisions about which techniques are reliable enough to use in court, and how to accurately present them, have been brought in-house and are being made by a smaller, less diverse group of law enforcement officials and prosecutors.
Last month, the Department of Justice issued a new policy it claims addresses questions about the validity and reliability of forensic analysis in the courtroom. It includes a specific set of policies for fingerprint analysis, but none for the other questionable fields, including hair, bite mark, and ballistics/tool mark comparison. The new policy fails to satisfy many concerns about fingerprint analysis, although it does include some new uniform language for reports and testimony, meant to ensure that witnesses accurately describe what each test actually reveals and to what degree of certainty.
That is an improvement over existing practice, in which some analysts falsely claim there is a "zero rate of error," or claim they are "100% certain" the prints came from the suspect and are "unique" to that person. The new policy bars such opinion testimony because it falsely implies the opinion is based on a statistically derived or verified measurement of the world's population.
However, the policy does nothing to standardize how and when an analyst may conclude the prints match. And it does not improve transparency by requiring the analyst to document the reasons for their findings. The 2009 NAS report noted:
At the very least, sufficient documentation is needed to reconstruct the analysis, if necessary. By documenting the relevant information gathered during the analysis, evaluation, and comparison of latent prints and the basis for the conclusion (identification, exclusion, or inconclusive), the examiner will create a transparent record of the method and thereby provide the courts with additional information on which to assess the reliability of the method for a specific case. Currently, there is no requirement for examiners to document which features within a latent print support their reasoning and conclusions.
Importantly, the DOJ’s new policy does nothing to address how forensic science is validated, performed or presented in state-level cases, where approximately 90% of criminal cases are prosecuted.
No defendant should ever be convicted based on invalid or inaccurate scientific testimony. The public's faith in justice cannot be secured until the fields of forensic analysis divorce themselves from their long-entrenched, too-cozy relationship with law enforcement and apply validated, unbiased science to the techniques offered in a court of law. This DOJ policy fails to do that.