Thursday, October 27, 2011

Forensic technique that was 'judicially accepted for decades' called 'highly unreliable'

Having recently examined advice being given to judges on how to interpret the science, or lack thereof, behind ballistics evidence, I thought I'd continue in that vein with a discussion of "microscopic hair analysis" from the same source (see the online version here, beginning on p. 112). Both analyses are drawn from the third edition of the "Reference Manual on Scientific Evidence," produced by the Federal Judicial Center and the National Research Council of the National Academies. While microscopic hair evidence has been "judicially accepted for decades," says the manual, you can add it to the list as "another forensic identification discipline that is being reappraised today."

The 2009 NRC-NAS report contained an assessment of hair analysis, says the manual, "observing that there are neither 'scientifically accepted [population] frequency' statistics for various hair characteristics nor 'uniform standards on the number of features which must agree before an examiner may declare a "match."'" The report concluded that "testimony linking microscopic hair analysis with particular defendants is highly unreliable," recommending DNA testing of the evidence where practical.

Hair analysis is better at excluding suspects than individuating them: e.g., an examiner can distinguish straight blonde hair from the curly hair of an African American, tell whether hair has been dyed, etc. But even the best estimates of the technique's accuracy put the probability of a false match at 1 in 4,500 for scalp hair and 1 in 800 for pubic hair. Other proficiency studies have found much higher "false positive" rates, sometimes above 12%. Even more damning, an examination of the first 137 DNA exonerations found that 38% involved invalid hair comparison testimony, with most of those cases including "invalid individualizing claims."
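To put those error rates in perspective, here's a minimal sketch of how per-comparison false-match rates compound when many comparisons are run. The rates are the figures quoted above; the comparison counts are hypothetical:

```python
# Sketch: how per-comparison false-match rates compound across repeated
# comparisons. The rates are the estimates quoted above; the comparison
# counts are hypothetical illustrations.

def p_at_least_one_false_match(rate, comparisons):
    """Probability of at least one false match, assuming independent comparisons."""
    return 1 - (1 - rate) ** comparisons

for label, rate in [("scalp hair (1 in 4,500)", 1 / 4500),
                    ("pubic hair (1 in 800)", 1 / 800),
                    ("proficiency-study rate (12%)", 0.12)]:
    for n in (1, 100, 1000):
        p = p_at_least_one_false_match(rate, n)
        print(f"{label}, {n} comparisons: {p:.1%} chance of a false match")
```

Even taking the most generous estimate at face value, a lab running hundreds of comparisons should expect false matches as a matter of course.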

In the courtroom, prior to the US Supreme Court's Daubert opinion in 1993, "an overwhelming majority of courts accepted expert testimony that hair samples are microscopically indistinguishable." However, a 1990 decision in North Carolina held it was error to admit testimony that "it would be improbable that these hairs would have originated from another individual." The court held that such testimony amounted "effectively to positive identification of the defendant."

The first significant post-Daubert challenge to such evidence came in Williamson v. Reynolds out of Oklahoma in 1995, where a district court was "unsuccessful in its attempt to locate any indication that expert hair comparison testimony meets any of the requirements of Daubert." Before retrial, that defendant was exonerated by exculpatory DNA evidence.

The section of the manual on microscopic hair analysis concludes:
Post-Daubert, many cases have continued to admit testimony about microscopic hair analysis. In 1999, one state court judicially noticed the reliability of hair evidence, implicitly finding this evidence to be not only admissible but also based on a technique of indisputable validity. In contrast, a Missouri court reasoned that, without the benefit of population frequency data, an expert overreached in opining "to a reasonable degree of certainty that the unidentified hairs were in fact from" the defendant. The NRC report commented that there appears to be growing judicial support for the view that "testimony linking microscopic hair analysis with particular defendants is highly unreliable."
RELATED: Go here to read the manual online or purchase a hardcopy. See also: Judges cautioned against reliance on overstated ballistics testimony.

4 comments:

Anonymous said...

Within the scientific community, it has always been recognized that microscopic hair comparisons cannot reliably be used to individualize hair samples. It is an exclusionary tool, and analysts have routinely testified to that.

The crux of the problem with hair analysis has always been how lawyers misportray the results and conclusions. There can be a world of difference between a prosecutor's summation and the actual reports and testimony of the experts.

As an exclusionary tool, microscopic hair comparisons are excellent. It is unfortunate that over the past decade there has been a movement away from doing microscopic hair comparisons at all, and towards going straight to DNA testing of hair. That is all well and good if a hair has a good root that gives a good STR profile. But if the hair can only be profiled with mitochondrial DNA then it is possible that there could be a mtDNA match between two hair samples from different people that could have been excluded if there had been a microscopic examination. The most common mtDNA profile is shared by 7-8% of people, and many of those people have microscopically dissimilar hairs. Because mtDNA testing is destructive, there may be no opportunity for microscopic comparisons after the fact.
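The commenter's point about mtDNA can be made concrete with a quick back-of-the-envelope calculation. The 7-8% figure is the one quoted above; the suspect-pool sizes are hypothetical:

```python
# Sketch: chance of a coincidental mtDNA "match" when the evidence hair
# carries the most common profile, assumed (per the comment above) to be
# shared by roughly 7-8% of the population.
f = 0.075  # assumed population frequency of the most common mtDNA profile

# Probability that at least one of n unrelated people shares the profile
# purely by chance (independence assumed).
for n in (1, 10, 50):
    print(f"pool of {n} people: {1 - (1 - f) ** n:.0%} chance of a coincidental match")
```

With a pool of even ten unrelated people, the odds of a coincidental match on the most common profile exceed 50%, which is exactly the gap a prior microscopic examination could have closed.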

Anonymous said...

Question: What do hair analysis, dog-scent lineups and polygraphs all have in common in the Texas court system? Answer: While all three have been shown to be quackery, DAs and judges still use them and starry-eyed jurors believe in them.

On another note...Grits, where are all of the Halloween / Sex Offender bullsh** and hype stories this year? Those are always a hoot! I like reading about how everyone likes to get tough on sex offenders this time of the year, for no apparent reason other than to do some grandstanding for themselves.

Anonymous said...

This comment isn't really for the scientific community but rather is aimed at certain programs and procedures which have been developed by NHTSA. The Horizontal Gaze Nystagmus (HGN) test used for determining whether someone is potentially intoxicated is the biggest bunch of crap to come down the pike in a long time. Someone should really investigate and analyze whether this so-called test is legitimate or not.

Anonymous said...

06:29 - I'm a scientist, but I'm not up on the research behind the HGN test. Thinking about HGN like any other diagnostic test, though, there are two errors associated with it: a false positive error and a false negative error. Those error rates are never zero, and they compete against each other. If you want a very low occurrence of false positives (testing positive when the driver is not intoxicated), then you will have to accept a high occurrence of false negatives (testing negative when the driver is intoxicated).
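Here's a minimal sketch of that tradeoff. The "test score" distributions below are invented for illustration; they are not real HGN data:

```python
import random

# Sketch of the false-positive / false-negative tradeoff: as the decision
# threshold rises, false positives fall while false negatives rise. The
# score distributions are invented for illustration, NOT real HGN data.
random.seed(1)
sober = [random.gauss(2.0, 1.0) for _ in range(100_000)]
impaired = [random.gauss(4.0, 1.0) for _ in range(100_000)]

for threshold in (2.5, 3.0, 3.5, 4.0):
    fp = sum(s >= threshold for s in sober) / len(sober)       # sober flagged as impaired
    fn = sum(s < threshold for s in impaired) / len(impaired)  # impaired passed as sober
    print(f"threshold {threshold}: {fp:.1%} false positives, {fn:.1%} false negatives")
```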

The question that needs to be asked at the front end is: what are the acceptable levels of false positives and false negatives? That depends on who is deciding.

From the perspective of the non-intoxicated driver being tested, what is wanted is a very low false positive error rate. But what is good enough? Is 1-in-100 good enough? Or 1-in-1,000?

From the perspective of the intoxicated driver being tested, what is wanted is a high false negative error rate. But again, what is good enough?

From the perspective of a parent driving around town with a car full of children, what is wanted is a very low false negative error rate. But still again, what is good enough?

What constitutes "good enough" is never an issue of science. It is an issue of public policy and personal preference.
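And one more wrinkle: even a seemingly low false positive rate can produce a lot of wrongly flagged drivers, because most drivers tested are sober. A quick sketch, with the prevalence and stop counts assumed purely for illustration:

```python
# Sketch: why even a "1-in-100" false positive rate matters when most
# tested drivers are sober. All numbers here are assumed for illustration.
fp_rate = 0.01      # assumed false positive rate (1 in 100)
fn_rate = 0.10      # assumed false negative rate
prevalence = 0.05   # assumed share of tested drivers actually intoxicated
stops = 10_000      # hypothetical number of roadside tests

impaired = stops * prevalence
sober = stops - impaired
true_pos = impaired * (1 - fn_rate)    # intoxicated drivers correctly flagged
false_pos = sober * fp_rate            # sober drivers wrongly flagged

share = false_pos / (true_pos + false_pos)
print(f"{true_pos + false_pos:.0f} positives, {false_pos:.0f} of them sober ({share:.0%})")
```

Under those assumptions, nearly one in five drivers who "fail" the test would actually be sober.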