“It has long been an axiom of mine that the little things are infinitely the most important.” So said the fictional detective Sherlock Holmes. Armed with finely honed skills of backwards reasoning, Holmes had a trademark ability to solve seemingly unsolvable crimes, often by revealing evidence too small for anyone else to notice.
Holmes was an inspiration for the very founders of modern-day forensic science. As the decades passed and the tools in their armoury grew, so too did the sheen of invincibility that surrounded their discipline. But there was a crucial chink in their methods that had been overlooked: subjectivity.
While Holmes’s successors in detective fiction may lead us to believe that forensic evidence is based on precise deduction, all too often it relies on a scientist’s personal opinion rather than hard fact.
Science on trial
Consider the following case. In December 2009, Donald Gates walked out of his Arizona prison with $75 and a bus ticket to Ohio. After serving 28 years for a rape and murder he didn’t commit, he was a free man. Now the spotlight began to shift to the forensic technique that put him there: microscopic hair analysis.
Human hair is one of the most common types of evidence found at crime scenes. During the 80s and 90s, forensic analysts in the US and elsewhere often looked to the physical differences between hairs to determine whether those found at a crime scene matched hairs from a suspect – like Donald Gates.
When he stood trial in 1982, an FBI analyst called Michael Malone testified that hairs found on the body of the murder victim – a Georgetown University student called Catherine Schilling – were consistent with Donald Gates’ hairs. He added that the probability they came from anyone else was one in 10,000.
“That’s very compelling evidence, particularly when it comes from a witness wearing a white laboratory coat,” says Peter Neufeld, co-founder of the Innocence Project, a New York-based non-profit organisation that uses DNA evidence to overturn wrongful convictions.
However, hair analysis is not purely objective; I might think two hairs look identical, but you might disagree. Even if we agree that two hairs match, no-one has ever figured out how many other hairs might be similarly indistinguishable from one another. “When a person says that the probability is one-in-10,000, that’s simply a made-up number,” says Neufeld. “There’s no data to support it.”
Donald Gates was finally exonerated when DNA testing revealed that the hairs didn’t belong to him after all. Two similar exonerations followed soon afterwards. As a result of these cases, the FBI is now reviewing several thousand cases in which its scientists may have offered similarly misleading testimony. Last month, it announced that of the 268 cases reviewed so far that went to trial, 96% involved scientifically invalid testimony or other errors by FBI agents. Among those convicted, 33 received death sentences, and nine have already been executed.
The FBI’s review won’t necessarily overturn the convictions, but it does mean they need to be reconsidered carefully. Lawyers scrutinising these cases must work out what other evidence was presented in court; if the convictions hinged on flawed hair testimony, retrials and exonerations may follow. In cases where the original physical evidence still exists, DNA testing may shed new light on the truth.
Damning report
Even trusted lines of evidence, such as fingerprint analysis, are not water-tight. Research has shown that the same fingerprint expert can reach a different conclusion about the same fingerprints depending on the context they’re given about a case.
Based in part on these findings, in 2009 the National Academy of Sciences in the US published a report on the state of forensic science. Commissioned in response to a string of laboratory scandals and miscarriages of justice, its conclusions were damning. “Testimony based on faulty forensic science analyses may have contributed to the wrongful conviction of innocent people,” it said. “In a number of disciplines, forensic science professionals have yet to establish either the validity of their approach or the accuracy of their conclusions.”
The report was a wake-up call, not just for forensic scientists in the US, but around the world. “What it exposed were significant scientific deficiencies across many of the different methods that we use, both to examine and interpret different types of evidence,” says Niamh Nic Daeid, a professor of forensic science at the University of Dundee in Scotland.
Of all lines of forensic evidence, DNA analysis was considered to be the most objective. Resting on complex chemical analysis, it seems stringently scientific – a gold-standard for how forensic science should be done. Yet perhaps juries should not be too quick to trust the DNA analyses they see in court.
In 2010, while working as a reporter for New Scientist magazine, I teamed up with Itiel Dror from University College London, and Greg Hampikian from Boise State University in Idaho, to put this idea of DNA’s objectivity to the test.
We took DNA evidence from a real-life case – a gang-rape in Georgia, US – and presented it to 17 experienced analysts working in the same accredited government lab in the US.
In the original case, two analysts from the Georgia Bureau of Investigation concluded that the man who was ultimately convicted of the crime, Kerry Robinson, “could not be excluded” from the crime scene sample, based on his DNA profile. But when the evidence was shown to our 17 analysts, they reached very different conclusions: just one analyst agreed that Robinson “could not be excluded”. Four said the evidence was inconclusive and 12 said he could be excluded.
Yet just because forensic science is subjective, this doesn’t mean it should be disregarded; it can still yield vital clues that can help to catch and convict murderers, rapists, and other criminals. “Subjectivity isn’t a bad word,” says Dror. “It doesn’t mean that the evidence isn’t reliable, but it is open to bias and contextual influences.”
Blind judgement
What’s needed are additional safeguards to shield forensic examiners against irrelevant information that might skew their judgement. A first step is to ensure they aren’t given irrelevant information, such as knowing that witnesses have placed the suspect at the crime scene, or that he has previous convictions for similar crimes. Another safeguard is to reveal the relevant information sequentially – and only when it is needed. “We need to give them the information that they need to do their job when they need it, but not extra information that’s irrelevant to what they’re doing and which could influence their perception and judgement,” says Dror.
In the US at least, this is starting to happen: a national commission on forensic science has been established, with the goal of strengthening the field – and this includes looking at human factors like cognitive bias. But similar strategies are needed elsewhere if forensic science is to rebuild its tattered reputation.
When it comes to deduction and proof, there is still much we can learn from Arthur Conan Doyle’s hero. As Sherlock Holmes also once said: “Eliminate all other factors, and the one which remains must be the truth.”