Similar results have emerged from other experiments. When Dror gave forensic analysts new background stories about fingerprints they’d already reviewed, such as informing them that a suspect had already confessed, the altered context led two-thirds of analysts to reverse their previous conclusions at least once. In an email, Dror noted that the FBI has replicated this basic finding, showing that in roughly 10 percent of cases examiners reverse their findings even when given the exact same prints. They are consistently inconsistent.

If the flaws of forensics were limited to fingerprints and other forms of evidence requiring visual interpretation, such as bite marks and hair samples, that would still be extremely worrying. (Fingerprints have been a crucial police tool since the French detective Alphonse Bertillon used a bloody print left behind on a pane of glass to secure a murder conviction in 1902.) But Dror and colleagues have shown that these same basic failings can even afflict the gold standard of forensic evidence: DNA.

The experiment went like this: Dror and Greg Hampikian presented DNA evidence from a 2002 Georgia gang rape case to 17 professional DNA examiners working in an accredited government lab. Although the suspect in question (Kerry Robinson) had pleaded not guilty, the forensic analysts in the original case concluded that he could not be excluded based on the genetic data. This testimony, write the scientists, was “critical to the prosecution.”

But was it the best interpretation of the evidence? After all, the DNA gathered from the rape victim was part of a genetic mixture, containing samples from multiple individuals. In such instances, the genetics become increasingly complicated and unclear, making forensic analysts more likely to be swayed by their presumptions and prejudices. And because crime labs are typically part of a police department, these biases are almost always tilted in the direction of the prosecution. For instance, in the Avery case, the analyst who identified the victim’s DNA on a bullet fragment had been explicitly instructed by a detective to find evidence that the victim had been “in his house or his garage.”

To explore the impact of this potentially biasing information, Dror and Hampikian sent the DNA evidence from the Georgia gang rape case to additional examiners. The only difference was that these forensic scientists did the analysis blind: they weren’t told about the grisly crime, the corroborating testimony, or the prior criminal history of the defendants. Of these 17 additional experts, only one concurred with the original conclusion. Twelve directly contradicted the finding presented during the trial, saying that Robinson could be excluded, and four said the sample itself was insufficient.

These inconsistencies are not an indictment of DNA evidence. Genetic data remains, by far, the most reliable form of forensic proof. And yet, when the sample contains biological material from multiple individuals, or when it’s so degraded that it cannot be easily sequenced, or when low numbers of template molecules are amplified, the visual readout provided by the DNA processing software must be actively interpreted by the forensic scientists. They are no longer passive observers – they have become the instrument of analysis, forced to fill in the blanks and make sense of what they see. And that’s when things can go astray.

The errors of forensic analysts can have tragic consequences. In Convicting the Innocent, Brandon Garrett’s investigation of more than 150 wrongful convictions, he found that “in 61 percent of the trials where a forensic analyst testified for the prosecution, the analyst gave invalid testimony.” While these mistakes occurred most frequently with less reliable forms of forensic evidence, such as hair samples, 17 percent of cases involving DNA testing also featured misleading or incorrect evidence. “All of this invalid testimony had something in common,” Garrett writes. “All of it made the forensic evidence seem like stronger evidence of guilt than it really was.”

So what can be done? In a recent article in the Journal of Applied Research in Memory and Cognition, Saul Kassin, Itiel Dror and Jeff Kukucka propose several simple ways to improve the reliability of forensic evidence. While their suggestions might seem obvious, they would represent a radical overhaul of typical forensic procedure. Here are the psychologists’ top five recommendations:

1) Forensic examiners should work in a linear fashion, analyzing the evidence (and documenting their analysis) before they compare it to the evidence taken from the target/suspect. If their initial analysis is later revised, the revisions should be documented and justified.

2) Whenever possible, forensic analysts should be shielded from potentially biasing contextual information from the police and prosecution. Here are the psychologists: “We recommend, as much as possible, that forensic examiners be isolated from undue influences such as direct contact with the investigating officer, the victims and their families, and other irrelevant information—such as whether the suspect had confessed.”

3) When attempting to match evidence from the field to that taken from a target/suspect, forensic analysts should be given multiple samples to test, and not just a single sample taken from the suspect. This recommendation is analogous to the eyewitness lineup, in which eyewitnesses are asked to identify a suspect among a pool of six other individuals. Previous research looking at the use of an “evidence lineup” with hair samples found that introducing additional samples reduced the false positive error rate from 30.4 percent to 3.8 percent.

4) When a second forensic examiner is asked to verify a judgment, the verification should be done blindly. The “verifier” should not be told about the initial conclusion or given the identity of the first examiner.

5) Forensic training should also include lessons in basic psychology relevant to forensic work. Examiners should be introduced to the principles of perception (the mind is not a camera), judgment and decision-making (we are vulnerable to a long list of biases and foibles) and social influence (it’s potent).
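The logic behind the evidence lineup (recommendation 3) can be sketched numerically: a purely bias-driven “match” must now also land on the correct sample by chance, so the lineup mechanically suppresses false positives. A minimal simulation, with an assumed bias rate and lineup size chosen for illustration only (these numbers are not from the hair-sample study):

```python
import random

random.seed(42)

TRIALS = 100_000
P_BIASED_MATCH = 0.3   # assumed chance a biased examiner declares a match
                       # even though the true source is absent
LINEUP_SIZE = 5        # assumed: suspect's sample hidden among four foils

# Single-sample procedure: every bias-driven "match" is a false positive,
# because the only sample on the table belongs to the suspect.
single_fp = sum(random.random() < P_BIASED_MATCH for _ in range(TRIALS))

# Lineup procedure: a bias-driven match only implicates the suspect if the
# examiner also happens to pick the suspect's sample out of the lineup.
lineup_fp = sum(
    random.random() < P_BIASED_MATCH and random.randrange(LINEUP_SIZE) == 0
    for _ in range(TRIALS)
)

print(f"single-sample false positive rate: {single_fp / TRIALS:.3f}")
print(f"lineup false positive rate:        {lineup_fp / TRIALS:.3f}")
```

Under these assumptions the lineup cuts the false-positive rate by roughly the lineup size, which is the same direction of effect the hair-sample study reported.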

The good news is that change is occurring, albeit at a slow pace. Many major police forces – including the NYPD, SFPD, and FBI – have started introducing these psychological concepts to their forensic examiners. In addition, leading forensic organizations, such as the US National Commission on Forensic Science, have endorsed Dror’s work and recommendations.

But fixing the practice of forensics isn’t enough: Kassin, Dror and Kukucka also recommend changes to the way scientific evidence is treated in the courtroom. “We believe it is important that legal decision makers be educated with regard to the procedures by which forensic examiners reached their conclusions and the information that was available to them at that time,” they write. The psychologists also call for a reconsideration of the “harmless error doctrine,” which holds that trial errors can be tolerated provided they aren’t sufficient to reverse the guilty verdict. Kassin, Dror and Kukucka point out that this doctrine assumes that all evidence is analyzed independently. Unfortunately, such independence is often compromised, as a false confession or other erroneous “facts” can easily influence the forensic analysis. (This is a possible issue in the Avery case, as Brendan Dassey’s confession – which contains clearly false elements and was elicited using very troubling police techniques – might have tainted conclusions about the other evidence. I've written about the science of false confessions here.) And so error begets error; our beliefs become a kind of blindness.

It’s important to stress that, in most instances, these failures of forensics don't require intentionality. When Strang observes that injustice is not necessarily driven by malice, he's pointing out all the sly and subtle ways that the mind can trick itself, slouching towards deceit while convinced it's pursuing the truth. These failures are part of life, a basic feature of human nature, but when they occur in the courtroom the stakes are too great to ignore. One man’s slip can take away another man’s freedom.

Dror, Itiel E., David Charlton, and Ailsa E. Péron. "Contextual information renders experts vulnerable to making erroneous identifications." Forensic Science International 156.1 (2006): 74-78.

Ulery, Bradford T., et al. "Repeatability and reproducibility of decisions by latent fingerprint examiners." PLoS ONE 7.3 (2012): e32800.

Dror, Itiel E., and Greg Hampikian. "Subjectivity and bias in forensic DNA mixture interpretation." Science & Justice 51.4 (2011): 204-208.

Kassin, Saul M., Itiel E. Dror, and Jeff Kukucka. "The forensic confirmation bias: Problems, perspectives, and proposed solutions." Journal of Applied Research in Memory and Cognition 2.1 (2013): 42-52.