Two murders rocked Noxubee County, Mississippi, in the early 1990s. In each case, a young girl was abducted from her home, raped, murdered, and then dumped in a nearby body of water. Although the cases were startlingly similar, a different man was accused of each crime. Even though he had an alibi, Levon Brooks was pegged for killing 3-year-old Courtney Smith, based on the fact that he’d previously dated Smith’s mother. Kennedy Brewer was charged with the murder of 3-year-old Christine Jackson, whose mother he was dating.

The state’s case against each man was based on the findings of Steven Hayne, Mississippi’s de facto medical examiner, and Michael West, a dentist and self-styled bite mark expert. In each case, Hayne identified what he believed to be bite marks on the victim’s body and referred the evidence to West. This was not unusual. For decades, Hayne and West worked in tandem as the go-to experts for police and prosecutors in Mississippi.

The doctors were unequivocal in court about the medical evidence. At Brooks’s trial, West told the jury that “it could be no one but Levon Brooks that bit this girl’s arm.” In Brewer’s case, West pulled out one of his signature lines, saying that marks found on the victim’s body were “indeed and without a doubt” made by Kennedy Brewer. Brooks was sentenced to life in prison. Brewer was sent to death row.

But Hayne and West were wrong. Brooks and Brewer were innocent. Instead, a man named Justin Johnson, who had a history of committing similar crimes, was responsible for both murders. Eventually, DNA would tie him to the murder of Jackson and he would confess to killing Smith. He denied biting either victim.

Brooks and Brewer were wrongly convicted based on questionable bite mark “science,” which has been implicated in more than two dozen wrongful convictions or indictments nationwide. Perhaps no two people in the country better represent the dangers of junk forensics than West and Hayne, who turned themselves into jack-of-all-trades experts and made extraordinary amounts of money by dominating the state’s forensic death investigation system — Hayne did three-quarters of the autopsies in Mississippi, an impossible workload. The pair sent countless people to prison, and, in some cases, allowed killers like Johnson to go unpunished for years.


In their new book, “The Cadaver King and the Country Dentist,” journalist Radley Balko and lawyer Tucker Carrington, director of the Innocence Project at the University of Mississippi School of Law, explore in rich and exhaustively reported detail how the criminal justice system has failed people like Brooks and Brewer, and how it encouraged an environment in which Hayne and West essentially operated unchecked. But what happened in Mississippi is not unique. Forensics scandals have erupted across the country, while reports like those from the National Academy of Sciences in 2009 and the President’s Council of Advisors on Science and Technology in 2016 raised serious concerns about the validity and reliability of a host of forensics practices used for decades to send people to prison. Few, if any, of those concerns have been addressed; the Trump administration has basically turned its back on forensics reform, while courts across the country continue to allow questionable forensics into evidence. In an interview with The Intercept, Balko and Carrington explained why certain forensics are little more than junk science, how the courts vet supposedly scientific evidence, and how Hayne and West navigated this system to their benefit, leaving a devastating legacy in their wake.

I figure we can start out with the basics. What is bite mark matching?

Radley Balko: Bite mark analysis is when a forensic analyst looks at bite marks on a body — either the body itself or sometimes even in photos — and attempts to match them to a dental mold taken of a suspect. But it rests on two flawed premises. The first is that human dentition is unique — that we all leave a different kind of bite. There’s just no scientific research to back that up. And the second premise is that human skin is capable of recording bites in a way that preserves the kind of detail that can distinguish one person from another. Not only is there no scientific research to back that up, the research that has been done suggests that this isn’t the case. And, if you think about it, it’s pretty intuitive. Human skin is soft, spongy; people start healing immediately after a wound is inflicted; and people heal at different rates. And in two cases we write about in the book, the little girls were exposed to the elements, they were submerged in water. One of them had been embalmed. So the idea that you could find these tiny, almost microscopic details in these wounds that you could trace to small little facets of someone’s teeth, to the exclusion of everyone else, is just absurd.

Bite mark analysis is just one of the pattern-matching forensic practices. Tell me how they relate to each other in terms of whether there is any real science underpinning them.

RB: Pattern matching in general is just what it sounds like. It’s where an analyst will look at evidence from a crime, or an alleged crime, and then look at another piece of evidence that ties that crime to a suspect. So it could be hair fibers found at the crime scene versus hair taken from a suspect, or carpet fibers from a crime scene compared to carpet fibers found on the shoes of a suspect. And they basically just eyeball it. There’s no calculation to be done, there’s no margin for error. It’s entirely subjective. In the pattern-matching disciplines, you regularly get defense and prosecution experts who give contradictory testimony. And then it boils down to who’s better at persuading juries. Who’s more charismatic, who’s more persuasive. And usually, the skill set it takes to be persuasive on a witness stand isn’t the same skill set it takes to be a sound, scientific analyst. And that’s really part of the problem.

What is the standard for deciding whether this stuff is allowed into evidence? How is it supposed to work, and how does it work in practice?

Tucker Carrington: There was a case out of the D.C. Court of Appeals called Frye, whose standard governed for the better part of the 20th century. Essentially, Frye established the “general acceptance” test: if a scientific discipline or expert opinion was generally accepted in a given field, then it was presumptively admissible. In the 1990s, a case by the name of Daubert v. Merrell Dow Pharmaceuticals was heard in the U.S. Supreme Court, at about the same time that the federal rules of evidence were changed — Rule 702, which is the rule for admission of expert testimony. The feeling was that the “general acceptance” test could be too narrow. There were theories which, for all kinds of reasons, hadn’t gained general acceptance but were nonetheless valid opinions. They had scientific bases. And so the court in Daubert essentially tracked and validated the recently adopted Rule 702 and said, we’re rejecting the Frye test; the evidence doesn’t have to be generally accepted, but it does have to meet a nonexhaustive set of criteria [e.g., whether the theory or technique had been and could be tested; whether it had been subjected to peer review; whether there was a known error rate]. And then what the Supreme Court did — and this is the term of art that people know — was make trial judges “gatekeepers” of this type of evidence. It was the trial judge who had to sit and listen to both sides argue for the admissibility or inadmissibility of the evidence. If the evidence met the listed nonexhaustive criteria, it would be admissible, and if not, not. So, in a nutshell, that is the standard. There are a few states that still use the Frye standard, but most states and the federal courts use the Daubert v. Merrell Dow standard.

Is it working to keep junk science out of court?

RB: It isn’t working at all. It does sometimes work in civil cases, where both sides tend to be pretty well funded and judges will sometimes even hire a person to educate the court on given issues. In criminal cases, you just don’t see the same level of skepticism. And it’s entirely predictable, right? We’re asking judges, who are trained to do legal analysis and have done it their entire lives, to now suddenly do scientific analysis. And when it comes to the scientific analysis, they’re doing it exactly like you would expect judges to: They look at precedent. And that isn’t how science works, right? Science is always questioning the past and revising theories and changing things based on new knowledge. Whereas the courts strive for consistency and finality, so they’re always looking to what other courts have done, partly just as an ass-covering measure. If you’re a judge and you don’t really know the science very well, an easy thing to do is look to see what other judges have done, because if other judges have done it too, nobody’s gonna call you out for doing it. And when a particular brand of forensics isn’t scientific, there’s a strong incentive for a judge not to call it out, because doing so means going out on a limb.
TC: I think one dynamic that should be highlighted, because it occurred in the Brooks and Brewer cases and, frankly, in almost every bite mark case I’ve read, is that when the prosecution announces that it has a bite mark expert, to the extent that there’s a reaction, the defense reflexively goes out and hires its own bite mark expert. The problem there is they’ve already missed the boat, because when you allow a state expert to come in and offer up testimony, and then you allow the defense to do the same, you’ve validated the discipline. And as you know, the problem with some of these disciplines is they are not valid.

What should be happening is, if the state announces it has an expert in whatever discipline, the defense says, I’m not gonna hire a bite mark expert, because even my bite mark expert, if he or she wants to continue testifying, is gonna say, “Oh, no, this is a valid discipline! It just turns out that other person’s opinion is wrong.” No, what you need to do is hire someone at a university who teaches the history of science. And they come in and say, here are the fundamental constructs of a valid science: error rate, testability, reproducibility, peer review, et cetera, whatever it happens to be. And this discipline doesn’t meet them.

Authors Radley Balko, left, and Tucker Carrington. Left: Hachette Book Group. Right: Kevin Bain via Hachette.