Its authors boast that it is one of the ten most downloaded papers from the British Medical Journal (BMJ). That makes it even more unfortunate that the conclusions of the paper are directly at odds with its findings. Outcomes of planned home births with certified professional midwives: large prospective study in North America, by Kenneth Johnson and Betty-Anne Daviss, is the premier paper on the safety of American homebirth. It claims to show that homebirth is as safe as hospital birth, but it actually shows that homebirth has nearly triple the neonatal death rate of hospital birth for comparable risk women.

Johnson and Daviss, in collaboration with the Midwives Alliance of North America (MANA), the organization of American homebirth midwives, collected data on all homebirths attended by Certified Professional Midwives (CPMs, who are homebirth midwives, as distinct from Certified Nurse Midwives, or CNMs) in the year 2000. The authors then compared intervention rates and neonatal death rates with those of a hospital group.

According to Johnson and Daviss, in comparing intervention rates for home and hospital birth:

We compared medical intervention rates for the planned home births with data from birth certificates for all 3 360 868 singleton, vertex births at 37 weeks or more gestation in the United States in 2000, as reported by the National Center for Health Statistics [Births: final data for 2000. National vital statistics reports. Martin JA, Hamilton BE, Ventura SJ, Menacker F, Park MM. Hyattsville, MD: National Center for Health Statistics, 2002;50(5)]

They used singleton, vertex births at 37+ weeks as a proxy for low risk women. They found, not surprisingly, that intervention rates are lower for homebirth. Then they turned to neonatal mortality rates. They should have compared the neonatal mortality rate of the homebirth group to the neonatal mortality rate of the hospital birth group, but they did not. Instead, they compared homebirth deaths to hospital death rates drawn from a variety of out-of-date studies extending back more than 20 years.

The authors conclude:

Planned home birth for low risk women in North America using certified professional midwives was associated with lower rates of medical intervention but similar intrapartum and neonatal mortality to that of low risk hospital births in the United States.

But the authors never compared mortality rates to low risk hospital birth in 2000, because that would have led to a very different conclusion. Using the same dataset that Johnson and Daviss used, the hospital neonatal death rate for white babies at 37+ weeks was 0.9/1000. This is not corrected for congenital anomalies, pre-existing medical conditions, pregnancy complications or multiple births. The neonatal mortality rate for white, singleton babies at 37+ weeks was 0.72/1000, and the true low risk rate is substantially lower still. Nonetheless, we can make an important comparison. Johnson and Daviss reported a neonatal death rate at homebirth of 2.7/1000 (uncorrected for congenital anomalies, breech or twins). In other words, the neonatal death rate of CPM attended homebirths in 2000 was nearly triple the rate for low to moderate risk hospital births in 2000.
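The arithmetic behind the "nearly triple" claim is simple relative risk. A minimal sketch, using only the rates quoted above (the variable names are mine, for illustration):

```python
# Neonatal death rates per 1,000 live births, as quoted above
homebirth_rate = 2.7   # CPM-attended homebirth in 2000 (Johnson and Daviss)
hospital_rate = 0.9    # hospital birth, white babies at 37+ weeks, 2000 (uncorrected)

# Relative risk: how many times higher the homebirth rate is
relative_risk = homebirth_rate / hospital_rate
print(f"Relative risk: {relative_risk:.1f}")  # → 3.0
```

Against the stricter 0.72/1000 comparison group (white, singleton, 37+ weeks), the same division yields a relative risk closer to 3.8, so "nearly triple" is, if anything, conservative.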

Simply put, the authors pulled a bait and switch. They claim to be comparing homebirth in 2000 with hospital birth in 2000. They did compare intervention rates for homebirth in 2000 with hospital birth in 2000, but when it came to neonatal deaths, they used data extending back to 1969. It was the only way to make homebirth look safe by comparison.

Why would the authors deliberately deceive readers? It turns out that Johnson and Daviss are not impartial researchers, though you would not know that from reading the paper. Johnson is the former Director of Research for the Midwives Alliance of North America (MANA) Statistics and Research Committee. Daviss, his wife, is a homebirth midwife. The paper does acknowledge that the study was funded by the Foundation for the Advancement of Midwifery, a homebirth advocacy group.

Johnson and Daviss have created a website, Understanding Birth Better, to answer criticism. However, their explanation for the bait and switch is not merely disingenuous, it is an outright lie.

… Since our article was submitted for publication in 2004, the NIH has published analysis more closely comparable than was available at that time, and some have tried to use it as a comparison. While we still do not offer the comparison as a completely direct one, … it is the closest we have …

As they say in politics, it’s not the crime, but the cover up. Johnson and Daviss acknowledge that they used the wrong group for comparison with homebirth, but claim that the correct data was not available at the time. That is flat-out false. The relevant data was published in 2002, long before their paper was submitted (Infant Mortality Statistics from the 2000 Period Linked Birth/Infant Death Data Set, published August 29, 2002). Moreover, even before publication of that analysis, Johnson and Daviss had the raw data in their possession. They used that same raw data from 2000 to calculate the rates of hospital interventions, so they were fully aware of the mortality data at all times.

It is difficult to imagine a legitimate reason why professional statisticians would deliberately use the wrong statistics for comparison when the right statistics were available and actually in their possession. It seems to me that the only possible explanation is that they knew all along that their study showed that homebirth has an increased risk of preventable neonatal death compared to hospital birth.

Regardless of reasoning or excuses, the bottom line is stark: rather than showing that homebirth with an American homebirth midwife is safe, the Johnson and Daviss study actually showed that homebirth with a CPM in 2000 had nearly triple the neonatal death rate of moderate to low risk hospital birth in 2000.