The largest ever database of scientific retractions just went live, and it reveals a promising trend: More and more studies are being pulled from the scientific record.

This is a great thing for science.

A retraction means a journal no longer stands by one of its articles. The process can be initiated by a journal or study author after problems are detected; it typically involves some kind of investigation followed by a statement explaining why claims in the article are being withdrawn or reversed (though some journals are more forthcoming with details than others).

Journal editors don’t take this process lightly and generally issue retractions rarely — even though there’s a growing recognition that published studies are often flawed, sloppy, or downright fraudulent.

Still, an in-depth analysis of the retraction database entries, published in the journal Science, suggests the retraction situation is improving. Before 2000, there were fewer than 100 retractions per year; now there are about 1,000.

“It’s clear the reason we’re seeing more retractions is because a lot more journals want to take this stuff seriously,” argues Ivan Oransky, a doctor, journalist, and professor at New York University who co-created the database with Adam Marcus as a spinoff of their retraction news source, Retraction Watch.

The database brings together more than 18,000 retracted papers and conference abstracts, going as far back as the 1970s. So anyone can now search by author, country, journal, and a bunch of other metrics to see where and how science has gone wrong.

But even though things are getting better, there’s still a lot of room for improvement. “Most of the 12,000 journals that are also cross-indexed in Web of Science have never retracted,” Oransky found. “So most journals haven’t retracted papers.”

Here are a few more surprising findings from the database:

A significant portion of retractions has nothing to do with scientific misconduct

The number of retractions is growing: “In 1997, just 44 journals reported retracting a paper. By 2016, that number had grown more than 10-fold, to 488,” according to Science. However, the analysis found the rate of increase in retractions has also slowed, and there’s a good explanation for that: “In part, that trend reflects a rising denominator: The total number of scientific papers published annually more than doubled from 2003 to 2016.”

Nearly half of retracted papers involved errors or problems with reproducibility — not fraud: The database has a detailed taxonomy of reasons for retractions, and nearly 40 percent of retraction notices in the database did not cite fraud or any misconduct. “Instead, the papers were retracted because of errors, problems with reproducibility, and other issues.”

China and the US have the most retractions: But they also publish the most papers. Interestingly, Romania has a disproportionate number of retractions, Oransky said, because “small bands of paper watchers” who picked apart data in published papers made 100 retractions happen.

Journals with high impact factors seem to be doing more retracting: Impact factor is a way to assess a journal’s influence by looking at the average number of citations its articles have recently attracted. It turns out more highly ranked journals are doing more retracting. Case in point: The New England Journal of Medicine, one of the most prestigious medical journals, had 30 entries in the database. Another top-tier journal, Nature Medicine, had 13. The lower-ranked medical journals EBioMedicine and Oncologist had one and zero entries, respectively.

Retractions remain rare: Only four of every 10,000 papers are now retracted, according to Science’s analysis. “And although the rate roughly doubled from 2003 to 2009, it has remained level since 2012.” That’s in part because more papers are getting published every year, so the denominator keeps growing — but also because journals still issue retractions reluctantly.
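For readers unfamiliar with the metric mentioned above, the standard two-year impact factor can be written out directly: citations received in a given year to a journal’s articles from the previous two years, divided by the number of citable articles it published in those two years. A minimal sketch (the figures below are hypothetical, purely for illustration):

```python
def impact_factor(citations_this_year: int, citable_articles_prev_two_years: int) -> float:
    """Two-year impact factor: citations in year Y to items published in
    years Y-1 and Y-2, divided by citable items published in Y-1 and Y-2."""
    return citations_this_year / citable_articles_prev_two_years

# Hypothetical journal: 700 citations this year to 100 articles
# published over the previous two years.
print(impact_factor(700, 100))  # 7.0
```

A journal with an impact factor of 7 is highly ranked in most medical fields, which is why the comparison between the New England Journal of Medicine and lower-ranked titles is meaningful.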

Stigma is one reason we’re not seeing even more retractions

So why are retractions still rare?

The analysis in Science pins it down to stigma for journals and scientists: “Because a retraction is often considered an indication of wrongdoing, many researchers are understandably sensitive when one of their papers is questioned. That stigma, however, might be leading to practices that undermine efforts to protect the integrity of the scientific literature.”

Journals may be hesitant to pull some of their most revered work, or go after scientists who publish their most frequently cited studies. That’s especially true in smaller countries with cozy networks of scientists.

Retractions also remain difficult to spot; journals and science databases don’t always promote or cross-reference them in ways that anyone coming across a study can see.

But perhaps this is where the retraction database can help: it’s easily searchable, so anyone can use it to cross-reference a paper they’re interested in. It also may help normalize retractions so they are seen as part of the scientific process instead of something to be ashamed of.

“Science is not broken,” Oransky said. “The question is whether the science correction mechanism process is as robust as everybody wants it to be. It’s still not, but we are seeing some signs of improvement.”