Dr. Oransky and Mr. Marcus are partisans who editorialize sharply against poor oversight and vague retraction notices. But their focus on evidence over accusations distinguishes them from watchdog forerunners who sometimes came off as ad hominem cranks. Last year, their site won a $400,000 grant from the John D. and Catherine T. MacArthur Foundation to build out their database, and they plan to work with Dr. Nosek to manage the data side.

Their data already tell a story.

The blog has charted a 20 to 25 percent increase in retractions across some 10,000 medical and science journals in the past five years: 500 to 600 a year today, from 400 in 2010. (The number in 2001 was 40, according to previous research.) The primary causes of this surge are far from clear. The number of papers published is higher than ever, and journals have proliferated, Dr. Oransky and other experts said. New tools for detecting misconduct, like plagiarism-sifting software, are widely available, so there's reason to suspect that the surge is a simple product of better detection and larger volume.

Still, the pressure to publish attention-grabbing findings is stronger than ever, these experts said — and so is the ability to “borrow” and digitally massage data. Retraction Watch’s records suggest that about a third of retractions are because of errors, like tainted samples or mistakes in statistics, and about two-thirds are because of misconduct or suspicions of misconduct.

The most common misconduct-related reason for retraction is image manipulation, usually of figures or diagrams, a form of deliberate data massaging or, in some cases, straight plagiarism. In their dissection of the LaCour-Green paper, the two graduate students — David Broockman, now an assistant professor at Stanford, and Joshua Kalla, at the University of California, Berkeley — found that a central figure in Mr. LaCour's analysis looked nearly identical to one from another study. This and other concerns led Dr. Green, who had not seen any original data, to request a retraction. (Mr. LaCour has denied borrowing anything.)

Data massaging can take many forms. It can mean simply excluding “outliers” — unusually high or low data points — from an analysis to generate findings that more strongly support the hypothesis. It also includes moving the goal posts: that is, mining the data for results first, and then writing the paper as if the experiment had been an attempt to find just those effects. “You have exploratory findings, and you’re pitching them as ‘I knew this all along,’ as confirmatory,” Dr. Nosek said.
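The outlier-exclusion tactic described above is easy to demonstrate. The short Python sketch below uses invented measurements (every number here is hypothetical, chosen only for illustration) to show how quietly dropping inconvenient data points makes an effect look much stronger than the full data support:

```python
# Illustrative only: how selectively dropping "outliers" can inflate a finding.
# All measurements below are invented for this example.
treatment = [2.1, 1.9, 2.3, 2.0, 1.8, 2.2, -3.5, -2.9]  # two inconvenient low values

def mean(xs):
    """Arithmetic mean of a list of numbers."""
    return sum(xs) / len(xs)

# Honest analysis: keep every data point, including the low ones.
full_mean = mean(treatment)

# "Massaged" analysis: quietly discard the points that hurt the hypothesis.
trimmed = [x for x in treatment if x > 0]
trimmed_mean = mean(trimmed)

print(f"full data mean:    {full_mean:.2f}")
print(f"trimmed data mean: {trimmed_mean:.2f}")  # looks far stronger
```

Legitimate outlier removal does exist (for instance, excluding a sample known to be contaminated), but it is declared in advance and reported; the misconduct lies in doing it silently, after seeing the results.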

[Photo caption: The increasing challenges to the veracity of scientists' work gained widespread attention recently when a study by Michael LaCour on the effect of political canvassing on opinions of same-sex marriage was questioned and ultimately retracted.]

The second leading cause is plagiarizing text, followed by republishing — presenting the same results in two or more journals.