Most people involved in scientific research are well aware of the big three ethical lapses: fabrication, falsification, and plagiarism. These acts are considered to have such a large potential for distorting the scientific record that governments, research institutions, and funding bodies generally have formal procedures to investigate incidents, and formal sanctions for those found to have infringed. But the big three are hardly a complete list of all the problems that can produce misleading results; anything from poor record-keeping to sloppy techniques can cause errors to creep into the scientific literature, and there are rarely formal procedures to deal with them.

That doesn't mean they're not dealt with, however. A survey published by Nature has found that researchers regularly engage in informal interventions with colleagues if they suspect that there's any form of misconduct going on—even if they think the problems are inadvertent.

The survey asked about what its authors term "acts that could corrupt the scientific record," and defined them very broadly to include things like "poor supervision of assistants, carelessness, authorship disputes, failure to follow the rules of science, conflicts of interest, incompetence, and hostile work environments that impact on research quality." To get a sense of how these are dealt with, the authors looked up several thousand researchers who had received funding from the National Institutes of Health and asked them to fill out an online survey.

The questions in the survey, as well as the responses of those queried, have been posted in a PDF at the authors' website.

Good news and bad news

The majority of the 2,600 researchers who responded (84 percent in total) had experienced a case where they suspected scientific errors were occurring. The authors ascribe this number, which is much higher than most other estimates, to the loose definition of misconduct that they provided. An alternate explanation might be that the self-selecting group that responded did so in part because they were aware of these issues. The authors omitted the 400 or so who had never noticed misconduct from most of their further analysis.

The good news for the scientific community is that, when researchers became aware of potential problems, they were fairly likely to do something about it. Almost two-thirds reported taking some type of action about the issues they noticed. Of the remainder, most felt either that action was already underway, or that they were too removed from the lab in question to have a good sense of how to intervene.

Over 30 percent of those who acted went straight to the source, and had a discussion with the person they felt was having troubles. Another eight percent sent a message of concern to that individual (90 percent of these were signed), while 16 percent alerted someone in a position of authority about the trouble.

In about 21 percent of the cases where someone chose to intervene, the issue got bumped up to formal proceedings. Some of these may have been the result of denial on the part of the people involved (19 percent of the responses) or cases where the individuals failed to act at all (another 14 percent). Still, there were some good outcomes; in about 30 percent of the cases, the problem was either corrected, or it was recognized that it was too late to do anything about it. One striking number here was that, out of all these instances, only a fraction of a percent turned out to be cases where the concerns were unwarranted.

About equal numbers of those polled expressed satisfaction and dissatisfaction with the results. Over half also felt that the incident had either had no effect on their career, or had even enhanced it. Still, that would seem to leave a lot of individuals who were dissatisfied and suffered some form of negative impact from the event.

There are a lot of interesting details in the numbers, as well. For example, many of those who chose to act did so in part because they considered their institutions unlikely to do anything. Those who were satisfied with the outcomes were also more likely to have been in a situation where the problems were inadvertent.

Overall, there are some promising aspects to these results. Scientists clearly feel that their ethics compel them to intervene in cases where the potential to distort the scientific record doesn't rise to the level of actual fraud. And many of these interventions appear to end in a satisfactory manner. But there are clearly still cases where institutions don't take the issues seriously, and the scientists who try to do the right thing feel that they suffer consequences as a result.

There's no obvious way to force institutions to take scientific errors and misconduct seriously. But the institutions that do so may want to consider the evidence that this informal policing of scientific ethics takes place. Providing support and advice on how to manage these situations, which can easily devolve into conflict, could significantly improve the scientific community's ability to police itself.

Nature, 2010. DOI: 10.1038/466438a