Every week science journalists get a bunch of emails from various Respectable Scientific Journals telling us, in advance, what articles those journals are going to publish. When I started in this game, these tables of contents came by fax; today, in the future, they're downloadable PDFs. The quo for all this quid is that we agree not to publish anything until a set time and day.

It’s called an embargo, and it is in some senses the anticlimax of a long story—the story of a scientific discovery. Sure, journalists might focus on the eureka moment or the fascinating details of the methods some scientist used. Massive gravity interferometers! Drilling into Earth’s crust! Robot spaceship studies a comet! But often, implicit in these kinds of stories is a less pulse-pounding headline: Article Published.

That doesn't mean it's not news, or not important, or that it's wrong. No! Quite the opposite. These are the atoms from which we humans assemble molecules of understanding. A peer-reviewed journal article is the way scientists say “we found out a thing,” and perhaps more critically, “here’s our data and our methods so you can see why we think it’s true.” “Peer review” means that experts have read that article, commented on it, and assented to its publication.

But that said, the rigamarole around scientific publishing—from submitting to a journal, to having relevant scientists review and approve the work, to publishing on a set day—is a social construction. This is the plodding, collaborative-but-combative dynamic that turns the labor of science into, well, Science. And Cell, Nature, the New England Journal of Medicine, and thousands of other journals.

I bring all this up because earlier this week I got advance word about an article describing, ironically, how this entire system is crumbling at the edges. It was embargoed for Wednesday morning, which means I missed it. It made the whooshing sound that Douglas Adams onomatopoetically ascribed to deadlines.

If you believe this new paper, though, that's totally OK. In 1990 physicists began sharing drafts of their articles before publication and peer review; as the internet expanded, so too did this server for “preprints,” called the arXiv. (That’s not an X. It’s the Greek letter chi, pronounced “kai.” Get it?) Today the arXiv hosts more than 1.3 million papers in physics, math, astronomy, and other hard sciences. In 2013, the life sciences got preprinty too, when Cold Spring Harbor Lab started hosting the bioRxiv. (Say “bio-archive”; not my fault.) Since then, prepublication sharing of articles has taken off like a jet racing for altitude over a storm.

But not for everyone. Anecdotally, researchers have understood for years that scientists in some fields were more likely than others to share their results before publication—at conferences, socially, and via preprint servers. No one really knew why, or who.

The paper I got an email about on Sunday (but am only allowed to tell you about as of today) describes the results of a survey of more than 7,000 working research scientists from nine major fields. According to that survey, three core features of a given scientific discipline determine whether its adherents are likely to put all their data on a slide at a conference, or post it on a preprint server: norms within the field (that is, the traditions passed on by colleagues and teachers), the overall level of competitiveness in the field, and the potential for commercialization of new results.

The stakes of sharing are complicated. On the plus side, you get potential collaborators, and people who can extend your work. On the minus side, they might scoop you—solving the problem you’ve brought up before you can, and thereby grabbing all the kudos, grants, Nobel prizes, and so on. “One can’t clearly say whether prepublication disclosure is good or bad,” says Jerry Thursby, an economist at Georgia Tech and one of the authors of the study. “If you reduce the size of the prize, people don’t work as hard, but you want people disclosing early so others can build upon that.”