“Big teams take the current frontier and exploit it,” Evans says. “They wring the towel. They get that last ounce of possibility out of yesterday’s ideas, faster than anyone else. But small teams fuel the future, generating ideas that, if they succeed, will be the source of big-team development.”


That “runs counter to the usual thinking that large teams, which are typically better funded and work on more visible topics, are the ones that push the frontiers of science,” says Staša Milojević, who studies information metrics in science at Indiana University Bloomington. She recently found a similar pattern by analyzing the titles of 20 million scientific papers and showing that bigger teams work on a relatively small slice of topics in a field. Other scientists have made similar points, but what Evans describes as a “Go teams!” attitude still persists. The results of the new analysis should “temper some of that enthusiasm for large teams and demonstrate that there may be a tipping point after which their benefits decline,” says Erin Leahey from the University of Arizona, who has previously written about the “overlooked costs of collaboration.”

The new analysis is based on the ways in which researchers cite past work. For example, when scientists cite Einstein’s groundbreaking 1915 papers on general relativity, they tend not to refer back to the papers that Einstein himself cited. “They see it as a conceptually new direction that’s distinct from the things on which it built,” Evans says. But if scientists “think that something is an incremental improvement, they’ll tell the whole story in the references.” For example, a 1995 paper describing a long-theorized state of matter called a Bose–Einstein condensate is almost always cited together with the papers in which the physicist Satyendra Nath Bose and Einstein predicted the stuff’s existence.

Wu quantified these differences using a “disruption score,” originally created by other researchers to measure the innovativeness of inventions. Wu showed that it works well for scientific research. When ranked by their scores, papers that describe Nobel Prize–winning work appeared in the top 2 percent, as did those chosen by scientists who were asked to name the most disruptive papers in their field. Reviews that summarize earlier work are in the bottom half of the rankings, while the original studies they’re based on appear in the top quarter. It’s a “simple yet brilliant” method, especially because it works across data sources as diverse as papers, patents, and software, says Satyam Mukherjee of the Indian Institutes of Management.
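The citation logic behind such a score can be sketched in a few lines. The function below is a simplified, illustrative version of a disruption index in the spirit of the one Funk and Owen-Smith devised for patents, not the authors' exact implementation; the paper IDs and citation sets are invented for the example. The idea: count later papers that cite the focal work alone (disruptive), those that cite it together with its references (developing), and those that cite only its references, then take a normalized difference.

```python
# Illustrative sketch of a citation-based "disruption score" (a simplified
# CD-index-style measure; not the study's exact formula). All paper IDs
# and citation sets below are made up for demonstration.

def disruption_score(focal_id, focal_refs, later_papers):
    """Return a score in [-1, 1]: positive when later work cites the focal
    paper without its references (conceptually new direction), negative
    when later work cites the focal paper alongside its sources
    (incremental development).

    focal_refs:   set of IDs the focal paper cites.
    later_papers: dict mapping a later paper's ID to the set of IDs it cites.
    """
    n_i = n_j = n_k = 0  # cites focal only / focal + refs / refs only
    for refs in later_papers.values():
        cites_focal = focal_id in refs
        cites_prior = bool(refs & focal_refs)
        if cites_focal and not cites_prior:
            n_i += 1
        elif cites_focal and cites_prior:
            n_j += 1
        elif cites_prior:
            n_k += 1
    total = n_i + n_j + n_k
    return (n_i - n_j) / total if total else 0.0

# "General relativity"-like pattern: later papers cite the focal work alone.
disruptive = {"a": {"focal"}, "b": {"focal"}, "c": {"focal"}}
# "Bose-Einstein condensate"-like pattern: always cited with its sources.
developing = {"a": {"focal", "r1"}, "b": {"focal", "r1", "r2"}}

print(disruption_score("focal", {"r1", "r2"}, disruptive))  # 1.0
print(disruption_score("focal", {"r1", "r2"}, developing))  # -1.0
```

The two toy cases mirror the article's examples: work cited on its own scores near +1, while work that travels with its intellectual ancestors scores near -1.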

Having tested this score in various ways to show that it’s valid, Wu used it to show that small teams produce markedly more disruptive work than large ones. That’s true even for patents, which are innovative by definition. It’s true for highly cited work and poorly cited work. It’s true in every decade from the 1950s to the 2010s. It’s true in fields ranging from chemistry to social sciences.