Scientists’ grant writing styles vary by gender. That can lead to bias

When describing research in grant proposals, female life scientists use narrower, more topic-specific language than male applicants, resulting in lower reviewer scores, according to a National Bureau of Economic Research working paper published last week investigating health research proposals submitted to the Bill & Melinda Gates Foundation. But the advantage of broad language doesn’t extend through the full scientific process: Proposals that used broad words liberally led to fewer postfunding publications in top-tier journals—and they weren’t more likely to result in follow-up funding.

“Broad words are something that reviewers and evaluators may be swayed by, but they’re not really reflecting a truly valuable underlying idea,” says Julian Kolev, an assistant professor of strategy and entrepreneurship at Southern Methodist University’s Cox School of Business in Dallas, Texas, and the lead author of the study. It’s “more about style and presentation than the underlying substance.”

He and his co-authors “would be hesitant to recommend that women adopt this language.” Instead, he says, organizations should take a closer look at potential reviewer biases—especially in cases, such as this, where reviewers are favoring language that doesn’t result in better research outcomes. “The narrower and more technical language is probably the right way to think about and evaluate science,” he says.

Kolev’s team examined 6794 proposals submitted to the Gates Foundation by U.S.-based researchers from 2008 to 2017. Reviewers were blind to applicants’ identities but still gave female applicants lower scores overall. The gender gap in reviewer scores remained after controlling for applicants’ career stage, publication record, and other factors—only disappearing after the researchers examined the language used in applicants’ titles and proposal descriptions.

The team classified words as “narrow” if they appeared more often in proposals seeking funding in some topic areas—for example, HIV, tuberculosis, and malaria—than in others, with words commonly used across topic areas classified as “broad.” This data-driven approach resulted in word classifications that might not have been obvious from the outset. For instance, “community” and “health” were deemed to be narrow words, whereas “bacteria” and “detection” were deemed to be broad words. Reviewers favored proposals with more broad words—and those words were used more often by men.
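To make the idea concrete, here is a minimal sketch of one way such a frequency-based classification could be implemented. The paper’s actual statistical procedure is not reproduced here; the 50% concentration threshold, the function name, and the toy proposals are all assumptions for illustration only.

```python
from collections import Counter, defaultdict

def classify_words(proposals, threshold=0.5):
    """Label words 'narrow' or 'broad' by how concentrated their usage
    is across topic areas. Illustrative only -- the working paper's
    exact method and threshold are not specified here.

    proposals: list of (topic, text) pairs.
    Returns a dict mapping word -> 'narrow' or 'broad'.
    """
    # Count each word's occurrences per topic area, plus overall totals.
    per_topic = defaultdict(Counter)
    totals = Counter()
    for topic, text in proposals:
        for word in text.lower().split():
            per_topic[topic][word] += 1
            totals[word] += 1

    labels = {}
    for word, total in totals.items():
        # Share of the word's uses that fall in its single most common topic.
        top_share = max(counts[word] for counts in per_topic.values()) / total
        # Highly concentrated in one topic -> "narrow"; spread out -> "broad".
        labels[word] = "narrow" if top_share > threshold else "broad"
    return labels

# Hypothetical mini-corpus: three proposals in three topic areas.
proposals = [
    ("malaria", "mosquito detection in community health surveys"),
    ("hiv", "detection of viral load in community health clinics"),
    ("tb", "bacteria detection assay for tuberculosis"),
]
labels = classify_words(proposals)
```

In this toy corpus, “mosquito” appears only in the malaria proposal and is labeled narrow, while “detection” appears across all three topic areas and is labeled broad—mirroring how a word’s classification falls out of usage patterns rather than intuition.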

The findings are interesting, but it’s not clear whether they can be generalized to scientific grant applications more broadly, says Donna Ginther, a professor of economics at the University of Kansas in Lawrence, who has studied disparities in grant funding through the U.S. National Institutes of Health (NIH). She points out that the gender gap in reviewer scores identified in the new study is “at odds with other papers that have found no evidence of gender bias in the peer-review process at NIH,” which funds similar kinds of research.

Differences in the review processes at the Gates Foundation and NIH may help explain this discrepancy. The foundation enlists reviewers representing a variety of disciplines and perspectives and uses “champion-based” review, whereby grants are much more likely to be funded if they’re rated highly by a single reviewer. The less-specialized expertise of reviewers at the Gates Foundation may make them more “susceptible to grantsmanship-like claims, like ‘I’m going to cure cancer’ as opposed to ‘I’m going to understand how this molecule interacts with a cell,’” Ginther speculates. Those differences in the review process may be leading to greater disadvantage for women, she says.

In a written statement, the Gates Foundation—which asked the researchers to do the study and provided the team with peer-review data and proposals—said that it is “committed to ensuring gender equality” and is “carefully reviewing the results of this study—as well as our own internal data—as part of our ongoing commitment to learning and evolving as an organization.”

With the rise of automated methods for analyzing text, we’re likely to see more of these kinds of studies in the future, says David Markowitz, an assistant professor in the School of Journalism and Communication at the University of Oregon in Eugene, who has done similar research—looking at how technical language and verbal certainty in proposals submitted to the U.S. National Science Foundation lead to more funding. These are exciting times for language researchers, he says, because “we’re able to gather data faster and more systematically than we really ever have been.”

For her part, Ginther hopes that similar linguistic analyses are applied to study other facets of diversity in science. Although she and her colleagues didn’t find significant gender gaps in NIH funding, “that’s not true for race, ethnicity—and understanding those mechanisms would be really helpful as well.”