Organizations in science and elsewhere often rely on committees of experts to make important decisions, such as evaluating early-stage projects and ideas. However, very little is known about how experts influence each other's opinions, and how that influence affects final evaluations. Here, we use a field experiment in scientific peer review to examine experts' susceptibility to the opinions of others. We recruited 277 faculty members at seven US medical schools to evaluate 47 early-stage research proposals in biomedicine. In our experiment, evaluators: (1) completed independent reviews of research ideas, (2) received (artificial) scores attributed to anonymous "other reviewers" from the same or a different discipline, and (3) decided whether to update their initial scores. Evaluators did not meet in person and were not otherwise aware of each other. We find that, even in a completely anonymous setting and controlling for a range of career factors, women updated their scores 13% more often than men, while very highly cited "superstar" reviewers updated 24% less often than others. Women in male-dominated subfields were especially likely to update, doing so 8% more often for every 10% decrease in the representation of women in their subfield. Very low scores were particularly "sticky" and seldom revised upward, suggesting a possible source of conservatism in evaluation. These systematic differences in how world-class experts respond to external opinions can produce substantial gender and status disparities in whose opinion ultimately matters in collective expert judgment.