This year, the institute decided to conduct a double-blind review of Hubble proposals, which hid nearly all information about applicants, including gender, from reviewers. Of the 351 male-led proposals, 28 were picked. Of the 138 female-led proposals, 12 were chosen. That translates into an 8.7 percent success rate for female researchers, and 8 percent for male researchers.

Under the new review system, the disparity that Hubble’s decision-makers had seen year after year had disappeared.

Priyamvada Natarajan, a theoretical physicist at Yale who led the effort, said she was surprised at the outcome. “I was ready to see a small change, but not complete parity,” she said.

But she wasn’t surprised that the years-long pattern had been broken. Research has found ample evidence that men and women are evaluated differently in the same settings, and the Hubble program is no different, she said.


“I firmly believe that conscious and unconscious bias both operate quite strongly in these kinds of reviews,” Natarajan said. “They’re not entirely objective.”

Last year, the Space Telescope Science Institute brought in outside researchers to sit in on reviewer discussions and evaluate the process. They reported that nearly half of all discussions included some focus on the applicants rather than the science in their proposals. “He is very well qualified,” one reviewer said. “My group has benefited a lot from previous work from this team,” said another. To the outside consultants, this process wasn’t objective at all. They recommended that the institute implement a fully anonymous review system.

The new system presented reviewers with applications stripped of names and identifying details. Outside observers were again brought in to listen to the discussions. This time, the tenor of the deliberations was different. “It was really noticeable how the discussions were focused much more on the science,” said Natarajan, who has participated in the discussions under both systems. Some of the reviewers said it was “almost liberating” to focus on the science of the proposals, and not on the people who wrote them.

If they wanted, reviewers could learn about the applicants after they made their final decisions. The institute had applicants submit separate documents detailing their backgrounds and expertise, just in case reviewers wanted to make sure that the team could, in fact, execute their proposals. Natarajan said the majority of reviewers didn’t seek out those documents. “They were like, it doesn’t matter, we are confident that we picked the best science,” she said.

Natarajan cautions that it’s too early to determine whether gender bias was the only bias at play in the process and, if so, how much it mattered. The review process wasn’t a controlled experiment. “We have to continue this, and it’ll take time to see whether these trends hold up,” she said.