YOU might expect that science, particularly American science, would be colour-blind. Though some of the country's ethnic minorities are under-represented among its scientists relative to their share of the population, once someone has got bench space in a laboratory, he or she might reasonably expect to be treated on merit and nothing else.

Unfortunately, a study just published in Science by Donna Ginther of the University of Kansas suggests that this is not true. Dr Ginther, who was working on behalf of America's National Institutes of Health (NIH), looked at the pattern of research grants the agency awards and found that race matters a lot. Moreover, it is not simply a matter of whites being favoured over everyone else: Asian and Hispanic scientists do just as well as white ones. Black scientists, however, fare badly.

Dr Ginther and her colleagues analysed grants awarded by the NIH between 2000 and 2006, and correlated this information with the self-reported race of more than 40,000 applicants. Their results show that a black scientist's chance of receiving a grant was 17%; for Asians, Hispanics and whites the figure was between 26% and 29%. Even when these figures were adjusted to take into account applicants' prior education, awards, employment history and publications, a gap of ten percentage points remained.

This bias appears to arise in the NIH's peer-review mechanism. Each application is reviewed by a panel of experts. These panels assign scores to about half the applications they receive (the others are rejected outright). Scored applications are then considered for grants by the various institutes that make up the NIH. The race of the applicant is not divulged to the panel. However, Dr Ginther found that applications from black scientists were less likely to be awarded a score than those from similarly qualified scientists of other races, and when they were awarded a score, that score was lower than the scores given to applicants of other races.

One possible explanation is that review panels are inferring applicants' ethnic origins from their names, or from the institutions they attended as students. Consciously or not, the reviewers may then be giving less credit to applications from people with “black-sounding” names, or who were educated at universities whose students are predominantly black. Indeed, a similar bias has been found in those recruiting for jobs in the commercial world. One well-known study, published in 2003 by researchers at the Massachusetts Institute of Technology and the University of Chicago, found that fictitious CVs with stereotypically white names elicited 50% more offers of interviews than did CVs with black names, even when the applicants' stated qualifications were identical.

Another possible explanation is social networking. It is in the nature of groups of experts (which is precisely what peer-review panels are) to know both each other and each other's most promising acolytes. Applicants outside this charmed circle might have less chance of favourable consideration. If the charmed circle itself were racially unrepresentative (if professors unconsciously preferred graduate students of their own race, for example), those excluded from the network because their racial group was under-represented in the first place would find it harder to break in.

Though Dr Ginther's results are troubling, it is to the NIH's credit that it has published her findings. The agency is also starting a programme intended to alter the composition of the review panels, and—appropriately for a scientific body—will conduct experiments to see whether excising potential racial cues from applications changes outcomes. Other agencies, and not just in America, should pay close attention to all this, and ask themselves whether they, too, are failing people of particular races. Such discrimination is not only disgraceful, but also a stupid waste of talent.