It is not clear why facial recognition algorithms perform differently on different racial groups, researchers say. One reason may be that the algorithms, which learn to recognize patterns in faces by looking at large numbers of them, are not being trained on a diverse enough array of photographs.

But Kevin Bowyer, a Notre Dame computer scientist, said that was not the case for a study he recently published. Nor is it certain that skin tone is the culprit: Facial structure, hairstyles and other factors may contribute.

In Dr. Bowyer’s experiments, the recognition algorithms could achieve the same degree of accuracy for white and black Americans, but only when the match cutoff was tuned separately for each group, say, so that no more than one in 10,000 comparisons produced a false match. Given that the norm is to use the same threshold for everybody, “those programs are seeing a higher false match rate for the population of African-Americans,” Dr. Bowyer said.
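A rough sketch of the arithmetic behind those cutoffs, using synthetic similarity scores and hypothetical helper names (threshold_for_fmr, false_match_rate) rather than anything from Dr. Bowyer’s study, shows how one shared threshold can produce unequal error rates while per-group calibration does not:

```python
# A minimal sketch, not Dr. Bowyer's code: impostor scores are similarity
# scores for pairs of photographs of *different* people; a false match occurs
# when such a pair scores above the cutoff.
import numpy as np

TARGET_FMR = 1e-4  # no more than one false match in 10,000 impostor comparisons

def threshold_for_fmr(impostor_scores, target_fmr=TARGET_FMR):
    """Lowest cutoff at which at most `target_fmr` of impostor pairs still match."""
    return float(np.quantile(impostor_scores, 1.0 - target_fmr))

def false_match_rate(impostor_scores, threshold):
    """Fraction of impostor pairs scoring at or above the cutoff."""
    return float(np.mean(impostor_scores >= threshold))

# Hypothetical score distributions for two groups; the second sits slightly
# higher to mimic the disparity the article describes.
rng = np.random.default_rng(0)
scores_a = rng.normal(0.30, 0.10, 1_000_000)
scores_b = rng.normal(0.33, 0.10, 1_000_000)

# One shared cutoff, calibrated on group A, yields unequal error rates.
shared = threshold_for_fmr(scores_a)
print(false_match_rate(scores_a, shared))  # about 1 in 10,000
print(false_match_rate(scores_b, shared))  # noticeably higher

# Calibrating separately for group B restores the target rate.
print(false_match_rate(scores_b, threshold_for_fmr(scores_b)))  # about 1 in 10,000
```

On paper, the per-group calibration in the last line equalizes the false match rate; that is the dual-threshold idea whose practical drawbacks Dr. Bowyer describes next.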

A dual-threshold system would not necessarily solve the problem, he added. That would require law enforcement authorities to make a judgment about each individual’s race and apply the appropriately tweaked facial recognition software — which would in turn introduce human bias.

“Technically, it’s a very reasonable thing to say to do,” Dr. Bowyer said. “But how do you defend it, and once you put that knob out there for police to use, how do you make sure it’s not misused?”