We found these algorithms have a built-in racial bias. At similar levels of sickness, black patients were deemed to be at lower risk than white patients. The magnitude of the distortion was immense: Eliminating the algorithmic bias would more than double the number of black patients who would receive extra help. The problem lay in a subtle engineering choice: to measure “sickness,” the algorithm’s designers used the most readily available data, health care expenditures. But because society spends less on black patients than on equally sick white ones, the algorithm understated the black patients’ true needs.

One difference between these studies is the work needed to uncover bias.

Our 2004 résumé study resembled a complex covert operation more than traditional academic research. We created a large bank of fictitious résumés and scraped help-wanted ads every day. We faxed (yes, the study was that long ago) résumés in response to each ad, and established phone numbers with voice mail. Then we waited for prospective employers to call back.

This went on for months — all before we had even one data point to analyze. Pinpointing discriminatory behavior by a particular group of people — in this case, hiring managers — is often very hard.

By contrast, uncovering algorithmic discrimination was far more straightforward. This was a statistical exercise — the equivalent of asking the algorithm “what would you do with this patient?” hundreds of thousands of times, and mapping out the racial differences. The work was technical and rote, requiring neither stealth nor resourcefulness.

Humans are inscrutable in a way that algorithms are not. Our explanations for our behavior are shifting and constructed after the fact. To measure racial discrimination by people, we must create controlled circumstances in the real world where only race differs. For an algorithm, we can create equally controlled conditions simply by feeding it the right data and observing its behavior.
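The kind of audit described here can be sketched in a few lines of code. Everything below is illustrative, not our actual study: `predict_risk` is a hypothetical stand-in that, like the real algorithm, proxies sickness with past spending, and the patient records are made-up examples in which equally sick patients have unequal costs by race.

```python
from collections import defaultdict

def predict_risk(patient):
    # Hypothetical stand-in model: risk is proxied by last year's
    # health care costs, the "most readily available data."
    return patient["annual_cost"] / 1000.0

def audit(patients):
    """At each sickness level, compare mean predicted risk by race.

    This is the audit in miniature: ask the algorithm "what would
    you do with this patient?" over and over, then map out the
    racial differences among equally sick patients.
    """
    by_level = defaultdict(lambda: defaultdict(list))
    for p in patients:
        score = predict_risk(p)
        by_level[p["chronic_conditions"]][p["race"]].append(score)
    return {
        level: {race: sum(v) / len(v) for race, v in races.items()}
        for level, races in by_level.items()
    }

# Made-up records: within each sickness level, society spent less
# on the black patient, so the proxy understates that patient's need.
patients = [
    {"race": "black", "annual_cost": 3000, "chronic_conditions": 4},
    {"race": "white", "annual_cost": 5000, "chronic_conditions": 4},
    {"race": "black", "annual_cost": 1500, "chronic_conditions": 2},
    {"race": "white", "annual_cost": 2500, "chronic_conditions": 2},
]
print(audit(patients))  # lower mean score for black patients at every level
```

No stealth is required: the gap appears directly in the score distributions, even though the model never looks at race at all, because cost is a racially skewed proxy for need.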

Algorithms and humans also differ on what can be done about bias once it is found.

With our résumé study, fixing the problem has proved to be extremely difficult. For one thing, finding bias on average didn’t tell us that any particular firm was at fault, though recent research is finding clever ways to detect discrimination.

Another problem is more fundamental. Changing people’s hearts and minds is no simple matter. For example, implicit bias training appears to have a modest impact at best.