Artificial intelligence is beginning to meet (and sometimes exceed) assessments by doctors in various clinical situations. A.I. can now diagnose skin cancer like dermatologists, seizures like neurologists, and diabetic retinopathy like ophthalmologists. Algorithms are being developed to predict which patients will get diarrhea or end up in the ICU, and the FDA recently approved the first machine learning algorithm to measure how much blood flows through the heart — a tedious, time-consuming calculation traditionally done by cardiologists.

It’s enough to make doctors like me wonder why we spent a decade in medical training learning the art of diagnosis and treatment.

There are many questions about whether A.I. actually works in medicine, and where it works: Can it pick up pneumonia, detect cancer, predict death? But those questions focus on the technical, not the ethical. And in a health system riddled with inequity, we have to ask: Could the use of A.I. in medicine worsen health disparities?

There are at least three reasons to believe it might.

The first is a training problem. A.I. must learn to diagnose disease on large data sets, and if that data doesn’t include enough patients from a particular background, it won’t be as reliable for them. Evidence from other fields suggests this isn’t just a theoretical concern. A recent study found that some facial recognition programs incorrectly classify less than 1 percent of light-skinned men but more than one-third of dark-skinned women. What happens when we rely on such algorithms to diagnose melanoma on light versus dark skin?