A recent blockbuster study found that software used in healthcare settings systematically provides worse care for black patients than for white patients, and two senators want to know what both the industry and regulators are going to do to fix the situation.

Senators Cory Booker (D-N.J.) and Ron Wyden (D-Ore.) on Tuesday issued letters to the Federal Trade Commission, the Centers for Medicare and Medicaid Services (CMS), and the five largest US health insurers asking about bias in the algorithms used to make healthcare decisions.

"In using algorithms, organizations often attempt to remove human flaws and biases from the process," Booker and Wyden wrote. "Unfortunately, both the people who design these complex systems, and the massive sets of data that are used, have many historical and human biases built in. Without very careful consideration, the algorithms they subsequently create can further perpetuate those very biases."

The senators have slightly different questions for each group. From the FTC, Wyden and Booker want to know if there is any regulation or policy being considered to address algorithmic bias in products and services, or at least if the commission has "any ongoing studies or investigations into damages done to consumer welfare by discriminatory and biased algorithms."

The letter to the CMS probes the use of potentially biased algorithms in federal healthcare, including flat-out asking: "What is the agency doing to ensure that algorithmic bias is taken into account?" The senators also ask what other federal agencies and external stakeholders are involved in making those choices and what data the CMS uses to make its decisions.

The last set of letters, basically identical, went to Aetna (CVS), Blue Cross Blue Shield, Cigna, Humana, and UnitedHealth, which among them cover more than 300 million Americans. Those are a bit more pointed, not asking if algorithms are in use but instead how many, what specific decisions they're used to make, and how many people they impact. When insurers put those algorithms in place, the senators ask, do they consult with anyone who might be affected by bias, or with outside experts who can vet them?

Poor prediction

The study in question, published in October, dug through about 50,000 patient records from an unnamed "large academic hospital." That hospital is among the many facilities in the country that use software to help identify which patients have the most pressing healthcare needs and map out a care and support plan to address those patients' conditions.

When the researchers conducting the study compared histories of patients assigned similar risk scores by the software, they found that black patients were receiving significantly less care and tended to have significantly worse symptoms or conditions than their white peers.

The algorithm did not explicitly consider race as a factor in making its determinations. Instead, it predicted which patients were likely to cost more. That choice ended up replicating disparities that exist between racial groups in income levels and current healthcare spending levels. Because patients with less money had historically spent less on medical care, the software predicted they would cost less in the future and assigned them lower risk scores, scores that did not match their actual medical needs or outcomes.
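The mechanism can be seen in a minimal sketch (with entirely hypothetical numbers, not drawn from the study): if a risk score is just a prediction of future spending, and one group spends less on care for reasons unrelated to need, such as reduced access, then two patients with identical medical need end up with different risk scores.

```python
# Hypothetical illustration of cost-as-proxy bias. Each patient has the
# same underlying medical need, but different access to care, so their
# historical spending differs. A model that scores risk by predicted
# cost then ranks the lower-spending patient as lower risk.
patients = [
    {"group": "A", "need": 10, "access": 1.0},
    {"group": "B", "need": 10, "access": 0.6},  # same need, less access
]

for p in patients:
    p["spending"] = p["need"] * p["access"]  # historical cost of care
    p["risk_score"] = p["spending"]          # score = predicted future cost

a, b = patients
assert a["need"] == b["need"]                # identical medical need...
assert b["risk_score"] < a["risk_score"]     # ...but B scores as lower risk
```

The numbers and field names here are invented for illustration; the point is only that optimizing for cost, rather than health, bakes existing spending disparities directly into the score.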

The healthcare industry is far from the only sector in which software can cause harm by unintentionally replicating existing societal biases. Recent investigations of the criminal justice systems of many states and municipalities, for example, have found strong disparities in the risk scores, suggested bail, and suggested sentences assigned to black and white criminal defendants.

New York City Mayor Bill de Blasio recently created a position for the city government to basically act as a chief algorithm ethics officer, overseeing and trying to reduce bias in the algorithms the city's various departments use to provide services to about 8.6 million residents.