Algorithm biased against black people is being widely used in US hospitals to predict health risks, say researchers

The common commercial tool 'underestimates' the health needs of black patients and wrongly determines that black patients are healthier than equally sick whites


A healthcare algorithm used nationally in the US by health insurers to make critical healthcare decisions for millions of Americans each year has been found to have a significant racial bias in its predictions of the health risks of black patients.

The widely used commercial tool 'underestimates' the health needs of black patients: it determines that black patients are healthier than equally sick whites, thus reducing the number of black patients who are identified as requiring extra care, according to researchers.

When scientists from UC Berkeley and Chicago Booth corrected the bias, they found that the percentage of black patients receiving additional help jumped from 17.7 to 46.5 percent.

Computer algorithms that help us make decisions are everywhere, from predicting crime to robotics and healthcare. But they are not perfect. The research team identified a flaw in a computer algorithm used by a hospital in the US. Results from the algorithm are fed into high-risk health care management programs, which provide additional attention and resources for high-risk patients. The team suspected that the algorithm was biased, favoring white patients over black patients regardless of their health status.

So the researchers set out to investigate the bias and its source. They joined forces with an academic hospital that was using a risk-based algorithm on the 43,539 white patients and 6,079 black patients enrolled there. They then took the risk scores produced by the algorithm and compared them with more direct measures of a patient's health, including the number of chronic illnesses and other biomarkers.

The algorithm tended to assign higher risk scores to white patients. The direct measures, in contrast, showed that black patients were in significantly poorer health than their white counterparts.

Scientists also realized that these programs gauged patients' risk by factoring in a flawed proxy, health care costs, which turned out to be the source of the bias. "Instead of being trained to find the sickest, in a physiological sense, [these algorithms] ended up being trained to find the sickest in the sense of those whom we spend the most money on. And there are systemic racial differences in health care in who we spend money on," says Sendhil Mullainathan, the Roman Family University Professor of Computation and Behavioral Science at Chicago Booth and senior author of the study.
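The cost-as-proxy mechanism Mullainathan describes can be illustrated with a small synthetic sketch (hypothetical groups and numbers, not the study's actual data or model): if one group generates less spending at the same level of sickness, a model that ranks patients by predicted cost will systematically under-flag that group even though both groups are equally ill on average.

```python
# Toy illustration of proxy-label bias, using made-up data: ranking
# patients by *cost* instead of *illness* under-flags any group that
# generates less spending at the same level of sickness.
import random

random.seed(0)

patients = []
for _ in range(10_000):
    group = random.choice(["A", "B"])   # two hypothetical patient groups
    illness = random.uniform(0, 10)     # true health need, same distribution for both
    # Assumption: group B receives less care (lower spending) at the same
    # level of illness, mirroring the systemic disparity the study describes.
    spend_per_unit = 1.0 if group == "A" else 0.6
    cost = illness * spend_per_unit + random.gauss(0, 0.5)
    patients.append((group, illness, cost))

# "High risk" = top 20% of patients ranked by the cost proxy
cutoff = sorted(p[2] for p in patients)[int(0.8 * len(patients))]
flagged = [p for p in patients if p[2] >= cutoff]

for g in ("A", "B"):
    n_group = sum(1 for p in patients if p[0] == g)
    share = sum(1 for p in flagged if p[0] == g) / len(flagged)
    mean_illness = sum(p[1] for p in patients if p[0] == g) / n_group
    print(f"group {g}: share of flagged = {share:.2f}, mean illness = {mean_illness:.2f}")
```

Both groups have the same average illness, yet group A dominates the flagged list, which is the pattern the researchers found when they compared risk scores against direct health measures.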

By correcting for the health disparities between blacks and whites, the researchers found that the algorithm produced very different results: the percentage of black patients identified for extra care jumped from 17.7 percent to 46.5 percent.

Bias in the algorithm, the scientists say, can be traced back to the inherent bias within us.

Commenting on this, Ziad Obermeyer, acting associate professor of health policy and management at UC Berkeley and lead author of the paper, says, "Algorithms can do terrible things, or algorithms can do wonderful things. Which one of those things they do is basically up to us. We make so many choices when we train an algorithm that feel technical and small. But these choices make the difference between an algorithm that's good or bad, biased or unbiased. So, it's often very understandable when we end up with algorithms that don't do what we want them to do, because those choices are hard."

But such flaws can be fixed. "Algorithms by themselves are neither good nor bad. It is merely a question of taking care in how they are built. In this case, the problem is eminently fixable—and at least one manufacturer appears to be working on a fix. We would encourage others to do so," says Mullainathan.
