Medical Algorithm Technology and Its Fatal Flaw: Implicit Bias

August 17, 2020

// By Ndome Yvonne Essoka //

In 2019, Optum’s medical algorithms were flagged for underestimating the needs of the sickest Black patients in its system. Hospitals and insurers across the country use these algorithms daily to guide healthcare decisions for hundreds of millions of Americans each year. The error highlighted flaws within healthcare technology that perpetuate medical racism and contribute to ongoing health inequities and disparities.

How can a technology created to save patients be inherently racist?

The simple yet confounding answer to this conundrum is that these algorithms carry implicit biases. Optum’s algorithm uses a rule-based system, but those rules rely on patients’ historical healthcare spending as a proxy for health need, and that spending data, as we’ll see, is skewed. Optum’s technology is not an outlier; it is part of a broader pattern of medical algorithms that define contributing factors too narrowly, without considering the big picture.
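To make the proxy problem concrete, here is a minimal sketch in Python. It is not Optum’s actual code; the Patient class, the scoring rule, and the numbers are all hypothetical, assuming only that the risk score tracks prior spending, as the reporting on the algorithm described.

```python
# Minimal sketch of a cost-as-proxy risk score (hypothetical, not Optum's code).
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    chronic_conditions: int  # rough stand-in for true health need
    past_year_spend: float   # the historical spending the algorithm actually sees

def cost_based_risk_score(patient: Patient) -> float:
    """Hypothetical rule: risk is simply prior spending."""
    return patient.past_year_spend

# Two equally sick patients; B has spent less because of access barriers
# (fewer visits, fewer referrals), not because B is healthier.
a = Patient("A", chronic_conditions=4, past_year_spend=12_000.0)
b = Patient("B", chronic_conditions=4, past_year_spend=5_500.0)

# A program that enrolls the highest-scoring patients passes over B,
# even though both patients have identical health needs.
for p in sorted((a, b), key=cost_based_risk_score, reverse=True):
    print(p.name, cost_based_risk_score(p))
```

The flaw is not the ranking itself but the proxy: when access barriers depress one group’s spending, a cost-based score inherits that gap and labels equally sick patients as lower risk.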

Algorithm developers typically do not examine factors that correlate with race and the cost of care in the U.S., such as access to primary care, socioeconomic status, or discrimination. These missing pieces may be addressed by including other proxies, such as class and poverty, which are often stratified by race.

In 2016, for example, the median household income for Black Americans was $39,500, compared with $65,000 for non-Hispanic White Americans. Explanations for this gap include residential segregation, redlining, discrimination, and differences in family structure, to name a few.

