Living with Chronic Pain

Racial Biases in Artificial Intelligence Impacting Healthcare

Historically, doctors have often underestimated the pain levels experienced by their Black patients. This misunderstanding has resulted in inadequate treatment and, in some cases, even death. Such underestimation is largely due to implicit biases that healthcare providers may not even be aware they have. Recently, concerns have been raised about whether artificial intelligence (AI) systems might also carry these same implicit biases, potentially leading to further undertreatment in the Black population.

A study published in JAMA found that, at best, AI performed similarly to humans on tests designed to assess racial bias; at worst, AI showed more instances of racial bias than humans. When evaluating pain in Black versus White patients, humans and AI displayed comparable levels of racial bias. Furthermore, when presented with statements about the influence of race on biology—some true and some false—both humans and AI exhibited implicit bias regarding race.

Technology is often assumed to be a neutral entity incapable of bias. However, AI systems are built and trained on information produced by human beings, who often hold harmful biases. As a result, using AI will not eliminate racial bias in medical settings, as some had hoped.

Additional Sources: United Nations Office of the High Commissioner for Human Rights; American Bar Association
