Underdiagnosis bias of artificial intelligence algorithms applied to chest radiographs in under-served patient populations
Nature Medicine (IF 58.7) Pub Date: 2021-12-10, DOI: 10.1038/s41591-021-01595-0
Laleh Seyyed-Kalantari, Haoran Zhang, Matthew B. A. McDermott, Irene Y. Chen, Marzyeh Ghassemi

Artificial intelligence (AI) systems have increasingly achieved expert-level performance in medical imaging applications. However, there is growing concern that such AI systems may reflect and amplify human bias, and reduce the quality of their performance in historically under-served populations such as female patients, Black patients, or patients of low socioeconomic status. Such biases are especially troubling in the context of underdiagnosis, whereby the AI algorithm would inaccurately label an individual with a disease as healthy, potentially delaying access to care. Here, we examine algorithmic underdiagnosis in chest X-ray pathology classification across three large chest X-ray datasets, as well as one multi-source dataset. We find that classifiers produced using state-of-the-art computer vision techniques consistently and selectively underdiagnosed under-served patient populations and that the underdiagnosis rate was higher for intersectional under-served subpopulations, for example, Hispanic female patients. Deployment of AI systems using medical imaging for disease diagnosis with such biases risks exacerbation of existing care biases and can potentially lead to unequal access to medical treatment, thereby raising ethical concerns for the use of these models in the clinic.
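The underdiagnosis the abstract describes is measurable as a subgroup-conditional rate: among patients who truly have a disease, the fraction the model labels healthy, computed separately for each demographic (or intersectional) subgroup. The helper below is a minimal illustrative sketch of that computation, not the authors' code; the function name and toy data are assumptions.

```python
from collections import defaultdict

def underdiagnosis_rates(y_true, y_pred, groups):
    """Per-group underdiagnosis rate: the fraction of truly diseased
    patients (y_true == 1) whom the model labels healthy (y_pred == 0).
    Hypothetical helper for illustration; not the paper's implementation."""
    missed = defaultdict(int)    # diseased patients predicted healthy, per group
    diseased = defaultdict(int)  # all diseased patients, per group
    for t, p, g in zip(y_true, y_pred, groups):
        if t == 1:
            diseased[g] += 1
            if p == 0:
                missed[g] += 1
    return {g: missed[g] / diseased[g] for g in diseased}

# Toy example: two subgroups with equal numbers of diseased patients,
# but the model misses every diseased patient in subgroup "B".
y_true = [1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0]
groups = ["A", "A", "A", "B", "B", "B", "A", "B"]
print(underdiagnosis_rates(y_true, y_pred, groups))  # {'A': 0.0, 'B': 1.0}
```

A gap between subgroup rates like the one in this toy example is the kind of disparity the study reports; intersectional subgroups (e.g., Hispanic female patients) are handled by using the combined attribute as the group key.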




Updated: 2021-12-10