Towards a pragmatist dealing with algorithmic bias in medical machine learning
Medicine, Health Care and Philosophy (IF 2.3), Pub Date: 2021-03-13, DOI: 10.1007/s11019-021-10008-5
Georg Starke, Eva De Clercq, Bernice S Elger

Machine Learning (ML) is on the rise in medicine, promising improved diagnostic, therapeutic and prognostic clinical tools. While these technological innovations are bound to transform health care, they also bring new ethical concerns to the forefront. One particularly elusive challenge concerns discriminatory algorithmic judgements based on biases inherent in the training data. A common line of reasoning distinguishes between justified differential treatments, which mirror true disparities between socially salient groups, and unjustified biases, which do not and which lead to misdiagnosis and erroneous treatment. When curating training data, however, this strategy runs into severe problems, since distinguishing between the two can be next to impossible. We therefore plead for a pragmatist dealing with algorithmic bias in healthcare environments. Drawing on a recent reformulation of William James's pragmatist understanding of truth, we recommend that, instead of aiming at a supposedly objective truth, outcome-based therapeutic usefulness should serve as the guiding principle for assessing ML applications in medicine.

Updated: 2021-03-15