Hurtful Words: Quantifying Biases in Clinical Contextual Word Embeddings
arXiv - CS - Computers and Society. Pub Date: 2020-03-11, DOI: arxiv-2003.11515
Haoran Zhang, Amy X. Lu, Mohamed Abdalla, Matthew McDermott, Marzyeh Ghassemi

In this work, we examine the extent to which embeddings may encode marginalized populations differently, and how this may lead to a perpetuation of biases and worsened performance on clinical tasks. We pretrain deep embedding models (BERT) on medical notes from the MIMIC-III hospital dataset, and quantify potential disparities using two approaches. First, we identify dangerous latent relationships captured by the contextual word embeddings using a fill-in-the-blank method with text from real clinical notes and a log probability bias score quantification. Second, we evaluate performance gaps across different definitions of fairness on over 50 downstream clinical prediction tasks, including detection of acute and chronic conditions. We find that classifiers trained from BERT representations exhibit statistically significant differences in performance, often favoring the majority group with regard to gender, language, ethnicity, and insurance status. Finally, we explore the shortcomings of using adversarial debiasing to obfuscate subgroup information in contextual word embeddings, and recommend best practices for such deep embedding models in clinical settings.
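The log probability bias score referenced above follows the approach of Kurita et al. (2019): the probability a masked language model assigns to a group token (e.g., a pronoun) given an attribute is normalized by the prior probability of that token when the attribute is also masked. Below is a minimal sketch of that idea, assuming the Hugging Face `transformers` library and the public `bert-base-uncased` checkpoint; the paper instead scores BERT models pretrained on MIMIC-III notes, and the template sentence here is illustrative, not drawn from the paper.

```python
# Sketch of a log probability bias score (after Kurita et al., 2019).
# Assumptions: Hugging Face `transformers`, generic `bert-base-uncased`
# checkpoint, and an illustrative template; the paper uses clinically
# pretrained BERT and templates extracted from real MIMIC-III notes.
import math

import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def first_mask_prob(template: str, target: str) -> float:
    """Probability the model assigns to `target` at the first [MASK] slot."""
    inputs = tokenizer(template, return_tensors="pt")
    mask_positions = (
        inputs["input_ids"][0] == tokenizer.mask_token_id
    ).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = logits[0, mask_positions[0]].softmax(dim=-1)
    return probs[tokenizer.convert_tokens_to_ids(target)].item()

for pronoun in ("he", "she"):
    # p_target: probability of the pronoun given the clinical attribute.
    p_target = first_mask_prob("[MASK] is agitated and combative", pronoun)
    # p_prior: probability of the pronoun with the attribute masked out too,
    # normalizing away the model's baseline preference for the pronoun.
    p_prior = first_mask_prob("[MASK] is [MASK] and [MASK]", pronoun)
    # Positive score: the attribute raises the pronoun's probability above
    # its prior; comparing "he" vs. "she" exposes a gendered association.
    print(f"{pronoun}: log probability bias score = "
          f"{math.log(p_target / p_prior):.3f}")
```

Comparing the scores for the two pronouns gives the direction and magnitude of the association between the clinical attribute and the demographic group; aggregating such comparisons over many note-derived templates yields the kind of bias quantification the abstract describes.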

Updated: 2020-03-26