Unequal Representations: Analyzing Intersectional Biases in Word Embeddings Using Representational Similarity Analysis
arXiv - CS - Computation and Language Pub Date : 2020-11-24 , DOI: arxiv-2011.12086
Michael A. Lepori

We present a new approach for detecting human-like social biases in word embeddings using representational similarity analysis. Specifically, we probe contextualized and non-contextualized embeddings for evidence of intersectional biases against Black women. We show that these embeddings represent Black women as simultaneously less feminine than White women, and less Black than Black men. This finding aligns with intersectionality theory, which argues that multiple identity categories (such as race or sex) layer on top of each other in order to create unique modes of discrimination that are not shared by any individual category.
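The core technique, representational similarity analysis, compares the geometry of two representations by correlating their pairwise-dissimilarity matrices. A minimal sketch of this idea is below; the vectors, group labels, and dimensions are toy assumptions for illustration, not the paper's data or probing setup.

```python
# Minimal RSA sketch: build a representational dissimilarity matrix (RDM)
# for each representation, then correlate their upper triangles.
# All embeddings here are random toy data, not the paper's word vectors.
import numpy as np
from scipy.stats import spearmanr

def rdm(vectors):
    """Pairwise cosine distances between row vectors (the RDM)."""
    v = vectors / np.linalg.norm(vectors, axis=1, keepdims=True)
    return 1.0 - v @ v.T

def rsa(rdm_a, rdm_b):
    """Spearman correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_a, k=1)
    rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
    return rho

rng = np.random.default_rng(0)
emb_model = rng.normal(size=(6, 50))             # e.g. 6 words from a model
hyp_unrelated = rng.normal(size=(6, 50))         # an unrelated hypothesis
hyp_close = emb_model + 0.1 * rng.normal(size=(6, 50))  # near the model

# The hypothesis whose RDM correlates more strongly with the model's RDM
# better explains the model's representational geometry.
print(rsa(rdm(emb_model), rdm(hyp_unrelated)))
print(rsa(rdm(emb_model), rdm(hyp_close)))
```

In the paper's setting, one RDM would come from the probed embeddings and the other from a hypothesized similarity structure (e.g. grouping words by social category); a higher correlation indicates that the embedding geometry encodes that structure.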

Updated: 2020-11-25