Blinding to Circumvent Human Biases: Deliberate Ignorance in Humans, Institutions, and Machines.
Perspectives on Psychological Science ( IF 12.6 ) Pub Date : 2023-09-05 , DOI: 10.1177/17456916231188052
Ralph Hertwig, Stefan M. Herzog, Anastasia Kozyreva

Inequalities and injustices are thorny issues in liberal societies, manifesting in forms such as the gender-pay gap; sentencing discrepancies among Black, Hispanic, and White defendants; and unequal medical-resource distribution across ethnicities. One cause of these inequalities is implicit social bias: unconsciously formed associations between social groups and attributions such as "nurturing," "lazy," or "uneducated." One strategy to counteract implicit and explicit human biases is delegating crucial decisions, such as how to allocate benefits, resources, or opportunities, to algorithms. Algorithms, however, are not necessarily impartial and objective. Although they can detect and mitigate human biases, they can also perpetuate and even amplify existing inequalities and injustices. We explore how a philosophical thought experiment, Rawls's "veil of ignorance," and a psychological phenomenon, deliberate ignorance, can help shield individuals, institutions, and algorithms from biases. We discuss the benefits and drawbacks of methods for shielding human and artificial decision makers from potentially biasing information. We then broaden our discussion beyond the issues of bias and fairness and turn to a research agenda aimed at improving human judgment accuracy with the assistance of algorithms that conceal information that has the potential to undermine performance. Finally, we propose interdisciplinary research questions.
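The blinding strategy the abstract describes can be sketched in code as deliberately withholding potentially biasing attributes before a record reaches a decision procedure. The field names, protected-attribute set, and toy scoring rule below are hypothetical illustrations, not drawn from the article; note that, as the authors caution, such blinding alone does not guarantee fairness, since withheld attributes may still be reflected in correlated proxy variables.

```python
# A minimal sketch of blinding an algorithmic decision maker: protected
# attributes are deliberately removed before the record is scored, so the
# decision cannot directly condition on them. All names are illustrative.

PROTECTED_ATTRIBUTES = {"gender", "ethnicity"}

def blind(record: dict) -> dict:
    """Return a copy of the record with protected attributes withheld."""
    return {k: v for k, v in record.items() if k not in PROTECTED_ATTRIBUTES}

def score(record: dict) -> float:
    """Toy scoring rule that uses only the fields it actually receives."""
    return 0.6 * record.get("experience_years", 0) + 0.4 * record.get("test_score", 0)

applicant = {
    "experience_years": 5,
    "test_score": 80,
    "gender": "female",
    "ethnicity": "hispanic",
}

blinded = blind(applicant)
# The scoring function never sees the protected attributes.
decision_score = score(blinded)
print(decision_score)  # 0.6 * 5 + 0.4 * 80 = 35.0
```

The same pattern applies to human decision makers, for example anonymizing applications before review; the key design choice is deciding which attributes to withhold, since proxies for them may remain in the visible fields.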
