Perceptions of Fairness and Trustworthiness Based on Explanations in Human vs. Automated Decision-Making
arXiv - CS - Human-Computer Interaction Pub Date : 2021-09-13 , DOI: arxiv-2109.05792
Jakob Schoeffer, Yvette Machowski, Niklas Kuehl

Automated decision systems (ADS) have become ubiquitous in many high-stakes domains. These systems typically rely on sophisticated yet opaque artificial intelligence (AI) techniques that seldom allow full comprehension of their inner workings, particularly for affected individuals. As a result, ADS are prone to deficient oversight and calibration, which can lead to undesirable (e.g., unfair) outcomes. In this work, we conduct an online study with 200 participants to examine people's perceptions of fairness and trustworthiness towards ADS, compared to a scenario where a human rather than an ADS makes a high-stakes decision, providing identical explanations for the decision in both cases. Surprisingly, we find that people perceive ADS as fairer than human decision-makers. Our analyses also suggest that AI literacy affects these perceptions: people with higher AI literacy favor ADS more strongly over human decision-makers, whereas people with low AI literacy show no significant difference in their perceptions.
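The moderation finding (AI literacy shaping the ADS-vs-human gap in perceived fairness) is, in statistical terms, an interaction effect. The following is a minimal illustrative sketch of how such an analysis could be run, not the authors' actual procedure; the file name, column names, and OLS-with-interaction specification are all assumptions for illustration.

```python
# Illustrative sketch (not the paper's actual analysis): test whether
# AI literacy moderates the effect of decision-maker type (human vs. ADS)
# on perceived fairness, using an OLS regression with an interaction term.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data layout: one row per participant.
#   decision_maker: "human" or "ads" (randomly assigned condition)
#   ai_literacy:    continuous AI-literacy scale score
#   fairness:       perceived-fairness rating (e.g., 1-7 Likert)
df = pd.read_csv("responses.csv")  # hypothetical file name

# C(decision_maker) * ai_literacy expands to both main effects plus their
# interaction; a significant interaction coefficient would indicate that
# the ADS-vs-human difference in perceived fairness changes with literacy.
model = smf.ols("fairness ~ C(decision_maker) * ai_literacy", data=df).fit()
print(model.summary())
```

Under this sketch, the abstract's pattern would show up as a positive interaction term: the fairness advantage of ADS grows with AI literacy, while at low literacy the condition difference is not significant.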

Updated: 2021-09-14