Decision-makers' Processing of AI Algorithmic Advice: Automation Bias versus Selective Adherence
arXiv - CS - Human-Computer Interaction. Pub Date: 2021-03-03, DOI: arxiv-2103.02381
Saar Alon-Barkat, Madalina Busuioc

Artificial intelligence algorithms are increasingly adopted as decisional aids by public organisations, with the promise of overcoming biases of human decision-makers. At the same time, the use of algorithms may introduce new biases in the human-algorithm interaction. A key concern emerging from psychology studies regards human overreliance on algorithmic advice even in the face of warning signals and contradictory information from other sources (automation bias). A second concern regards decision-makers' inclination to selectively adopt algorithmic advice when it matches their pre-existing beliefs and stereotypes (selective adherence). To date, we lack rigorous empirical evidence about the prevalence of these biases in a public sector context. We assess these biases via two pre-registered experimental studies (N=1,509), simulating the use of algorithmic advice in decisions pertaining to the employment of school teachers in the Netherlands. In study 1, we test automation bias by exploring participants' adherence to a prediction of a teacher's performance, which contradicts additional evidence, while comparing between two types of predictions: algorithmic vs. human-expert. We do not find evidence for automation bias. In study 2, we replicate these findings, and we also test selective adherence by manipulating the teacher's ethnic background. We find a propensity for adherence when the advice predicts low performance for a teacher of a negatively stereotyped ethnic minority, with no significant differences between algorithmic and human advice. Overall, our findings of selective, biased adherence belie the promise of neutrality that has propelled algorithm use in the public sector.

Updated: 2021-03-04