Human-AI Interactions in Public Sector Decision-Making: ‘Automation Bias’ and ‘Selective Adherence’ to Algorithmic Advice
Journal of Public Administration Research and Theory (IF 5.2), Pub Date: 2022-02-07, DOI: 10.1093/jopart/muac007
Saar Alon-Barkat, Madalina Busuioc

Artificial intelligence algorithms are increasingly adopted as decisional aids by public bodies, with the promise of overcoming biases of human decision-makers. At the same time, they may introduce new biases in the human-algorithm interaction. Drawing on psychology and public administration literatures, we investigate two key biases: overreliance on algorithmic advice even in the face of ‘warning signals’ from other sources (automation bias), and selective adoption of algorithmic advice when this corresponds to stereotypes (selective adherence). We assess these via three experimental studies conducted in the Netherlands. In study 1 (N=605), we test automation bias by exploring participants’ adherence to an algorithmic prediction compared to an equivalent human-expert prediction. We do not find evidence for automation bias. In study 2 (N=904), we replicate these findings and also test selective adherence. We find a stronger propensity for adherence when the advice is aligned with group stereotypes, with no significant differences between algorithmic and human-expert advice. Studies 1 and 2 were conducted among citizens in a context where citizens can act as decision-makers. In study 3 (N=1,345), we replicate our design with a sample of civil servants. This study was conducted shortly after a major scandal involving public authorities’ reliance on an algorithm with discriminatory outcomes (the “childcare benefits scandal”). The scandal is itself illustrative of our theory and of the patterns diagnosed empirically in our experiments; yet in study 3, while our prior findings on automation bias are supported, we do not find patterns of selective adherence. We suggest this is driven by bureaucrats’ enhanced awareness of discrimination and algorithmic biases in the aftermath of the scandal. We discuss the implications of our findings for public sector decision-making in the age of automation. Overall, our study speaks to potential negative effects of automation of the administrative state for already vulnerable and disadvantaged citizens.

Updated: 2022-02-07