Explaining Why the Computer Says No: Algorithmic Transparency Affects the Perceived Trustworthiness of Automated Decision-Making
Public Administration Review (IF 6.1), Pub Date: 2022-02-09, DOI: 10.1111/puar.13483
Stephan Grimmelikhuijsen

Algorithms based on Artificial Intelligence technologies are slowly transforming street-level bureaucracies, yet a lack of algorithmic transparency may jeopardize citizen trust. Based on procedural fairness theory, this article hypothesizes that two core elements of algorithmic transparency (accessibility and explainability) are crucial to strengthening the perceived trustworthiness of street-level decision-making. This is tested in one experimental scenario with low discretion (a denied visa application) and one scenario with high discretion (a suspicion of welfare fraud). The results show that: (1) explainability has a more pronounced effect on trust than the accessibility of the algorithm; (2) the effect of algorithmic transparency pertains not only to trust in the algorithm itself but also, partially, to trust in the human decision-maker; (3) the effects of algorithmic transparency are not robust across decision contexts. These findings imply that transparency-as-accessibility is insufficient to foster citizen trust. Algorithmic explainability must be addressed to maintain and foster the trustworthiness of algorithmic decision-making.
