Will Algorithms Blind People? The Effect of Explainable AI and Decision-Makers’ Experience on AI-supported Decision-Making in Government
Social Science Computer Review (IF 3.0), Pub Date: 2020-12-28, DOI: 10.1177/0894439320980118
Marijn Janssen¹, Martijn Hartog¹, Ricardo Matheus¹, Aaron Yi Ding¹, George Kuk²

Computational artificial intelligence (AI) algorithms are increasingly used to support decision making by governments. Yet algorithms often remain opaque to decision makers and devoid of clear explanations for the decisions made. In this study, we used an experimental approach to compare decision making in three situations: humans making decisions (1) without any support from algorithms, (2) supported by business rules (BR), and (3) supported by machine learning (ML). Participants were asked to make the correct decision in various scenarios, while the BR and ML algorithms could give the decision maker either correct or incorrect suggestions. This enabled us to evaluate whether participants were able to understand the limitations of BR and ML. The experiment shows that algorithms help decision makers make more correct decisions. The findings suggest that explainable AI combined with experience helps decision makers detect incorrect suggestions made by algorithms. However, even experienced participants were not able to identify all mistakes. Ensuring that decisions can be understood and traced back is not sufficient to avoid incorrect decisions. The findings imply that algorithms should be adopted with care, and that selecting appropriate algorithms for decision support and training decision makers are key factors in increasing accountability and transparency.
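A minimal sketch (not from the paper) of the two kinds of decision support the experiment contrasts: a business rule is an explicit, human-readable condition whose decision path can be traced back, while an ML model returns a suggestion whose rationale is not directly visible to the decision maker. The eligibility scenario, feature names, threshold, and toy training data below are hypothetical.

```python
# Illustrative sketch: business-rule (BR) vs. machine-learning (ML) support.
# All names, thresholds, and data are hypothetical, for illustration only.
from sklearn.tree import DecisionTreeClassifier

def business_rule_suggestion(income: float, debt: float) -> str:
    """BR support: an explicit rule whose reasoning is fully traceable."""
    if debt > 0.5 * income:      # hypothetical eligibility threshold
        return "reject"
    return "approve"

# ML support: the suggestion comes from a trained model, so the rationale
# is not directly visible to the decision maker.
X_train = [[30000, 5000], [20000, 15000], [50000, 10000], [15000, 12000]]
y_train = ["approve", "reject", "approve", "reject"]   # toy labels
model = DecisionTreeClassifier().fit(X_train, y_train)

def ml_suggestion(income: float, debt: float) -> str:
    return model.predict([[income, debt]])[0]

# Either suggestion may be correct or incorrect; in the experiment the
# human decision maker must judge whether to follow or override it.
case = (25000, 14000)
print("BR suggests:", business_rule_suggestion(*case))
print("ML suggests:", ml_suggestion(*case))
```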




Updated: 2020-12-28