The effects of explainability and causability on perception, trust, and acceptance: Implications for explainable AI
International Journal of Human-Computer Studies (IF 5.3) Pub Date: 2020-10-16, DOI: 10.1016/j.ijhcs.2020.102551
Donghee Shin

Artificial intelligence and algorithmic decision-making processes are increasingly criticized for their black-box nature. Explainable AI approaches that trace human-interpretable decision processes from algorithms have been explored, yet little is known about algorithmic explainability from a human factors perspective. From the perspective of user interpretability and understandability, this study examines the effect of explainability in AI on user trust and attitudes toward AI. It conceptualizes causability as an antecedent of explainability and as a key cue of an algorithm, and examines both in relation to trust by testing how they affect users' perceived performance of AI-driven services. The results show the dual roles of causability and explainability in terms of their underlying links to trust and subsequent user behaviors. Explanations of why certain news articles are recommended generate users' trust, whereas causability, the extent to which users can understand those explanations, affords them emotional confidence. Causability lends justification for what should be explained and how, as it determines the relative importance of the properties of explainability. These results have implications for the inclusion of causability and explanatory cues in AI systems, which help to increase trust and help users assess the quality of explanations. Causable, explainable AI will help people understand the decision-making process of AI algorithms by bringing transparency and accountability into AI systems.
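To make the hypothesized structure concrete, the sketch below illustrates how the path model implied by the abstract (causability as an antecedent of explainability, both cues feeding trust, and trust driving acceptance of the AI-driven service) could be probed with simple regressions. This is a minimal illustration only, not the paper's actual analysis: the variable names, sample size, path coefficients, and data are all hypothetical, standing in for survey composite scores.

# Illustrative sketch of the hypothesized paths:
#   causability -> explainability -> trust -> acceptance
# Synthetic data and all coefficients are assumptions for demonstration,
# not the study's dataset or results.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300  # hypothetical sample size

# Hypothetical composite scores (e.g., averaged Likert-scale items per construct).
causability = rng.normal(4, 1, n)
explainability = 0.6 * causability + rng.normal(0, 1, n)            # assumed antecedent link
trust = 0.5 * explainability + 0.2 * causability + rng.normal(0, 1, n)
acceptance = 0.7 * trust + rng.normal(0, 1, n)

df = pd.DataFrame({
    "causability": causability,
    "explainability": explainability,
    "trust": trust,
    "acceptance": acceptance,
})

# Path 1: causability as an antecedent of explainability.
print(smf.ols("explainability ~ causability", df).fit().params)
# Path 2: both cues predicting trust.
print(smf.ols("trust ~ explainability + causability", df).fit().params)
# Path 3: trust predicting acceptance of the AI-driven service.
print(smf.ols("acceptance ~ trust", df).fit().params)

In a full analysis these paths would typically be estimated jointly in a structural equation model rather than as separate regressions; the snippet only shows the direction of each hypothesized link.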




Updated: 2020-11-02