Explainable recommendation: when design meets trust calibration
World Wide Web (IF 3.7) Pub Date: 2021-08-02, DOI: 10.1007/s11280-021-00916-0
Mohammad Naiseh, Dena Al-Thani, Nan Jiang, Raian Ali

Human-AI collaborative decision-making tools are increasingly being applied in critical domains such as healthcare. However, these tools are often seen as closed and opaque to human decision-makers. An essential requirement for their success is the ability to provide explanations about themselves that are understandable and meaningful to users. While explanations generally carry positive connotations, studies have shown that the assumption that users will interact and engage with these explanations can introduce trust calibration errors, such as facilitating irrational or less thoughtful agreement or disagreement with the AI recommendation. In this paper, we explore how to support trust calibration through explanation interaction design. Our research method included two main phases. We first conducted a think-aloud study with 16 participants to reveal the main trust calibration errors related to explainability in human-AI collaborative decision-making tools. We then conducted two co-design sessions with eight participants to identify design principles and techniques for explanations that support trust calibration. As a conclusion of our research, we provide five design principles: design for engagement, challenging habitual actions, attention guidance, friction, and support for training and learning. Our findings are meant to pave the way towards a more integrated framework for designing explanations with trust calibration as a primary goal.



Updated: 2021-08-03