“Let me explain!”: exploring the potential of virtual agents in explainable AI interaction design
Journal on Multimodal User Interfaces ( IF 2.2 ) Pub Date : 2020-07-09 , DOI: 10.1007/s12193-020-00332-0
Katharina Weitz , Dominik Schiller , Ruben Schlagowski , Tobias Huber , Elisabeth André

While the research area of artificial intelligence has benefited from increasingly sophisticated machine learning techniques in recent years, the resulting systems suffer from a loss of transparency and comprehensibility, especially for end-users. In this paper, we explore how incorporating virtual agents into explainable artificial intelligence (XAI) designs affects the perceived trust of end-users. For this purpose, we conducted a user study based on a simple speech recognition system for keyword classification. This experiment showed that integrating virtual agents increases user trust in the XAI system. Furthermore, we found that the user's trust depends significantly on the modalities used within the user-agent interface design. Our results show a linear trend in which the visual presence of an agent combined with voice output produced greater trust than text output or voice output alone. Additionally, we analysed the participants' feedback on the presented XAI visualisations and found that increased human-likeness of, and interaction with, the virtual agent were the two most commonly mentioned suggestions for improving the proposed XAI interaction design. Based on these results, we discuss current limitations and promising directions for further research in the field of XAI, and we present design recommendations for virtual agents in future XAI systems.



