A tale of two explanations: Enhancing human trust by explaining robot behavior
Science Robotics (IF 26.1), Pub Date: 2019-12-18, DOI: 10.1126/scirobotics.aay4663
Mark Edmonds, Feng Gao, Hangxin Liu, Xu Xie, Siyuan Qi, Brandon Rothrock, Yixin Zhu, Ying Nian Wu, Hongjing Lu, Song-Chun Zhu
The ability to provide comprehensive explanations of chosen actions is a hallmark of intelligence. Lack of this ability impedes the general acceptance of AI and robot systems in critical tasks. This paper examines what forms of explanations best foster human trust in machines and proposes a framework in which explanations are generated from both functional and mechanistic perspectives. The robot system learns from human demonstrations to open medicine bottles using (i) an embodied haptic prediction model to extract knowledge from sensory feedback, (ii) a stochastic grammar model induced to capture the compositional structure of a multistep task, and (iii) an improved Earley parsing algorithm to jointly leverage both the haptic and grammar models. The robot system not only shows the ability to learn from human demonstrators but also succeeds in opening new, unseen bottles. Using different forms of explanations generated by the robot system, we conducted a psychological experiment to examine what forms of explanations best foster human trust in the robot. We found that comprehensive and real-time visualizations of the robot’s internal decisions were more effective in promoting human trust than explanations based on summary text descriptions. In addition, forms of explanation that are best suited to foster trust do not necessarily correspond to the model components contributing to the best task performance. This divergence shows a need for the robotics community to integrate model components to enhance both task execution and human trust in machines.
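To make item (iii) concrete, the sketch below illustrates the general idea of jointly leveraging a stochastic grammar and a haptic model: an Earley-style parser tracks the grammar probability of every valid continuation of the actions executed so far, and that prefix probability is combined with the haptic model's predicted success likelihood to rank the robot's next action. This is a minimal illustration, not the authors' implementation; every grammar rule, probability, and haptic value here is invented for the example.

```python
# Illustrative sketch only: all rules, priors, and haptic values are
# invented. A hypothetical stochastic grammar over bottle-opening action
# sequences, written out as (sequence, prior) pairs; the paper induces
# such a grammar from human demonstrations.
GRAMMAR = {
    ("grasp", "twist", "pull"): 0.4,           # ordinary cap
    ("grasp", "push", "twist", "pull"): 0.5,   # push-down safety cap
    ("grasp", "pinch", "twist", "pull"): 0.1,  # pinch safety cap
}

# Hypothetical haptic model output: likelihood that each primitive action
# succeeds given the current force/pose feedback.
HAPTIC_LIKELIHOOD = {"grasp": 0.9, "push": 0.8, "pinch": 0.3,
                     "twist": 0.7, "pull": 0.6}

def prefix_probability(prefix):
    """Total grammar mass of all sequences beginning with `prefix`
    (the quantity an Earley-style parser maintains incrementally)."""
    return sum(p for seq, p in GRAMMAR.items()
               if seq[:len(prefix)] == tuple(prefix))

def rank_next_actions(history):
    """Score each grammatically valid next action by grammar prefix
    probability times haptic success likelihood."""
    scores = {}
    for seq in GRAMMAR:
        if seq[:len(history)] == tuple(history) and len(seq) > len(history):
            action = seq[len(history)]
            scores[action] = (prefix_probability(list(history) + [action])
                              * HAPTIC_LIKELIHOOD[action])
    return sorted(scores.items(), key=lambda kv: -kv[1])

if __name__ == "__main__":
    # After "grasp", the grammar alone prefers "push"; the haptic term can
    # reorder candidates when an action feels unlikely to succeed.
    print(rank_next_actions(["grasp"]))
    # -> push first (~0.40), then twist (~0.28), then pinch (~0.03)
```

A real Earley parser computes these prefix probabilities incrementally from production rules rather than by enumerating whole sequences; the enumeration here only keeps the sketch short.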
