Multi-task trust transfer for human–robot interaction
The International Journal of Robotics Research (IF 7.5), Pub Date: 2019-08-19, DOI: 10.1177/0278364919866905
Harold Soh, Yaqi Xie, Min Chen, David Hsu
Trust is essential in shaping human interactions with one another and with robots. In this article we investigate how human trust in robot capabilities transfers across multiple tasks. We present a human-subject study of two distinct task domains: a Fetch robot performing household tasks and a virtual reality simulation of an autonomous vehicle performing driving and parking maneuvers. The findings expand our understanding of trust and provide new predictive models of trust evolution and transfer via latent task representations: a rational Bayes model, a data-driven neural network model, and a hybrid model that combines the two. Experiments show that the proposed models outperform prevailing models when predicting trust over unseen tasks and users. These results suggest that (i) task-dependent functional trust models capture human trust in robot capabilities more accurately and (ii) trust transfer across tasks can be inferred to a good degree. The latter enables trust-mediated robot decision-making for fluent human–robot interaction in multi-task settings.
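The abstract's central idea, that trust in one task transfers to other tasks through latent task representations, can be illustrated with a toy sketch. The embeddings, task names, and Beta-distribution trust update below are illustrative assumptions for exposition, not the paper's actual models: observations on one task update trust in every task, weighted by task-embedding similarity.

```python
import numpy as np

# Hypothetical latent task embeddings (2-D for illustration);
# in the paper such representations are learned, here they are hand-picked.
tasks = {
    "pick_bottle": np.array([0.9, 0.1]),
    "pick_can":    np.array([0.85, 0.15]),
    "navigate":    np.array([0.1, 0.9]),
}

def similarity(z_a, z_b, length_scale=0.5):
    """RBF kernel similarity between two task embeddings (1.0 for identical tasks)."""
    return float(np.exp(-np.sum((z_a - z_b) ** 2) / (2 * length_scale ** 2)))

class TrustModel:
    """Toy Bayesian trust: per-task Beta(alpha, beta) pseudo-counts, with each
    observation transferred to all tasks in proportion to embedding similarity."""

    def __init__(self, tasks):
        self.tasks = tasks
        self.alpha = {t: 1.0 for t in tasks}  # pseudo-successes (uniform prior)
        self.beta = {t: 1.0 for t in tasks}   # pseudo-failures

    def observe(self, task, success):
        z = self.tasks[task]
        for t, z_t in self.tasks.items():
            w = similarity(z, z_t)  # full weight for the observed task itself
            if success:
                self.alpha[t] += w
            else:
                self.beta[t] += w

    def trust(self, task):
        """Posterior mean probability that the robot succeeds at `task`."""
        return self.alpha[task] / (self.alpha[task] + self.beta[task])

model = TrustModel(tasks)
for _ in range(3):
    model.observe("pick_bottle", success=True)

# Successes at "pick_bottle" raise trust in the similar task "pick_can"
# far more than in the dissimilar task "navigate".
print(model.trust("pick_can") > model.trust("navigate"))  # → True
```

This mirrors the rational-Bayes flavor of the paper's models; the data-driven variant would instead learn the update function and embeddings from human trust ratings.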
