Trust repair in human-agent teams: the effectiveness of explanations and expressing regret
Autonomous Agents and Multi-Agent Systems (IF 2.0). Pub Date: 2021-06-18. DOI: 10.1007/s10458-021-09515-9
E. S. Kox, J. H. Kerstholt, T. F. Hueting, P. W. de Vries

The role of intelligent agents becomes more social as they are expected to act in direct interaction, involvement and/or interdependency with humans and other artificial entities, as in Human-Agent Teams (HAT). The highly interdependent and dynamic nature of teamwork demands correctly calibrated trust among team members. Trust violations are an inevitable aspect of the cycle of trust, and since repairing damaged trust proves to be more difficult than building trust initially, effective trust repair strategies are needed to ensure durable and successful team performance. The aim of this study was to explore the effectiveness of different trust repair strategies from an intelligent agent by measuring the development of human trust and advice taking in a Human-Agent Teaming task. Data for this study were obtained using a task environment resembling a first-person shooter game. Participants carried out a mission in collaboration with their artificial team member. A trust violation was provoked when the agent failed to detect an approaching enemy. After this, the agent offered one of four trust repair strategies, composed of the apology components explanation and expression of regret (either one alone, both, or neither). Our results indicated that expressing regret was crucial for effective trust repair. After trust declined due to the violation by the agent, trust only recovered significantly when an expression of regret was included in the apology. This effect was stronger when an explanation was added. In this context, the intelligent agent was most effective at rebuilding trust when it provided an apology that was both affective and informational. Finally, the implications of our findings for the design and study of Human-Agent trust repair are discussed.
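The four repair strategies form a 2×2 combination of the two apology components, explanation and expression of regret. As a minimal, hypothetical sketch of that factorial design (not material from the study), the snippet below enumerates the four conditions and assembles an example repair message for each; the message strings, the compose_apology helper, and its parameters are illustrative assumptions.

```python
# Hypothetical sketch of the 2x2 apology design described in the abstract:
# the agent's repair message contains an explanation, an expression of
# regret, both, or neither. All wording here is invented for illustration.
from itertools import product

EXPLANATION = "My sensors were obstructed, so I did not detect the enemy in time."
REGRET = "I am sorry that I failed to warn you."

def compose_apology(explanation: bool, regret: bool) -> str:
    """Assemble the agent's trust-repair message for one experimental condition."""
    parts = []
    if regret:
        parts.append(REGRET)       # affective component
    if explanation:
        parts.append(EXPLANATION)  # informational component
    return " ".join(parts) if parts else "The enemy was not detected."

# Enumerate the four conditions: neither, regret only, explanation only, both.
for explanation, regret in product([False, True], repeat=2):
    print(f"explanation={explanation}, regret={regret}: {compose_apology(explanation, regret)}")
```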



Updated: 2021-06-18