The effect of social-cognitive recovery strategies on likability, capability and trust in social robots
Computers in Human Behavior (IF 9.0). Pub Date: 2021-01-01. DOI: 10.1016/j.chb.2020.106561
David Cameron , Stevienna de Saille , Emily C. Collins , Jonathan M. Aitken , Hugo Cheung , Adriel Chua , Ee Jing Loh , James Law

Abstract: As robots become more prevalent, particularly in complex public and domestic settings, they will be increasingly challenged by dynamic situations that could result in performance errors. Such errors can have a harmful impact on a user’s trust and confidence in the technology, potentially reducing use and preventing full realization of its benefits. A potential countermeasure, based on social psychological concepts of trust, is for robots to demonstrate self-awareness and ownership of their mistakes to mitigate the impact of errors and increase users’ affinity towards the robot. We describe an experiment examining 326 people’s perceptions of a mobile guide robot that employs synthetic social behaviours to elicit trust in its use after error. We find that a robot that identifies its mistake, and communicates its intention to rectify the situation, is considered by observers to be more capable than one that simply apologizes for its mistake. However, the latter is considered more likeable and, uniquely, increases people’s intention to use the robot. These outcomes highlight that the complex and multifaceted nature of trust in human–robot interaction may extend beyond established approaches considering robots’ capability in performance and indicate that social cognitive models are valuable in developing trustworthy synthetic social agents.

Updated: 2021-01-01