Robot Likeability and Reciprocity in Human Robot Interaction: Using the Ultimatum Game to Determine Reciprocal Likeable Robot Strategies
International Journal of Social Robotics ( IF 4.7 ) Pub Date : 2020-06-19 , DOI: 10.1007/s12369-020-00658-5
Eduardo Benítez Sandoval , Jürgen Brandstatter , Utku Yalcin , Christoph Bartneck

Among the factors that affect likeability, a reciprocal response toward the other party is one of the multiple variables involved in social interaction. In HRI, however, likeability is largely constrained to robot behavior, since mass-produced robots share identical physical embodiments. A reciprocal robot response is therefore desirable when designing robots as likeable agents for humans. In this paper, we discuss how perceived likeability in robots is a crucial multi-factorial phenomenon that strongly influences interactions based on reciprocal robot decisions. Our general research question is: What type of reciprocal robot behavior do humans perceive as likeable when the robot’s decisions affect them? We designed a between/within \(2 \times 2 \times 2\) experiment in which each participant plays our novel Alternated Repeated Ultimatum Game (ARUG) for 20 rounds. The robot used in the experiment is a NAO robot using four different reciprocal strategies. Our results suggest that participants tend to reciprocate more toward the robot that starts the game and uses the pure reciprocal strategy than toward robots using the other combined strategies (Tit for Tat, Inverse Tit for Tat, Reciprocal Offer, and Inverse Reciprocal Offer). These results confirm that the Norm of Reciprocity applies in HRI when participants play ARUG with social robots. However, the human reciprocal response also depends on the profits gained in the game and on who starts the interaction. Similarly, the likeability score is affected by robot strategies such as the reciprocal one (Robot A) and the generous one (Robot C), and there are some discrepancies in likeability score between the reciprocal and the generous robot behaviors.
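To make the ARUG setup concrete, the following is a minimal simulation sketch of an alternated repeated ultimatum game with a Tit-for-Tat-style pure reciprocal robot. The pie size, the "accept any nonzero offer" rule, and the exact strategy definition are illustrative assumptions for this sketch, not parameters taken from the paper.

```python
# Hypothetical sketch of the Alternated Repeated Ultimatum Game (ARUG):
# proposer roles alternate each round, and the robot mirrors the human's
# previous offer (a Tit-for-Tat-style pure reciprocal strategy).
# Pie size and acceptance rule are simplifying assumptions.

PIE = 10  # assumed number of units to split each round


def tit_for_tat_offer(last_human_offer, default=5):
    """Robot mirrors the human's previous offer; opens with an even split."""
    return last_human_offer if last_human_offer is not None else default


def play_arug(human_offers, rounds=20, robot_starts=True):
    """Play `rounds` alternating ultimatum rounds; return per-player totals.

    human_offers: offers the human makes when proposing (cycled if short).
    Responders here accept any nonzero offer (simplifying assumption).
    """
    totals = {"robot": 0, "human": 0}
    last_human_offer = None
    human_idx = 0
    for r in range(rounds):
        robot_proposes = (r % 2 == 0) == robot_starts
        if robot_proposes:
            offer = tit_for_tat_offer(last_human_offer)
            if offer > 0:  # human accepts any nonzero offer
                totals["human"] += offer
                totals["robot"] += PIE - offer
        else:
            offer = human_offers[human_idx % len(human_offers)]
            human_idx += 1
            last_human_offer = offer
            if offer > 0:  # robot accepts any nonzero offer
                totals["robot"] += offer
                totals["human"] += PIE - offer
    return totals
```

For example, a human who always offers 3 of 10 units against a robot that starts the game ends up with the robot mirroring that 3-unit offer from its second proposal onward, so profits diverge only through the opening even split.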



