Learning Human-like Hand Reaching for Human-Robot Handshaking
arXiv - CS - Human-Computer Interaction, Pub Date: 2021-02-28, DOI: arxiv-2103.00616
Vignesh Prasad, Ruth Stock-Homburg, Jan Peters

One of the first and foremost non-verbal interactions that humans perform is a handshake. It has an impact on first impressions, as touch can convey complex emotions. This makes handshaking an important skill for the repertoire of a social robot. In this paper, we present a novel framework for learning human-robot handshaking behaviours for humanoid robots solely from third-person human-human interaction data. This is especially useful for non-backdrivable robots that cannot be taught by demonstrations via kinesthetic teaching. Our approach can be easily executed on different humanoid robots. This removes the need for re-training, which is especially tedious when training with human interaction partners. We show this by applying the learnt behaviours on two different humanoid robots with similar degrees of freedom but different shapes and control limits.
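The abstract's claim that the same learnt behaviour runs on humanoids with similar degrees of freedom but different control limits suggests a per-robot adaptation step. As a minimal sketch of what such a step could look like (the function, robot names, and limit values below are hypothetical illustrations, not the paper's method), one might clip a learned joint-space reaching trajectory to each robot's position and velocity limits before execution:

```python
import numpy as np

# Hypothetical per-robot limits for a 4-DoF arm; names and values are
# illustrative only and do not come from the paper.
ROBOT_LIMITS = {
    "robot_a": {"q_min": np.deg2rad([-170, -30, -170, -120]),
                "q_max": np.deg2rad([170, 135, 170, 120]),
                "qd_max": np.full(4, 1.5)},   # rad/s
    "robot_b": {"q_min": np.deg2rad([-160, -45, -160, -110]),
                "q_max": np.deg2rad([160, 120, 160, 110]),
                "qd_max": np.full(4, 1.0)},   # slower, more conservative robot
}

def retarget_trajectory(q_traj, dt, robot):
    """Adapt a learned joint-space trajectory (T x DoF array) to one
    robot's position and velocity limits, so the same learnt reaching
    behaviour can run on differently-limited humanoids without re-training."""
    lim = ROBOT_LIMITS[robot]
    q = np.clip(np.asarray(q_traj, dtype=float), lim["q_min"], lim["q_max"])
    # Enforce velocity limits by rescaling each per-step displacement
    # so that |dq/dt| never exceeds the robot's maximum joint speed.
    for t in range(1, len(q)):
        step = q[t] - q[t - 1]
        scale = np.minimum(1.0, lim["qd_max"] * dt / (np.abs(step) + 1e-9))
        q[t] = q[t - 1] + step * scale
    return q

# Example: run the same learned trajectory on both robots.
# q_a = retarget_trajectory(learned_traj, dt=0.01, robot="robot_a")
# q_b = retarget_trajectory(learned_traj, dt=0.01, robot="robot_b")
```

In this sketch, transfer across embodiments reduces to clipping and velocity rescaling; the paper's actual framework may handle differences in shape and control limits differently.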

Updated: 2021-03-02