Establishing Human-Robot Trust through Music-Driven Robotic Emotion Prosody and Gesture
arXiv - CS - Robotics. Pub Date: 2020-01-11, DOI: arxiv-2001.05863
Richard Savery, Ryan Rose, Gil Weinberg

As human-robot collaboration opportunities continue to expand, trust becomes ever more important for the full engagement and utilization of robots. Affective trust, built on emotional relationships and interpersonal bonds, is particularly critical because it is more resilient to mistakes and increases the willingness to collaborate. In this paper we present a novel model built on music-driven emotional prosody and gestures that encourages the perception of a robotic identity, designed to avoid the uncanny valley. Symbolic musical phrases were generated and tagged with emotional information by human musicians. These phrases controlled a synthesis engine playing back pre-rendered audio samples generated through interpolation of phonemes and electronic instruments. Gestures were also driven by the symbolic phrases, encoding the emotion from the musical phrase as low degree-of-freedom movements. Through a user study we showed that our system was able to accurately portray a range of emotions to the user. We also showed, with a statistically significant result, that our non-linguistic audio generation achieved an 8% higher mean trust rating than a state-of-the-art text-to-speech system.
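The abstract describes a pipeline in which an emotion-tagged symbolic phrase drives both audio-sample playback and low degree-of-freedom gestures. The following is a minimal sketch of how such a mapping could be wired up; it is not the authors' implementation, and all class names, emotion labels, file paths, and numeric mappings are hypothetical illustrations.

```python
# Hypothetical sketch: an emotion-tagged symbolic phrase drives both
# (a) selection of a pre-rendered audio sample bank and
# (b) a low degree-of-freedom gesture trajectory.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SymbolicPhrase:
    """A short musical phrase tagged with an emotion label by a musician."""
    pitches: List[int]       # MIDI note numbers
    durations: List[float]   # note lengths in beats
    emotion: str             # e.g. "joy", "sadness", "anger", "calm"

# Assumed lookup from emotion tag to a pre-rendered sample file (placeholder paths).
SAMPLE_BANK = {
    "joy": "samples/phoneme_synth_joy.wav",
    "sadness": "samples/phoneme_synth_sad.wav",
    "anger": "samples/phoneme_synth_anger.wav",
    "calm": "samples/phoneme_synth_calm.wav",
}

# Assumed gesture profile per emotion: (joint amplitude in radians, speed scale).
GESTURE_PROFILE = {
    "joy": (0.6, 1.4),
    "sadness": (0.2, 0.5),
    "anger": (0.8, 1.8),
    "calm": (0.3, 0.7),
}

def phrase_to_audio(phrase: SymbolicPhrase) -> str:
    """Pick the pre-rendered sample bank matching the phrase's emotion tag."""
    return SAMPLE_BANK[phrase.emotion]

def phrase_to_gesture(phrase: SymbolicPhrase,
                      n_joints: int = 2) -> List[Tuple[List[float], float]]:
    """Map each note to target joint angles and a dwell time for a low-DOF robot."""
    amplitude, speed = GESTURE_PROFILE[phrase.emotion]
    trajectory = []
    for pitch, dur in zip(phrase.pitches, phrase.durations):
        angle = amplitude * ((pitch - 60) / 24.0)  # higher pitch -> larger excursion
        dwell = dur / speed                        # "faster" emotions move more quickly
        trajectory.append(([angle] * n_joints, dwell))
    return trajectory

if __name__ == "__main__":
    phrase = SymbolicPhrase(pitches=[60, 64, 67, 72],
                            durations=[0.5, 0.5, 1.0, 2.0],
                            emotion="joy")
    print("audio sample:", phrase_to_audio(phrase))
    print("gesture trajectory:", phrase_to_gesture(phrase))
```

In this sketch the emotion tag alone selects the sample bank, while the symbolic note content shapes the gesture; the paper's actual mapping from phrase to prosody and movement is not specified in the abstract.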

Updated: 2020-01-17