Toward Automated Generation of Affective Gestures from Text: A Theory-Driven Approach
arXiv - CS - Robotics, Pub Date: 2021-03-04, DOI: arxiv-2103.03079
Micol Spitale, Maja J Matarić

Communication in both human-human and human-robot interaction (HRI) contexts consists of verbal (speech-based) and non-verbal (facial expressions, eye gaze, gesture, body pose, etc.) components. The verbal component contains semantic and affective information; accordingly, HRI work on the gesture component so far has focused on rule-based (mapping words to gestures) and data-driven (deep-learning) approaches to generating speech-paired gestures based on either semantics or the affective state. Consequently, most gesture systems are confined to producing either semantically-linked or affect-based gestures. This paper introduces a theory-driven approach to human-robot communication that generates speech-paired robot gestures using both semantic and affective information. Our model takes text and its sentiment analysis as input, and generates robot gestures in terms of their shape, intensity, and speed.
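The pipeline the abstract describes (text plus sentiment in, gesture shape/intensity/speed out) can be sketched as a toy example. This is not the authors' actual model: the function name, the shape rule, and the scaling thresholds are all illustrative assumptions, and the sentiment score is assumed to be a polarity value in [-1.0, 1.0] from an upstream sentiment analyzer.

```python
# Illustrative sketch (NOT the paper's model): map a sentence and a
# precomputed sentiment polarity score in [-1.0, 1.0] to hypothetical
# gesture parameters (shape, intensity, speed).

def gesture_from_sentiment(text: str, sentiment: float) -> dict:
    """Pick gesture parameters from a semantic cue and sentiment polarity."""
    # Shape: a hypothetical semantic rule — questions get an open-palm
    # gesture, other utterances a rhythmic "beat" gesture.
    shape = "open_palm" if text.rstrip().endswith("?") else "beat"
    # Intensity and speed scale with the magnitude of the affect:
    # stronger sentiment (positive or negative) yields larger, faster motion.
    intensity = min(1.0, abs(sentiment))   # normalized gesture amplitude
    speed = 0.5 + 0.5 * abs(sentiment)     # baseline speed plus affect boost
    return {
        "shape": shape,
        "intensity": round(intensity, 2),
        "speed": round(speed, 2),
    }
```

For example, a strongly positive sentence would be assigned a fast, high-amplitude beat gesture, while a neutral question would get a slow open-palm gesture — mirroring the abstract's claim that both semantic and affective information shape the output.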

Updated: 2021-03-05