Text2Gestures: A Transformer-Based Network for Generating Emotive Body Gestures for Virtual Agents
arXiv - CS - Graphics. Pub Date: 2021-01-26, arXiv: 2101.11101
Uttaran Bhattacharya, Nicholas Rewkowski, Abhishek Banerjee, Pooja Guhan, Aniket Bera, Dinesh Manocha

We present Text2Gestures, a transformer-based learning method to interactively generate emotive full-body gestures for virtual agents aligned with natural language text inputs. Our method generates emotionally expressive gestures by utilizing the relevant biomechanical features for body expressions, also known as affective features. We also consider the intended task corresponding to the text and the target virtual agents' intended gender and handedness in our generation pipeline. We train and evaluate our network on the MPI Emotional Body Expressions Database and observe that our network produces state-of-the-art performance in generating gestures for virtual agents aligned with the text for narration or conversation. Our network can generate these gestures at interactive rates on a commodity GPU. We conduct a web-based user study and observe that around 91% of participants indicated our generated gestures to be at least plausible on a five-point Likert Scale. The emotions perceived by the participants from the gestures are also strongly positively correlated with the corresponding intended emotions, with a minimum Pearson coefficient of 0.77 in the valence dimension.
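The paper's own implementation is not reproduced here. As a rough illustration of the idea described in the abstract, the sketch below shows a generic transformer encoder-decoder that conditions gesture-pose prediction on text tokens plus an agent-attribute vector standing in for emotion, task, gender, and handedness. All class names, dimensions, joint counts, and the single conditioning vector are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumptions, not the authors' architecture): a transformer
# encoder-decoder that maps a text-token sequence plus an agent-attribute
# conditioning vector to a sequence of body poses, decoded autoregressively.
import torch
import torch.nn as nn


class Text2GestureSketch(nn.Module):
    def __init__(self, vocab_size=10000, d_model=256, n_joints=23,
                 cond_dim=8, nhead=4, num_layers=4):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.cond_proj = nn.Linear(cond_dim, d_model)       # agent attributes -> model dim
        self.pose_proj = nn.Linear(n_joints * 3, d_model)   # previous poses -> model dim
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            batch_first=True)
        self.out = nn.Linear(d_model, n_joints * 3)         # predict 3D joint positions

    def forward(self, tokens, cond, prev_poses):
        # tokens: (B, T_text) int64; cond: (B, cond_dim); prev_poses: (B, T_pose, n_joints*3)
        src = self.token_emb(tokens) + self.cond_proj(cond).unsqueeze(1)
        tgt = self.pose_proj(prev_poses)
        # Causal mask so each predicted pose only attends to earlier poses.
        causal = self.transformer.generate_square_subsequent_mask(tgt.size(1))
        h = self.transformer(src, tgt, tgt_mask=causal)
        return self.out(h)                                  # (B, T_pose, n_joints*3)


if __name__ == "__main__":
    model = Text2GestureSketch()
    tokens = torch.randint(0, 10000, (2, 12))   # dummy text tokens
    cond = torch.randn(2, 8)                    # dummy emotion/task/gender/handedness vector
    prev = torch.randn(2, 30, 23 * 3)           # dummy pose history
    print(model(tokens, cond, prev).shape)      # torch.Size([2, 30, 69])
```

In this kind of setup, interactive-rate generation comes from running the decoder step by step on a GPU and feeding each predicted pose back in as the next input; the sketch above only shows the teacher-forced forward pass used during training.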

Updated: 2021-01-28