A generic framework for editing and synthesizing multimodal data with relative emotion strength
Computer Animation and Virtual Worlds (IF 1.1) Pub Date: 2019-02-04, DOI: 10.1002/cav.1871
Jacky C. P. Chan 1, Hubert P. H. Shum 2, He Wang 3, Li Yi 4, Wei Wei 5, Edmond S. L. Ho 2

Emotion is considered to be a core element in performances. In computer animation, body motions and facial expressions are two popular media through which a character expresses emotion. However, there has been limited research on how to effectively synthesize these two types of character movement with intuitive control over the level of emotion strength, which is difficult to model effectively. In this work, we explore a common model that can be used to represent emotion in both body motion and facial expression synthesis. Unlike previous work that encodes emotions as discrete motion style descriptors, we propose a continuous control indicator called emotion strength, and present a data-driven approach that synthesizes motions with fine control over emotion by adjusting this indicator. Rather than interpolating motion features to synthesize new motion as in existing work, our method explicitly learns a model that maps low-level motion features to the emotion strength. Because the motion synthesis model is learned in the training stage, the computation time required for synthesizing motions at run time is very low. We further demonstrate the generality of the proposed framework by editing 2D face images using relative emotion strength. As a result, our method can be applied to interactive applications such as computer games, image editing tools, and virtual reality applications, as well as offline applications such as animation and movie production.
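The low run-time cost comes from moving all learning to an offline training stage. The Python sketch below illustrates this training-time/run-time split under simple assumptions: it is not the authors' implementation, and the synthetic feature data, the ridge regressors, and the linear synthesis step are all placeholders chosen for the example.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Synthetic stand-in for training data: each row is a low-level motion
# feature vector (e.g., joint angles or facial landmark offsets), and
# each label is an annotated emotion strength in [0, 1].
n_samples, n_features = 200, 30
neutral_pose = rng.normal(size=n_features)
emotion_direction = rng.normal(size=n_features)   # assumed latent direction
strengths = rng.uniform(0.0, 1.0, size=n_samples)
X_train = (neutral_pose
           + strengths[:, None] * emotion_direction
           + 0.05 * rng.normal(size=(n_samples, n_features)))

# Training stage: learn a mapping from motion features to emotion strength.
strength_model = Ridge(alpha=1.0).fit(X_train, strengths)

# Training stage: learn how the features displace from the neutral pose
# as the strength increases, so new motions can be synthesized directly.
displacement_model = Ridge(alpha=1.0).fit(strengths[:, None],
                                          X_train - neutral_pose)

def synthesize(target_strength: float) -> np.ndarray:
    """Return a motion feature vector expressing the requested strength."""
    return neutral_pose + displacement_model.predict([[target_strength]])[0]

# Run time: synthesis is a single prediction, cheap enough for interactive
# use; the learned strength model can read back the achieved strength.
for s in (0.25, 0.5, 0.9):
    features = synthesize(s)
    readback = strength_model.predict(features[None, :])[0]
    print(f"requested strength {s:.2f} -> estimated strength {readback:.2f}")
```

The same pattern would apply whether the feature vectors come from skeletal motion or from 2D facial landmarks; only the feature extraction step changes, which is what makes a single continuous strength indicator usable across both modalities.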

Updated: 2019-02-04