Example-Based Facial Animation of Virtual Reality Avatars Using Auto-Regressive Neural Networks
IEEE Computer Graphics and Applications (IF 1.7). Pub Date: 2021-03-23, DOI: 10.1109/mcg.2021.3068035
Wolfgang Paier, Anna Hilsmann, Peter Eisert

This article presents a hybrid animation approach that combines example-based and neural animation methods into a simple yet powerful animation regime for human faces. Example-based methods usually employ a database of prerecorded sequences that are concatenated or looped to synthesize novel animations. In contrast to this traditional example-based approach, we introduce a lightweight auto-regressive network that transforms our animation database into a parametric model. During training, the network learns the dynamics of facial expressions, which enables the replay of annotated sequences from our animation database as well as their seamless concatenation in a new order. This representation is especially useful for the synthesis of visual speech, where coarticulation creates interdependencies between adjacent visemes that affect their appearance. Instead of creating an exhaustive database that contains all viseme variants, we use our animation network to predict the correct appearance. This allows realistic, example-based synthesis of novel facial animation sequences such as visual speech, as well as of general facial expressions.
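The abstract only outlines the idea of an auto-regressive network that learns expression dynamics from an animation database. The sketch below illustrates that general idea: a small network that predicts the next frame of facial-expression parameters from the previous frame and a per-frame annotation code, and that rolls out predictions auto-regressively to replay or concatenate sequences. It is a minimal illustration under assumed names and dimensions (PyTorch, `expr_dim`, `label_dim`, the residual formulation), not the authors' actual model.

```python
# Minimal sketch of an auto-regressive facial-animation predictor.
# All architecture choices, names, and dimensions are assumptions for illustration.
import torch
import torch.nn as nn

class AutoRegressiveFaceModel(nn.Module):
    def __init__(self, expr_dim=64, label_dim=16, hidden_dim=256):
        super().__init__()
        # Maps (previous expression parameters, sequence annotation) -> next parameters.
        self.net = nn.Sequential(
            nn.Linear(expr_dim + label_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, expr_dim),
        )

    def forward(self, prev_expr, label):
        # Residual prediction keeps each frame close to its predecessor,
        # which favors temporally smooth motion.
        return prev_expr + self.net(torch.cat([prev_expr, label], dim=-1))

    @torch.no_grad()
    def rollout(self, start_expr, labels):
        # Auto-regressive replay: feed each prediction back as the next input,
        # conditioned on the per-frame annotation codes of the desired sequence.
        frames, expr = [], start_expr
        for label in labels:
            expr = self(expr, label)
            frames.append(expr)
        return torch.stack(frames)
```

Training such a model would regress each captured frame from its predecessor and annotation; concatenating sequences in a new order then amounts to chaining rollouts with different annotation codes, with the learned dynamics smoothing the transitions (e.g., coarticulation between adjacent visemes).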

Updated: 2021-03-23