Action Unit Driven Facial Expression Synthesis from a Single Image with Patch Attentive GAN
Computer Graphics Forum (IF 2.5), Pub Date: 2021-03-17, DOI: 10.1111/cgf.14202
Yong Zhao 1,2, Le Yang 1, Ercheng Pei 1, Meshia Cédric Oveneke 2,3, Mitchel Alioscha-Perez 2, Longfei Li 4, Dongmei Jiang 1,5, Hichem Sahli 2,6

Recent advances in generative adversarial networks (GANs) have shown tremendous success on facial expression generation tasks. However, generating vivid and expressive facial expressions at the Action Unit (AU) level remains challenging, because automatic AU intensity estimation is itself an unsolved problem. In this paper, we propose a novel synthesis-by-analysis approach that leverages the GAN framework together with a state-of-the-art AU detection model to improve AU-driven facial expression generation. Specifically, we design a novel discriminator architecture by adapting a patch-attentive AU detection network for AU intensity estimation and combining it with a global image encoder for adversarial learning, forcing the generator to produce more expressive and realistic facial images. We also introduce a balanced sampling strategy to alleviate the imbalanced learning problem in AU synthesis. Extensive experiments on DISFA and DISFA+ show that our approach outperforms the state of the art, both quantitatively and qualitatively, in the photo-realism and expressiveness of the synthesized expressions.
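To make the discriminator design concrete, here is a minimal PyTorch sketch of the idea the abstract describes: a patch-level attention branch that regresses AU intensities, combined with a global image encoder that produces the adversarial real/fake score. All layer sizes, the AU count, and module names are illustrative assumptions, not the authors' exact architecture.

```python
# Sketch of a patch-attentive discriminator with two heads:
# per-AU intensity regression (via patch attention) and a global
# real/fake score. Hypothetical layer sizes; 12 AUs as in DISFA.
import torch
import torch.nn as nn

class PatchAttentiveDiscriminator(nn.Module):
    def __init__(self, num_aus: int = 12):
        super().__init__()
        # Shared convolutional trunk producing a grid of patch features.
        self.trunk = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
        )
        # One attention map per AU over the patch grid.
        self.attention = nn.Conv2d(256, num_aus, 1)
        # Per-patch AU intensity predictions.
        self.au_head = nn.Conv2d(256, num_aus, 1)
        # Global encoder branch for the adversarial score.
        self.global_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(256, 1)
        )

    def forward(self, x):
        feats = self.trunk(x)                       # (B, 256, H', W')
        attn = torch.softmax(
            self.attention(feats).flatten(2), dim=-1
        )                                           # (B, num_aus, H'*W')
        per_patch = self.au_head(feats).flatten(2)  # (B, num_aus, H'*W')
        # Attention-weighted sum of patch predictions -> AU intensities.
        au_intensity = (attn * per_patch).sum(-1)   # (B, num_aus)
        realness = self.global_head(feats)          # (B, 1)
        return realness, au_intensity
```

The AU intensity head gives the generator a dense supervisory signal (how strongly each AU is expressed), while the global head enforces overall photo-realism, matching the two roles the abstract assigns to the discriminator.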
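The balanced sampling strategy can likewise be sketched. AU intensity annotations in DISFA (integer intensities 0 to 5) are heavily skewed toward zero, so a common remedy, shown here as an assumed stand-in for the paper's exact scheme, is to draw training images with inverse-frequency weights over intensity bins.

```python
# Illustrative balanced sampler: weight each image by the inverse
# frequency of its maximum AU intensity, so rare high-intensity
# expressions are sampled as often as the dominant neutral frames.
# The binning rule (max intensity per image) is an assumption.
import numpy as np
import torch
from torch.utils.data import WeightedRandomSampler

def make_balanced_sampler(au_labels: np.ndarray) -> WeightedRandomSampler:
    """au_labels: (N, num_aus) integer intensities in [0, 5]."""
    bins = au_labels.max(axis=1)                  # one bin per image
    counts = np.bincount(bins, minlength=6).astype(np.float64)
    counts[counts == 0] = 1.0                     # avoid division by zero
    weights = 1.0 / counts[bins]                  # inverse-frequency weights
    return WeightedRandomSampler(
        torch.as_tensor(weights, dtype=torch.double),
        num_samples=len(weights),
        replacement=True,
    )
```

The returned sampler plugs into a standard DataLoader via its sampler argument, rebalancing each epoch without duplicating or discarding data.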

Updated: 2021-03-17