Toward Fine-grained Facial Expression Manipulation
arXiv - CS - Computer Vision and Pattern Recognition Pub Date: 2020-04-07, DOI: arxiv-2004.03132
Jun Ling, Han Xue, Li Song, Shuhui Yang, Rong Xie, Xiao Gu

Facial expression manipulation, as an image-to-image translation problem, aims to edit facial expressions under a given condition. Previous methods edit an input image under the guidance of a discrete emotion label or an absolute condition (e.g., facial action units) to produce the desired expression. However, these methods either alter condition-irrelevant regions or fail to preserve image quality. In this study, we address both objectives and propose a novel conditional GAN model. First, we replace the continuous absolute condition with a relative condition, specifically relative action units. With relative action units, the generator learns to transform only the regions of interest specified by non-zero relative AUs, avoiding the need to estimate the current AUs of the input image. Second, our generator is built on a U-Net architecture and strengthened with a multi-scale feature fusion (MSF) mechanism for high-quality expression editing. Extensive quantitative and qualitative experiments demonstrate the improvements of our approach over state-of-the-art expression editing methods.
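To make the relative-condition idea concrete, here is a minimal Python sketch, not the authors' code: it assumes AU intensities are given as fixed-length vectors in [0, 1], and defines the relative condition as the signed difference between target and source AUs, so that zero entries mark AUs (and hence face regions) the generator should leave untouched.

```python
# Minimal sketch of relative-AU conditioning (illustrative only).
# Assumption: AU intensities are fixed-length vectors in [0, 1].
import numpy as np

def relative_aus(source_aus: np.ndarray, target_aus: np.ndarray) -> np.ndarray:
    """Relative condition: signed change per AU. Zero entries flag
    AUs, and therefore regions, that should stay unchanged."""
    return target_aus - source_aus

# Hypothetical 5-dim AU vectors (real systems use ~17 AUs).
source = np.array([0.0, 0.2, 0.0, 0.8, 0.1])  # current expression
target = np.array([0.0, 0.2, 0.6, 0.0, 0.1])  # desired expression

rel = relative_aus(source, target)
print(rel)                 # [ 0.   0.   0.6 -0.8  0. ]
print(np.nonzero(rel)[0])  # indices of AUs the generator must edit: [2 3]
```

Note that only the difference is fed to the generator, which is why no AU estimator is needed for the input image at editing time.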

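The abstract does not spell out the MSF design, so the following PyTorch sketch is an assumption of what multi-scale fusion over U-Net encoder features could look like; the MSFBlock name, channel widths, and fusion layout are hypothetical, not the paper's architecture.

```python
# Hedged sketch of a multi-scale feature fusion (MSF) block for a
# U-Net skip connection. Layout is assumed, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MSFBlock(nn.Module):
    """Projects encoder features from several scales to a common
    width, resizes them to the finest scale, and fuses them."""
    def __init__(self, channels_per_scale, out_channels):
        super().__init__()
        self.proj = nn.ModuleList(
            nn.Conv2d(c, out_channels, kernel_size=1)
            for c in channels_per_scale
        )
        self.fuse = nn.Conv2d(out_channels * len(channels_per_scale),
                              out_channels, kernel_size=3, padding=1)

    def forward(self, features):
        # Align every scale to the spatial size of the finest map,
        # then fuse the concatenated stack with a 3x3 convolution.
        h, w = features[0].shape[-2:]
        aligned = [
            F.interpolate(p(f), size=(h, w), mode="bilinear",
                          align_corners=False)
            for p, f in zip(self.proj, features)
        ]
        return self.fuse(torch.cat(aligned, dim=1))

# Toy usage: three encoder scales (64x64, 32x32, 16x16).
feats = [torch.randn(1, 64, 64, 64),
         torch.randn(1, 128, 32, 32),
         torch.randn(1, 256, 16, 16)]
msf = MSFBlock([64, 128, 256], out_channels=64)
print(msf(feats).shape)  # torch.Size([1, 64, 64, 64])
```

The design intent, fusing coarse and fine encoder features before decoding, matches the abstract's stated goal of preserving image quality during editing.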
Updated: 2020-04-08