SliderGAN: Synthesizing Expressive Face Images by Sliding 3D Blendshape Parameters
International Journal of Computer Vision (IF 19.5), Pub Date: 2020-06-11, DOI: 10.1007/s11263-020-01338-7
Evangelos Ververas , Stefanos Zafeiriou

Image-to-image (i2i) translation is the dense regression problem of learning how to transform an input image into an output using aligned image pairs. Remarkable progress has been made in i2i translation with the advent of deep convolutional neural networks, particularly using the learning paradigm of generative adversarial networks (GANs). In the absence of paired images, i2i translation is tackled with one or multiple domain transformations (e.g., CycleGAN, StarGAN). In this paper, we study the problem of image-to-image translation under a set of continuous parameters that correspond to a model describing a physical process. In particular, we propose SliderGAN, which transforms an input face image into a new one according to the continuous values of a statistical blendshape model of facial motion. We show that it is possible to edit a facial image according to expression and speech blendshapes, using sliders that control the continuous values of the blendshape model. Compared to models based on discrete expressions or action units, this provides much more flexibility in various tasks, including but not limited to face editing, expression transfer and face neutralisation.
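The "sliders" in SliderGAN correspond to the continuous coefficients of a linear statistical blendshape model. As an illustration only (not the authors' code, and with made-up dimensions and names), such a model deforms a mean 3D face shape by a weighted sum of blendshape basis vectors, s(p) = s_mean + B p:

```python
import numpy as np

# Hypothetical toy blendshape model: a real one (e.g. fitted to facial
# motion data) has thousands of vertices; 5 vertices here for illustration.
rng = np.random.default_rng(0)
n_vertices, n_blendshapes = 5, 3

s_mean = rng.standard_normal(3 * n_vertices)             # mean face, flattened (x, y, z)
B = rng.standard_normal((3 * n_vertices, n_blendshapes)) # blendshape basis matrix

def blend(params):
    """Deform the mean shape by continuous slider values `params`: s = s_mean + B @ p."""
    return s_mean + B @ np.asarray(params, dtype=float)

neutral = blend([0.0, 0.0, 0.0])  # all sliders at zero recover the mean face
edited = blend([0.8, 0.0, -0.3])  # continuous values, not discrete expression labels
```

Because the model is linear in the parameters, intermediate slider values interpolate smoothly between expressions, which is what lets a GAN conditioned on these coefficients edit a face continuously rather than switching between a fixed set of discrete labels.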

Last updated: 2020-06-11