Facial attribute-controlled sketch-to-image translation with generative adversarial networks
EURASIP Journal on Image and Video Processing (IF 2.0) Pub Date: 2020-01-13, DOI: 10.1186/s13640-020-0489-5
Mingming Hu , Jingtao Guo

Due to the rapid development of generative adversarial networks (GANs) and convolutional neural networks (CNNs), increasing attention is being paid to face synthesis. In this paper, we address the new and challenging task of facial sketch-to-image synthesis with multiple controllable attributes. To achieve this goal, first, we propose a new attribute classification loss to ensure that the synthesized face image carries the facial attributes the user desires. Second, we employ a reconstruction loss to synthesize facial texture and structure information. Third, an adversarial loss is used to encourage visual authenticity. By incorporating the above losses into a unified framework, our proposed method not only achieves high-quality sketch-to-image translation but also allows users to control the facial attributes of the synthesized image. Extensive experiments show that user-provided facial attribute information effectively controls the process of facial sketch-to-image translation.
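The abstract describes a generator objective combining three terms: an attribute classification loss, a reconstruction loss, and an adversarial loss. The sketch below illustrates how such a composite objective can be assembled; the function names, loss formulations (binary cross-entropy for attributes, L1 for reconstruction, non-saturating GAN loss), and weighting coefficients are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

def attribute_classification_loss(pred_logits, target_attrs):
    # Binary cross-entropy over per-attribute predictions (assumed
    # formulation; the paper's exact loss may differ).
    p = 1.0 / (1.0 + np.exp(-pred_logits))
    eps = 1e-12
    return -np.mean(target_attrs * np.log(p + eps)
                    + (1.0 - target_attrs) * np.log(1.0 - p + eps))

def reconstruction_loss(fake_img, real_img):
    # L1 pixel loss, encouraging faithful texture and structure.
    return np.mean(np.abs(fake_img - real_img))

def adversarial_loss(disc_fake_logits):
    # Non-saturating generator loss: push D(fake) toward "real".
    p = 1.0 / (1.0 + np.exp(-disc_fake_logits))
    return -np.mean(np.log(p + 1e-12))

def generator_objective(fake_img, real_img, pred_logits, target_attrs,
                        disc_fake_logits, lambda_cls=1.0, lambda_rec=10.0):
    # Unified objective: adversarial + weighted attribute + weighted
    # reconstruction terms (weights are hypothetical defaults).
    return (adversarial_loss(disc_fake_logits)
            + lambda_cls * attribute_classification_loss(pred_logits, target_attrs)
            + lambda_rec * reconstruction_loss(fake_img, real_img))
```

In practice, the attribute vector `target_attrs` is the user-supplied conditioning signal; raising `lambda_cls` trades off visual realism for stricter attribute compliance.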
