SLGAN: Style- and Latent-guided Generative Adversarial Network for Desirable Makeup Transfer and Removal
arXiv - CS - Multimedia. Pub Date: 2020-09-16, DOI: arxiv-2009.07557
Daichi Horita and Kiyoharu Aizawa

There are five features to consider when using generative adversarial networks (GANs) to apply makeup to photos of human faces: (1) facial components, (2) interactive color adjustments, (3) makeup variations, (4) robustness to poses and expressions, and (5) the use of multiple reference images. Several related works have been proposed, most of them based on GANs. Unfortunately, none of them addresses all five features simultaneously. This paper closes the gap with an innovative style- and latent-guided GAN (SLGAN). We provide a novel perceptual makeup loss and a style-invariant decoder that transfers makeup styles based on histogram matching, avoiding the identity-shift problem. In our experiments, we show that SLGAN is better than or comparable to state-of-the-art methods. Furthermore, we show that our proposal can interpolate facial makeup images to determine unique features, compare existing methods, and help users find desirable makeup configurations.
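The abstract does not include implementation details, but the decoder is described as transferring makeup styles via histogram matching. Below is a minimal sketch of the classic per-channel histogram-matching operation that such a loss is typically built on; the function name, the uint8 image assumption, and the masked-region usage note are illustrative assumptions, not the authors' code.

```python
# A hedged sketch of per-channel histogram matching, the basic operation a
# histogram-matching makeup loss is built around. In practice the inputs would
# be masked facial regions (lips, skin, eyes) from a face parser, which the
# paper does not specify; everything here is an illustrative assumption.
import numpy as np


def match_histograms(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Remap each channel of `source` so its histogram matches `reference`.

    Both arrays are H x W x C uint8 images.
    """
    matched = np.empty_like(source)
    for c in range(source.shape[-1]):
        src = source[..., c].ravel()
        ref = reference[..., c].ravel()
        # Empirical CDF of the source channel over its unique pixel values.
        src_values, src_counts = np.unique(src, return_counts=True)
        src_cdf = np.cumsum(src_counts) / src.size
        # Empirical CDF of the reference channel.
        ref_values, ref_counts = np.unique(ref, return_counts=True)
        ref_cdf = np.cumsum(ref_counts) / ref.size
        # Invert the reference CDF at the source quantiles, then look up the
        # mapped value for every source pixel.
        mapped = np.rint(np.interp(src_cdf, ref_cdf, ref_values))
        matched[..., c] = mapped[np.searchsorted(src_values, src)].reshape(
            source.shape[:-1]
        )
    return matched


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    src = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
    ref = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
    print(match_histograms(src, ref).shape)  # (64, 64, 3)
```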

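The interpolation result mentioned above is usually obtained by blending style or latent codes and decoding each blend. The sketch below shows the standard linear-interpolation recipe; the code dimensionality and the commented `generator` call are hypothetical placeholders, since the abstract does not describe SLGAN's interface.

```python
# A minimal sketch of latent/style-code interpolation between two makeup
# looks. The `generator` call in the usage note is a hypothetical interface,
# not SLGAN's actual API.
import numpy as np


def interpolate_style_codes(code_a: np.ndarray,
                            code_b: np.ndarray,
                            steps: int = 5) -> list[np.ndarray]:
    """Return style codes sweeping linearly from `code_a` to `code_b`."""
    return [(1.0 - t) * code_a + t * code_b
            for t in np.linspace(0.0, 1.0, steps)]


# Usage (hypothetical): decode each blended code with the same source face to
# render a smooth transition between two reference makeup styles.
# frames = [generator(source_face, code)
#           for code in interpolate_style_codes(code_a, code_b)]
```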
Updated: 2020-09-25