MU-GAN: Facial Attribute Editing based on Multi-attention Mechanism
arXiv - CS - Graphics. Pub Date: 2020-09-09, DOI: arxiv-2009.04177
Ke Zhang, Yukun Su, Xiwang Guo, Liang Qi, and Zhenbing Zhao

Facial attribute editing has two main objectives: 1) translating an image from a source domain to a target one, and 2) changing only the facial regions related to a target attribute while preserving attribute-excluding details. In this work, we propose a Multi-attention U-Net-based Generative Adversarial Network (MU-GAN). First, we replace the classic convolutional encoder-decoder in the generator with a symmetric U-Net-like structure, and then apply an additive attention mechanism to build attention-based U-Net connections that adaptively transfer encoder representations, complementing the decoder with attribute-excluding detail and enhancing attribute-editing ability. Second, a self-attention mechanism is incorporated into the convolutional layers to model long-range, multi-level dependencies across image regions. Experimental results indicate that our method balances attribute-editing ability against detail preservation and can decouple the correlations among attributes. It outperforms state-of-the-art methods in attribute manipulation accuracy and image quality.
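The two attention mechanisms named above can be sketched in simplified form. The following NumPy code is a minimal illustration, not the paper's implementation: the additive attention gate follows the common Attention U-Net formulation (encoder feature gated by a sigmoid mask computed jointly from encoder and decoder signals), and the self-attention function follows the common SAGAN-style residual formulation. All weight names, shapes, and the `gamma` parameter here are illustrative assumptions; features are flattened to `(locations, channels)` so 1x1 convolutions become matrix multiplies.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))  # stable softmax
    return e / e.sum(axis=axis, keepdims=True)

def attention_gate(x, g, Wx, Wg, Wpsi):
    """Additive attention on a U-Net skip connection (sketch).

    x: encoder feature, shape (N, C); g: decoder gating signal, shape (N, C).
    Wx, Wg: (C, F) projections; Wpsi: (F, 1) scoring vector.
    Returns the encoder feature scaled by a per-location mask in (0, 1),
    so only attribute-relevant regions pass through the skip connection.
    """
    a = relu(x @ Wx + g @ Wg)    # joint feature, shape (N, F)
    alpha = sigmoid(a @ Wpsi)    # attention mask, shape (N, 1)
    return x * alpha             # gated encoder representation

def self_attention(x, Wq, Wk, Wv, gamma=0.1):
    """Self-attention over spatial locations with a residual connection (sketch).

    Each location attends to every other, modeling long-range dependencies
    across image regions; gamma scales the attention branch.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]), axis=-1)  # (N, N)
    return x + gamma * (attn @ v)

# Toy usage: 16 spatial locations, 8 channels, 4 hidden attention features.
rng = np.random.default_rng(0)
N, C, F = 16, 8, 4
x = rng.standard_normal((N, C))
g = rng.standard_normal((N, C))
gated = attention_gate(x, g,
                       rng.standard_normal((C, F)),
                       rng.standard_normal((C, F)),
                       rng.standard_normal((F, 1)))
out = self_attention(x, *(rng.standard_normal((C, C)) for _ in range(3)))
```

Because the gate's mask lies in (0, 1), the gated feature is always a damped copy of the encoder feature, which is what lets the network suppress skip-connection detail in regions it intends to edit.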

Updated: 2020-09-10