A Simple Baseline for StyleGAN Inversion
arXiv - CS - Graphics Pub Date : 2021-04-15 , DOI: arxiv-2104.07661
Tianyi Wei, Dongdong Chen, Wenbo Zhou, Jing Liao, Weiming Zhang, Lu Yuan, Gang Hua, Nenghai Yu

This paper studies the problem of StyleGAN inversion, which plays an essential role in enabling a pretrained StyleGAN to be used for real facial image editing tasks. This problem places high demands on both quality and efficiency. Existing optimization-based methods can produce high-quality results, but the optimization often takes a long time. In contrast, forward-based methods are usually faster, but the quality of their results is inferior. In this paper, we present a new feed-forward network for StyleGAN inversion with significant improvements in both efficiency and quality. In our inversion network, we introduce: 1) a shallower backbone with multiple efficient heads across scales; 2) multi-layer identity loss and multi-layer face parsing loss in the loss function; and 3) multi-stage refinement. Combining these designs yields a simple and efficient baseline method that exploits the benefits of both optimization-based and forward-based methods. Quantitative and qualitative results show that our method performs better than existing forward-based methods and comparably to state-of-the-art optimization-based methods, while maintaining the high efficiency of forward-based methods. Moreover, a number of real image editing applications demonstrate the efficacy of our method. Our project page is https://wty-ustc.github.io/inversion.
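To illustrate the "multi-layer" loss design mentioned above (identity loss and face parsing loss computed at several feature layers rather than only at the output), here is a minimal NumPy sketch. It is not the authors' implementation: the feature extractors, layer choices, and per-layer weights are hypothetical stand-ins, and the real method would use deep features from a face recognition and a face parsing network.

```python
import numpy as np

def multi_layer_loss(feats_target, feats_inverted, layer_weights):
    """Weighted sum of per-layer mean-squared feature distances.

    feats_target / feats_inverted: lists of feature maps (one array
    per chosen layer) extracted from the target image and from the
    image reconstructed by the inversion network.
    layer_weights: hypothetical per-layer weights (assumption).
    """
    total = 0.0
    for f_t, f_i, w in zip(feats_target, feats_inverted, layer_weights):
        total += w * np.mean((f_t - f_i) ** 2)
    return total

# Toy example with two "layers" of 2x2 features (illustration only):
feats_target = [np.zeros((2, 2)), np.ones((2, 2))]
feats_inverted = [np.ones((2, 2)), np.ones((2, 2))]
loss = multi_layer_loss(feats_target, feats_inverted, [1.0, 0.5])
# First layer contributes 1.0 * 1.0, second contributes 0.5 * 0.0.
print(loss)
```

In the paper's setting, one such term would be computed over identity features and another over face parsing features, and the two would be summed into the final training objective.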

Updated: 2021-04-16