Self-Supervised Sketch-to-Image Synthesis
arXiv - CS - Multimedia. Pub Date: 2020-12-16, DOI: arXiv-2012.09290
Bingchen Liu, Yizhe Zhu, Kunpeng Song, Ahmed Elgammal

Imagining a colored, realistic image from an arbitrarily drawn sketch is one of the human capabilities that we are eager for machines to mimic. Unlike previous methods, which either require paired sketch-image data or use low-quality detected edges as sketches, we study the exemplar-based sketch-to-image (s2i) synthesis task in a self-supervised manner, eliminating the need for paired sketch data. To this end, we first propose an unsupervised method to efficiently synthesize line sketches for general RGB-only datasets. With the synthetic paired data, we then present a self-supervised Auto-Encoder (AE) that decouples content/style features from sketches and RGB images, and synthesizes images that are both content-faithful to the sketches and style-consistent with the RGB images. While prior works employ either a cycle-consistency loss or dedicated attentional modules to enforce content/style fidelity, we show the AE's superior performance with pure self-supervision. To further improve synthesis quality at high resolution, we also leverage an adversarial network to refine the details of the synthesized images. Extensive experiments at 1024*1024 resolution demonstrate new state-of-the-art performance of the proposed model on the CelebA-HQ and Wiki-Art datasets. Moreover, with the proposed sketch generator, the model shows promising performance on style mixing and style transfer, which require synthesized images to be both style-consistent and semantically meaningful. Our code is available at https://github.com/odegeasslbc/Self-Supervised-Sketch-to-Image-Synthesis-PyTorch, and please visit https://create.playform.io/my-projects?mode=sketch for an online demo of our model.
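To make the exemplar-based setup concrete, the following is a minimal, illustrative PyTorch sketch of the content/style decoupling described in the abstract: a content encoder reads the (synthetic) line sketch, a style encoder pools an exemplar RGB image into a global style vector, and a decoder fuses the two into an output image, trained with a simple self-supervised reconstruction loss. All module names, layer sizes, and the additive style-injection scheme here are hypothetical assumptions for illustration, not the authors' actual architecture.

```python
# Illustrative exemplar-based s2i auto-encoder (hypothetical layer sizes).
import torch
import torch.nn as nn

class ContentEncoder(nn.Module):
    """Encodes a 1-channel sketch into a spatial content feature map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
        )

    def forward(self, sketch):
        return self.net(sketch)  # (B, 64, H/4, W/4)

class StyleEncoder(nn.Module):
    """Pools an RGB exemplar into a global style vector (layout discarded)."""
    def __init__(self, style_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # average away spatial structure
        )
        self.fc = nn.Linear(64, style_dim)

    def forward(self, rgb):
        return self.fc(self.net(rgb).flatten(1))  # (B, style_dim)

class Decoder(nn.Module):
    """Fuses content map and style vector, upsamples back to an RGB image."""
    def __init__(self, style_dim=64):
        super().__init__()
        self.style_proj = nn.Linear(style_dim, 64)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Tanh(),
        )

    def forward(self, content, style):
        # Inject style by adding a projected style vector channel-wise.
        s = self.style_proj(style)[:, :, None, None]
        return self.net(content + s)

# Self-supervised training step: given a synthetic (sketch, image) pair,
# reconstruct the image from its own sketch plus its own style.
ce, se, de = ContentEncoder(), StyleEncoder(), Decoder()
sketch = torch.randn(2, 1, 64, 64)  # stand-in synthetic line sketches
image = torch.randn(2, 3, 64, 64)   # stand-in RGB exemplars
recon = de(ce(sketch), se(image))
loss = nn.functional.l1_loss(recon, image)  # reconstruction objective
```

At inference, pairing one image's sketch with a *different* image's style vector yields the exemplar-based behavior the abstract describes; the adversarial refinement stage mentioned for 1024*1024 outputs is omitted here.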

Updated: 2020-12-18