Unpaired image to image transformation via informative coupled generative adversarial networks
Frontiers of Computer Science (IF 4.2), Pub Date: 2021-04-16, DOI: 10.1007/s11704-020-9002-7
Hongwei Ge , Yuxuan Han , Wenjing Kang , Liang Sun

We consider image transformation problems, where the objective is to translate images from a source domain to a target domain. The problem is challenging because it is difficult both to preserve the key properties of the source images and to keep the details of the target images as distinguishable as possible. To solve this problem, we propose an informative coupled generative adversarial network (ICoGAN). For each domain, an adversarial generator-and-discriminator network is constructed. At its core, we make an approximately shared latent-space assumption enforced by a mutual-information mechanism, which enables the algorithm to learn representations of both domains in an unsupervised setting and to transfer the key properties of images from source to target. Moreover, to further enhance performance, a weight-sharing constraint between the two subnetworks is combined with perceptual losses at different levels, extracted from the intermediate layers of the networks. With quantitative and visual results on edge-to-photo transformation, face attribute transfer, and image inpainting, we demonstrate ICoGAN's effectiveness compared with other state-of-the-art algorithms.
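The two architectural ingredients the abstract mentions, a weight-sharing constraint between the two domain subnetworks and a multi-level perceptual loss over intermediate features, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; the layer sizes, the single shared layer, and the loss weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Weight-sharing constraint: the early layer is one array used by BOTH
# domain encoders; only the later "head" layers are domain-specific.
# All shapes here are illustrative, not taken from the paper.
W_shared = rng.normal(size=(8, 16)) * 0.1   # shared across both subnetworks
W_src = rng.normal(size=(16, 4)) * 0.1      # source-domain head
W_tgt = rng.normal(size=(16, 4)) * 0.1      # target-domain head

def encode(x, W_head):
    """Return features from each layer; the first layer uses shared weights."""
    h1 = relu(x @ W_shared)   # intermediate-layer features (shared weights)
    h2 = relu(h1 @ W_head)    # domain-specific features
    return [h1, h2]

def perceptual_loss(feats_a, feats_b, weights=(0.5, 1.0)):
    """Multi-level perceptual loss: weighted mean-squared difference
    between features extracted at each intermediate layer."""
    return sum(w * np.mean((fa - fb) ** 2)
               for w, fa, fb in zip(weights, feats_a, feats_b))

# Compare a source batch and a target batch through their encoders.
x_src = rng.normal(size=(2, 8))
x_tgt = rng.normal(size=(2, 8))
loss = perceptual_loss(encode(x_src, W_src), encode(x_tgt, W_tgt))
```

Because the first layer is literally the same array, gradient updates to it from either domain affect both encoders, which is one simple way to realize an approximately shared latent space.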




Updated: 2021-04-16