Unsupervised multi-domain image translation with domain representation learning
Signal Processing: Image Communication (IF 3.5) Pub Date: 2021-08-30, DOI: 10.1016/j.image.2021.116452
Huajun Liu 1 , Lei Chen 1 , Haigang Sui 2 , Qing Zhu 3 , Dian Lei 1 , Shubo Liu 1
Recent years have witnessed tremendous improvements in multi-domain image-to-image translation. However, previous methods require either multiple generator models or a single-generator model trained on labeled datasets, which increases training cost and limits further application. In this paper, we propose an unsupervised network consisting of a single generator–discriminator pair together with n encoders to achieve multi-domain image translation. Our work aims to learn the mappings among n domains simultaneously and automatically, using images without any attribute labels. In addition, we propose a representation loss to extract a suitable representation vector for each domain, which improves the performance of multi-domain image translation. Experiments show that the proposed method outperforms state-of-the-art multi-domain methods.
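The abstract describes one shared generator–discriminator pair plus n per-domain encoders, each producing a domain representation vector shaped by a representation loss. The sketch below is a minimal, hypothetical illustration of that idea: the toy linear encoders, the norm regularizer, and the contrastive-style separation hinge are all assumptions for illustration, not the paper's actual architecture or loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(images, weights):
    """Toy per-domain 'encoder': mean-pool a batch, then project linearly
    to a domain representation vector. (Illustrative stand-in only.)"""
    return images.mean(axis=0) @ weights  # (feat_dim,) @ (feat_dim, rep_dim)

def representation_loss(reps, margin=1.0):
    """Hypothetical representation loss: keep each domain vector near unit
    norm, and push vectors of distinct domains at least `margin` apart
    (a contrastive-style assumption, not the paper's formulation)."""
    loss = 0.0
    n = len(reps)
    for i in range(n):
        loss += (np.linalg.norm(reps[i]) - 1.0) ** 2      # norm regularizer
        for j in range(i + 1, n):
            dist = np.linalg.norm(reps[i] - reps[j])
            loss += max(0.0, margin - dist) ** 2          # separation hinge
    return loss

# One encoder per domain; a single generator/discriminator would be shared.
n_domains, feat_dim, rep_dim = 3, 8, 4
encoders = [rng.normal(size=(feat_dim, rep_dim)) for _ in range(n_domains)]
batches = [rng.normal(size=(16, feat_dim)) for _ in range(n_domains)]
reps = [encode(x, w) for x, w in zip(batches, encoders)]
print(representation_loss(reps))
```

In a real model the encoders would be deep networks and this loss would be minimized jointly with the adversarial and reconstruction objectives, so that each encoder settles on a distinct, well-separated domain code.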

Updated: 2021-09-06