Joint image-to-image translation with denoising using enhanced generative adversarial networks
Signal Processing: Image Communication (IF 3.5), Pub Date: 2020-11-16, DOI: 10.1016/j.image.2020.116072
Lan Yan , Wenbo Zheng , Fei-Yue Wang , Chao Gou

Impressive progress has been made recently in image-to-image translation using generative adversarial networks (GANs). However, existing methods often fail to translate noisy source images to the target domain. To address this problem, we jointly tackle image-to-image translation and image denoising, and propose an enhanced generative adversarial network (EGAN). In particular, building upon pix2pix, we introduce residual blocks into the generator network to capture deeper multi-level information between the source and target image distributions. Moreover, a perceptual loss is proposed to enhance the performance of image-to-image translation. As demonstrated through extensive experiments, our proposed EGAN alleviates the effects of noise in source images and significantly outperforms other state-of-the-art methods. Furthermore, we show experimentally that the proposed EGAN is also effective when applied to image denoising.
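The abstract names two technical ingredients: residual blocks inside a pix2pix-style generator and a perceptual loss. Below is a minimal, hypothetical PyTorch sketch of what such components typically look like; the layer sizes, the chosen VGG16 feature layer, and the use of `weights="DEFAULT"` (recent torchvision) are illustrative assumptions, not the authors' published configuration.

```python
# Hypothetical sketch of a residual block and a VGG-based perceptual loss,
# in the spirit of the EGAN description; not the authors' exact implementation.
import torch
import torch.nn as nn
import torchvision.models as models


class ResidualBlock(nn.Module):
    """Conv-InstanceNorm-ReLU block with an identity skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
        )

    def forward(self, x):
        # Identity skip: output = input + transformed input
        return x + self.body(x)


class PerceptualLoss(nn.Module):
    """L1 distance between VGG16 feature maps of generated and target images."""
    def __init__(self, feature_layer: int = 16):  # layer index is an assumption
        super().__init__()
        vgg = models.vgg16(weights="DEFAULT").features[:feature_layer]
        for p in vgg.parameters():
            p.requires_grad = False  # frozen feature extractor
        self.vgg = vgg.eval()
        self.criterion = nn.L1Loss()

    def forward(self, generated, target):
        return self.criterion(self.vgg(generated), self.vgg(target))
```

In a pix2pix-style setup, a stack of such residual blocks would typically sit between the encoder and decoder of the generator, and the perceptual term would be added to the adversarial and reconstruction losses with a weighting coefficient; the abstract does not specify the exact depth or weights.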



Updated: 2020-11-19