SDP-GAN: Saliency Detail Preservation Generative Adversarial Networks for High Perceptual Quality Style Transfer
IEEE Transactions on Image Processing ( IF 10.8 ) Pub Date : 2020-11-13 , DOI: 10.1109/tip.2020.3036754
Ru Li , Chi-Hao Wu , Shuaicheng Liu , Jue Wang , Guangfu Wang , Guanghui Liu , Bing Zeng

The paper proposes a solution for effectively handling salient regions in style transfer between unpaired datasets. Recently, Generative Adversarial Networks (GANs) have demonstrated their potential for translating images from a source domain X to a target domain Y in the absence of paired examples. However, such translation cannot guarantee results of high perceptual quality. Existing style transfer methods work well on relatively uniform content, but they often fail to capture the geometric or structural patterns that typically belong to salient regions. Detail loss in structured regions and undesired artifacts in smooth regions are unavoidable even when each individual region is correctly transferred into the target style. In this paper, we propose SDP-GAN, a GAN-based network that addresses these problems while generating pleasing style transfer results. We introduce a saliency network that is trained simultaneously with the generator. The saliency network serves two functions: (1) constraining the content loss to increase the penalty on salient regions, and (2) supplying saliency features to the generator to produce coherent results. Moreover, two novel losses are proposed to optimize the generator and saliency networks. The proposed method preserves the details of important salient regions and improves overall image perceptual quality. Qualitative and quantitative comparisons against several leading prior methods demonstrate the superiority of our approach.
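The first role of the saliency network — constraining the content loss so salient regions are penalized more heavily — can be illustrated with a minimal sketch. The function name, the additive weighting scheme, and the `alpha` hyperparameter below are assumptions for illustration; they are not the paper's actual loss formulation.

```python
import numpy as np

def saliency_weighted_content_loss(content_feat, stylized_feat, saliency, alpha=2.0):
    """Sketch of a saliency-weighted content loss (illustrative, not the paper's exact loss).

    Positions with high saliency receive a larger penalty, so the network is
    pushed to preserve detail in salient regions. `alpha` (assumed) controls
    how much extra weight salient positions get relative to the baseline.
    """
    # Per-position squared error between content and stylized features.
    err = (content_feat - stylized_feat) ** 2
    # Weight map: baseline weight 1 everywhere, raised in proportion to saliency.
    weights = 1.0 + alpha * saliency
    return float(np.mean(weights * err))

# Toy usage: a 4x4 "image" with a salient 2x2 center patch.
content = np.ones((4, 4))
stylized = np.zeros((4, 4))
saliency = np.pad(np.ones((2, 2)), 1)  # 1.0 in the center, 0.0 elsewhere
loss = saliency_weighted_content_loss(content, stylized, saliency)
```

With a uniform error, the loss grows with the fraction of salient pixels, which is the intended behavior: mistakes inside salient regions cost more than the same mistakes in smooth background.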

Updated: 2020-11-13