EdgeGAN: One-way mapping generative adversarial network based on the edge information for unpaired training set
Journal of Visual Communication and Image Representation (IF 2.6), Pub Date: 2021-06-17, DOI: 10.1016/j.jvcir.2021.103187
Yijie Li, Qiaokang Liang, Zhengwei Li, Youcheng Lei, Wei Sun, Yaonan Wang, Dan Zhang

Image-to-image translation has attracted mounting attention due to its practical applications. This paper proposes a lightweight network structure, based on the generative adversarial network (GAN) and a fixed-parameter edge detection convolution kernel, that performs one-way image mapping from an unpaired training set. Compared with the cycle-consistent adversarial network (CycleGAN), the proposed network has a simpler structure, fewer parameters (only 37.48% of CycleGAN's parameters), and a lower training cost (only 35.47% of CycleGAN's GPU memory usage and 17.67% of its single-iteration time). Notably, cycle consistency is no longer required to keep image content consistent before and after the mapping. The network achieves strong results on several image translation tasks, and its effectiveness is demonstrated through representative experiments. In a quantitative classification evaluation based on VGG-16, the proposed algorithm achieves superior performance.
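The abstract does not specify which fixed-parameter edge detection kernel the authors use; as a minimal sketch, the snippet below (assuming standard Sobel kernels, which are one common non-learnable choice) shows what such a fixed-weight edge-extraction convolution looks like, in contrast to a trained convolution layer whose weights would be updated by backpropagation.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Fixed-parameter Sobel kernels: the weights are constants and are
# never updated during GAN training (an assumption; the paper's exact
# kernel is not given in the abstract).
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def edge_map(image):
    """Gradient-magnitude edge map from the two fixed Sobel responses."""
    gx = conv2d(image, SOBEL_X)
    gy = conv2d(image, SOBEL_Y)
    return np.hypot(gx, gy)
```

Because the kernel is fixed, the edge map of the input and the edge map of the generated output can be compared directly as a content-preservation signal, which is the role cycle consistency plays in CycleGAN.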



Updated: 2021-06-21