GSA-GAN: Global Spatial Attention Generative Adversarial Networks
Neurocomputing (IF 5.5), Pub Date: 2021-01-18, DOI: 10.1016/j.neucom.2021.01.047
Lei An , Jiajia Zhao , Bo Ma

This paper proposes a solution for translating visible images into infrared images, a challenging task in computer vision. Our solution belongs to unsupervised learning, which has recently become popular in image-to-image translation. However, existing methods do not produce satisfactory results because (1) most of them target entertainment scenarios with single scenes and low complexity, whereas the problem addressed in this paper is more diverse and more complicated, and (2) the infrared response of an object depends not only on the object itself but also on its surrounding environment, and existing methods cannot correlate objects with long-range dependencies. In this paper, we propose Global Spatial Attention (GSA), which strengthens the dependence between long-range objects and improves the quality of the synthesized images. Compared with other methods, GSA requires less memory and computation time. Moreover, we introduce the idea of subspace learning into the neural network to make training more stable. Our method is trained on unpaired visible and infrared images, which are easy to collect. Experimental results show that our method can generate high-quality infrared images from visible images and outperforms state-of-the-art methods.
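The abstract does not detail how the GSA module is implemented. As a rough illustration of an attention block that relates every spatial position of a feature map to every other position (the long-range dependence discussed above), the following PyTorch sketch follows the familiar non-local/self-attention pattern; the class name GlobalSpatialAttention, the channel-reduction factor, and the learnable residual weight gamma are illustrative assumptions rather than the authors' definition.

import torch
import torch.nn as nn

class GlobalSpatialAttention(nn.Module):
    # A minimal sketch of a global spatial attention block in the
    # non-local / self-attention style. This is NOT the paper's exact
    # GSA module; layer names and the reduction factor are assumptions.
    def __init__(self, in_channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(in_channels, in_channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(in_channels, in_channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(in_channels, in_channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        n = h * w
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)   # (b, n, c')
        k = self.key(x).view(b, -1, n)                       # (b, c', n)
        attn = torch.softmax(torch.bmm(q, k), dim=-1)        # (b, n, n) pairwise affinities
        v = self.value(x).view(b, c, n)                      # (b, c, n)
        out = torch.bmm(v, attn.permute(0, 2, 1)).view(b, c, h, w)
        return self.gamma * out + x                          # residual connection

Such a block would typically be inserted into the generator of an unpaired translation framework (for example, a CycleGAN-style setup) so that the infrared response predicted for one object can be conditioned on features elsewhere in the scene.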




Updated: 2021-01-19