Multiscale channel attention network for infrared and visible image fusion
Concurrency and Computation: Practice and Experience ( IF 1.5 ) Pub Date : 2020-12-23 , DOI: 10.1002/cpe.6155
Jiahui Zhu 1 , Qingyu Dou 2 , Lihua Jian 3 , Kai Liu 4 , Farhan Hussain 5 , Xiaomin Yang 1
Imaging systems with different imaging sensors are widely applied in the surveillance, military, and medical fields. In particular, infrared imaging sensors can acquire the thermal radiation emitted by objects but lack textural detail, while visible imaging sensors can capture abundant textural information but lose scene information under poor weather conditions. Fusing infrared and visible images synthesizes a new image that combines the complementary information of the source images. In this paper, we present a deep learning method with an encoder–decoder architecture for infrared and visible image fusion. First, multiscale channel attention blocks are introduced to extract features at different scales, which preserves more meaningful information and enhances important features. Second, we utilize an improved fusion strategy based on visual saliency to fuse the feature maps. Finally, the fused image is restored via a reconstruction network. In comparison with other state-of-the-art approaches, our method achieves appealing performance in both visual quality and objective assessments.
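The following is a minimal sketch, not the authors' implementation, of how a multiscale channel attention block of the kind described in the abstract might be written in PyTorch. The framework choice, the kernel sizes (3/5/7), and the SE-style channel attention with a reduction ratio of 8 are all assumptions made for illustration.

```python
# Hypothetical sketch of a multiscale channel attention block (not the paper's code).
# Assumptions: PyTorch, parallel 3x3/5x5/7x7 branches, SE-style channel attention.
import torch
import torch.nn as nn

class MultiscaleChannelAttentionBlock(nn.Module):
    def __init__(self, in_channels, out_channels, reduction=8):
        super().__init__()
        # Parallel convolution branches with different receptive fields (multiscale).
        self.branches = nn.ModuleList([
            nn.Conv2d(in_channels, out_channels, k, padding=k // 2)
            for k in (3, 5, 7)
        ])
        fused = out_channels * 3
        # Squeeze-and-excitation style channel attention over the concatenated branches.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(fused, fused // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(fused // reduction, fused, 1),
            nn.Sigmoid(),
        )
        # 1x1 projection back to the desired number of output channels.
        self.project = nn.Conv2d(fused, out_channels, 1)

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)  # multiscale features
        feats = feats * self.attention(feats)                    # reweight channels
        return self.project(feats)

# Example: encode a single-channel (infrared or grayscale visible) 128x128 patch.
block = MultiscaleChannelAttentionBlock(1, 16)
print(block(torch.randn(1, 1, 128, 128)).shape)  # torch.Size([1, 16, 128, 128])
```

In such a design, the encoder applies these blocks to each source image, the resulting feature maps are fused (here, by a visual-saliency-based weighting as the paper states), and a decoder reconstructs the fused image.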

Updated: 2020-12-23