RFN-Nest: An end-to-end residual fusion network for infrared and visible images
Information Fusion (IF 18.6). Pub Date: 2021-03-01. DOI: 10.1016/j.inffus.2021.02.023
Hui Li, Xiao-Jun Wu, Josef Kittler

In the image fusion field, the design of deep learning-based fusion methods is far from routine: it is invariably fusion-task specific and requires careful consideration. The most difficult part of the design is choosing an appropriate strategy to generate the fused image for the specific task at hand. Devising a learnable fusion strategy is therefore a very challenging problem in the image fusion community. To address this problem, a novel end-to-end fusion network architecture (RFN-Nest) is developed for infrared and visible image fusion. We propose a residual fusion network (RFN), based on a residual architecture, to replace the traditional fusion approach. A novel detail-preserving loss function and a feature-enhancing loss function are proposed to train the RFN. The fusion model is learned with a novel two-stage training strategy: in the first stage, we train an auto-encoder based on an innovative nest connection (Nest) concept; in the second stage, the RFN is trained using the proposed loss functions. Experimental results on public domain data sets show that our end-to-end fusion network delivers better performance than the state-of-the-art methods in both subjective and objective evaluation. The code of our fusion method is available at https://github.com/hli1221/imagefusion-rfn-nest.
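To make the abstract's components more concrete, the sketch below illustrates, in PyTorch, what a residual fusion block and the two loss terms could look like. This is a minimal sketch only: the class names, layer sizes, loss formulations and weights are assumptions made for illustration and are not taken from the paper; the authors' reference implementation is available at https://github.com/hli1221/imagefusion-rfn-nest.

# Minimal, illustrative sketch of a residual fusion block (RFN-style) and
# the two loss terms mentioned in the abstract. All names and choices here
# are assumptions; see the official repository for the actual method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualFusionBlock(nn.Module):
    """Learnable fusion of infrared and visible feature maps with a residual shortcut."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 shortcut that maps the concatenated features back to `channels`
        self.shortcut = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.body = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, feat_ir: torch.Tensor, feat_vis: torch.Tensor) -> torch.Tensor:
        x = torch.cat([feat_ir, feat_vis], dim=1)
        return self.shortcut(x) + self.body(x)  # residual fusion of the two sources


def detail_preserving_loss(fused: torch.Tensor, visible: torch.Tensor) -> torch.Tensor:
    """Encourage the fused image to keep the detail of the visible image.
    A structural term such as SSIM is a common choice; plain L1 is used here for brevity."""
    return F.l1_loss(fused, visible)


def feature_enhancing_loss(fused_feats, ir_feats, vis_feats, weights) -> torch.Tensor:
    """Pull multi-scale fused features towards salient source features.
    The element-wise max target below is purely illustrative."""
    loss = fused_feats[0].new_zeros(())
    for w, f_fused, f_ir, f_vis in zip(weights, fused_feats, ir_feats, vis_feats):
        target = torch.max(f_ir, f_vis)
        loss = loss + w * F.mse_loss(f_fused, target)
    return loss

In the two-stage scheme the abstract describes, the nest-connection auto-encoder would first be trained for reconstruction; in the second stage one would typically keep the encoder and decoder fixed and optimize only the fusion blocks with a weighted sum of the two losses above.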




Updated: 2021-03-15