UFA-FUSE: A Novel Deep Supervised and Hybrid Model for Multifocus Image Fusion
IEEE Transactions on Instrumentation and Measurement (IF 5.6). Pub Date: 2021-04-09. DOI: 10.1109/tim.2021.3072124
Yongsheng Zang, Dongming Zhou, Changcheng Wang, Rencan Nie, Yanbu Guo
Traditional and deep learning-based fusion methods generate an intermediate decision map and then obtain the fused image through a series of postprocessing procedures. However, the fusion results produced by these methods tend to lose some source-image details or introduce artifacts. Inspired by deep learning-based image reconstruction techniques, we propose a multifocus image fusion network framework that requires no postprocessing, solving these problems in an end-to-end, supervised manner. To train the fusion model sufficiently, we generated a large-scale multifocus image data set with ground-truth fusion images. Moreover, to obtain a more informative fused image, we designed a novel fusion strategy based on unity fusion attention, which is composed of a channel attention module and a spatial attention module. Specifically, the proposed fusion approach comprises three key components: feature extraction, feature fusion, and image reconstruction. We first use seven convolutional blocks to extract image features from the source images. The extracted convolutional features are then fused by the proposed fusion strategy in the feature fusion layer. Finally, the fused image features are reconstructed by four convolutional blocks. Experimental results demonstrate that the proposed approach achieves remarkable fusion performance and superior time efficiency compared to 19 state-of-the-art fusion methods.
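The extract–fuse–reconstruct pipeline with channel and spatial attention can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the exact design of the unity fusion attention module (e.g., its learned weights, pooling choices, and how the two attended branches are merged) is specified only in the paper, so the sigmoid-gated pooling and element-wise addition below are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat):
    # feat: (C, H, W). Global average pooling gives one scalar per
    # channel; a sigmoid turns it into a per-channel gate (assumed form).
    gate = sigmoid(feat.mean(axis=(1, 2)))          # (C,)
    return feat * gate[:, None, None]

def spatial_attention(feat):
    # Average over channels gives one scalar per pixel; a sigmoid
    # turns it into a per-pixel gate (assumed form).
    gate = sigmoid(feat.mean(axis=0, keepdims=True))  # (1, H, W)
    return feat * gate

def unity_fusion(feat_a, feat_b):
    # Apply channel then spatial attention to each branch's features,
    # then merge. Element-wise addition is an assumed merge rule; the
    # paper's actual fusion layer may weight the branches differently.
    att_a = spatial_attention(channel_attention(feat_a))
    att_b = spatial_attention(channel_attention(feat_b))
    return att_a + att_b

# Stand-ins for the feature maps the seven extraction blocks would
# produce from two source images (random data, 16 channels, 32x32).
rng = np.random.default_rng(0)
fused = unity_fusion(rng.normal(size=(16, 32, 32)),
                     rng.normal(size=(16, 32, 32)))
print(fused.shape)  # the reconstruction blocks would decode this map
```

The fused feature map keeps the spatial resolution of the inputs, so the four reconstruction blocks can decode it back to an image without any decision-map postprocessing, which is the point of the end-to-end design.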
