Global-Feature Encoding U-Net (GEU-Net) for Multi-Focus Image Fusion
IEEE Transactions on Image Processing (IF 10.6), Pub Date: 2020-10-28, DOI: 10.1109/tip.2020.3033158
Bin Xiao , Bocheng Xu , Xiuli Bi , Weisheng Li

Convolutional neural network (CNN)-based multi-focus image fusion methods, which learn a focus map from the source images, have greatly improved fusion performance over traditional methods. However, they have not yet achieved satisfactory fusion results, since the convolution operation attends too heavily to local regions and treats focus-map generation as a local classification problem (classifying each pixel as focused or defocused). In this article, a global-feature encoding U-Net (GEU-Net) is proposed for multi-focus image fusion. In the proposed GEU-Net, the U-Net network treats focus-map generation as a global two-class segmentation task, segmenting the focused and defocused regions from a global view. To improve the global feature encoding capability of U-Net, a global feature pyramid extraction module (GFPE) and a global attention connection upsample module (GACU) are introduced to effectively extract and exploit global semantic and edge information. A perceptual loss is added to the loss function, and a large-scale dataset is constructed to boost the performance of GEU-Net. Experimental results show that the proposed GEU-Net achieves superior fusion performance over several state-of-the-art methods in terms of human visual quality, objective assessment, and network complexity.
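Once a focus map has been predicted, the final fusion step shared by focus-map-based methods is a per-pixel selection between the two source images. The sketch below illustrates that step with a hypothetical hard 0/1 mask in NumPy; in GEU-Net the map itself is produced by the segmentation network, which is not reproduced here.

```python
import numpy as np

def fuse_with_focus_map(src_a, src_b, focus_map):
    """Fuse two source images with a binary focus map.

    focus_map[i, j] == 1 means pixel (i, j) is taken from src_a
    (in focus there); 0 means it is taken from src_b. GEU-Net
    predicts this map as a global two-class segmentation; here
    it is simply given as an illustrative hard mask.
    """
    focus_map = focus_map.astype(src_a.dtype)
    return focus_map * src_a + (1.0 - focus_map) * src_b

# Toy example: left half in focus in image A, right half in image B.
a = np.full((4, 4), 10.0)   # stands in for source image A
b = np.full((4, 4), 20.0)   # stands in for source image B
m = np.zeros((4, 4))
m[:, :2] = 1.0              # left two columns marked as focused in A
fused = fuse_with_focus_map(a, b, m)
```

In practice the predicted map is often a soft probability in [0, 1], in which case the same weighted sum blends the sources smoothly near focus boundaries.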

Updated: 2020-11-21