A novel multi-focus image fusion by combining simplified very deep convolutional networks and patch-based sequential reconstruction strategy
Applied Soft Computing (IF 8.7), Pub Date: 2020-03-20, DOI: 10.1016/j.asoc.2020.106253
Chang Wang, Zongya Zhao, Qiongqiong Ren, Yongtao Xu, Yi Yu

Multi-focus image fusion is an important approach to obtaining a composite image with all objects in focus, and it can be treated as an image segmentation problem that is solved by convolutional neural networks (CNNs). For CNN-based multi-focus image fusion methods, no public training dataset exists, and the network model determines the recognition accuracy for focused and defocused pixels. Considering these problems, we propose a novel CNN-based multi-focus image fusion method that combines simplified very deep convolutional networks with a patch-based sequential reconstruction strategy. First, defocused images at five blur levels were simulated with a Gaussian filter, and a novel training dataset was constructed for multi-focus image fusion. Second, the very deep convolutional networks model was simplified to design a Siamese CNN, which was used to recognize focused and defocused pixels. Third, focused and defocused regions were detected by the patch-based sequential reconstruction strategy, and the final decision map was refined with a morphological operator. Finally, multi-focus image fusion was performed. The Lytro dataset, a public multi-focus image dataset, was used to validate the proposed method. Information entropy, mutual information, the universal image quality index, visual information fidelity, and edge retention were adopted as evaluation metrics, and the proposed method was compared with state-of-the-art methods. Experimental results demonstrate that the proposed method achieves state-of-the-art fusion results in terms of both visual quality and objective assessment.
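Two of the steps summarized above — simulating defocused training images at several Gaussian blur levels, and fusing two source images via a morphologically refined decision map — can be sketched as follows. This is a minimal sketch assuming grayscale images as NumPy arrays, with SciPy filters standing in for the paper's pipeline; the sigma values and structuring-element size are illustrative assumptions, not the authors' settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, binary_opening, binary_closing

def simulate_defocus_levels(img, sigmas=(1, 2, 3, 4, 5)):
    """Simulate defocused versions of a focused image at five blur
    levels using a Gaussian filter (sigma values are illustrative)."""
    return [gaussian_filter(img, sigma=s) for s in sigmas]

def fuse_with_decision_map(img_a, img_b, decision_map, struct_size=5):
    """Fuse two source images with a binary decision map (True/1 means
    take the pixel from img_a), refined by morphological opening and
    closing to suppress small misclassified regions."""
    structure = np.ones((struct_size, struct_size), dtype=bool)
    refined = binary_opening(decision_map.astype(bool), structure)
    refined = binary_closing(refined, structure)
    return np.where(refined, img_a, img_b)

# Usage: blur a synthetic focused image, then fuse it with one of its
# blurred versions using a half-and-half decision map.
img = np.random.rand(64, 64)
blurred = simulate_defocus_levels(img)
decision_map = np.zeros((64, 64))
decision_map[:, :32] = 1  # left half "focused" in img
fused = fuse_with_decision_map(img, blurred[0], decision_map)
```

In the actual method the decision map would come from the Siamese CNN's focused/defocused classification followed by the patch-based sequential reconstruction; here a hand-made map simply demonstrates the refinement-and-selection step.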


