Joint patch clustering-based adaptive dictionary and sparse representation for multi-modality image fusion
Machine Vision and Applications (IF 3.3), Pub Date: 2022-07-21, DOI: 10.1007/s00138-022-01322-w
Chang Wang, Yang Wu, Yi Yu, Jun Qiang Zhao

In sparse-representation-based image fusion, the adaptive dictionary and the fusion rule strongly influence the multi-modality fusion result, and the maximum \(L_{1}\)-norm fusion rule may cause gray-level inconsistency in the fused image. To address this problem, we propose an improved multi-modality image fusion method that combines a joint patch clustering-based adaptive dictionary with sparse representation. First, we use a Gaussian filter to separate the high- and low-frequency information. Second, we adopt a local energy-weighted strategy for the low-frequency fusion. Third, we use the joint patch clustering algorithm to construct an over-complete adaptive learning dictionary, design a hybrid fusion rule based on the similarity of multiple norms of the sparse representation coefficients, and complete the high-frequency fusion. Last, we obtain the fusion result by transforming from the frequency domain back to the spatial domain. We evaluate the fusion results quantitatively with standard fusion metrics and demonstrate the superiority of the proposed method through comparison with state-of-the-art image fusion methods. The results show that our method achieves the highest scores in average gradient, general image quality, and edge preservation, and performs best in subjective visual assessment. We demonstrate its robustness by analyzing the influence of the parameters on the fusion result and the running time. We also extend the method to infrared-visible image fusion and multi-focus image fusion. In summary, the method offers good robustness and wide applicability.
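The decomposition-and-fusion pipeline described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it uses Gaussian filtering to split each source image into low- and high-frequency parts and fuses the low-frequency parts with local energy weights, as in the abstract, but it substitutes a simple maximum-absolute rule for the paper's sparse-coding step and multi-norm similarity hybrid rule. All function names and window/sigma parameters are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def decompose(img, sigma=2.0):
    """Split an image into low- and high-frequency parts with a Gaussian filter."""
    low = gaussian_filter(img, sigma)
    return low, img - low

def local_energy(img, size=3):
    """Local energy: windowed mean of squared intensities."""
    return uniform_filter(img ** 2, size=size)

def fuse(img_a, img_b, sigma=2.0, size=3):
    low_a, high_a = decompose(img_a, sigma)
    low_b, high_b = decompose(img_b, sigma)
    # Low-frequency fusion: local energy-weighted average.
    e_a, e_b = local_energy(low_a, size), local_energy(low_b, size)
    w = e_a / (e_a + e_b + 1e-12)
    low = w * low_a + (1.0 - w) * low_b
    # High-frequency fusion: max-absolute stand-in for the paper's
    # sparse-representation hybrid rule (a deliberate simplification).
    high = np.where(np.abs(high_a) >= np.abs(high_b), high_a, high_b)
    # Back from the decomposition to the spatial-domain fused image.
    return low + high
```

Note that fusing an image with itself returns the image unchanged (the weights reduce to 0.5 and the high-frequency parts coincide), which is a quick sanity check for any fusion rule of this form.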




Updated: 2022-07-23