Exploring a Unified Low Rank Representation for Multi-focus Image Fusion
Pattern Recognition (IF 8) Pub Date: 2020-11-01, DOI: 10.1016/j.patcog.2020.107752
Qiang Zhang, Fan Wang, Yongjiang Luo, Jungong Han

Abstract Recent years have witnessed a trend of using image representation models, including sparse representation (SR), low-rank representation (LRR) and their variants, for multi-focus image fusion. Despite encouraging preliminary results, existing methods conduct the fusion patch by patch, giving insufficient consideration to the spatial consistency among the image patches within a local region or an object. As a result, not only are spatial artifacts easily introduced into the fused image, but "jagged" artifacts also frequently arise on the boundaries between the focused and de-focused regions, an inherent problem of these patch-based fusion methods. To address these problems, we propose in this paper a new multi-focus image fusion method integrating super-pixel clustering and a unified LRR (ULRR) model. The algorithm is carried out in three steps. First, the source image is segmented into super-pixels of irregular sizes, rather than patches of regular sizes, to diminish the "jagged" artifacts while preserving object boundaries in the fused image. Second, a super-pixel clustering-based fusion strategy is employed to further reduce spatial artifacts in the fused image. This is achieved by the proposed ULRR model, which imposes low-rank constraints on each super-pixel cluster and is thus better suited to images with complicated scenes. Moreover, a Laplacian regularization term is incorporated into the proposed ULRR model to enforce spatial consistency among super-pixels within the same cluster. Finally, a focus measure is defined for each super-pixel to identify the focused and de-focused regions in the source image, jointly using the representation coefficients and sparse errors derived from the proposed ULRR model.
Extensive experiments demonstrate the superiority of the proposed fusion method over state-of-the-art fusion algorithms in diminishing both the spatial artifacts in the fused image and the "jagged" boundary artifacts between the focused and de-focused regions.
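The core idea of the second and third steps — decompose each super-pixel cluster into a low-rank part plus a residual, then score focus from the residual — can be sketched as follows. This is a minimal illustration, not the paper's method: it substitutes a truncated SVD for the nuclear-norm-regularized ULRR solve, omits the Laplacian regularizer, and uses made-up toy features; the function names and the l1-based focus score are the author of this sketch's assumptions.

```python
import numpy as np

def cluster_low_rank_decompose(X, rank):
    """Approximate a cluster of super-pixel feature vectors (columns of X)
    by a rank-`rank` component L plus a residual E, via truncated SVD.
    A stand-in for the nuclear-norm-regularized decomposition in ULRR."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    L = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
    E = X - L  # residual; plays the role of the sparse-error term
    return L, E

def focus_measure(E):
    """Per-super-pixel focus score: l1 energy of each residual column.
    The intuition is that focused super-pixels retain high-frequency
    detail that a low-rank model cannot explain, giving larger residuals."""
    return np.abs(E).sum(axis=0)

# Toy cluster: 8 "super-pixels" with 16-dim features.
# The first 4 columns share one pattern (exactly rank 1), the last 4 are
# detail-rich random vectors.
rng = np.random.default_rng(0)
base = rng.normal(size=(16, 1))
X = np.hstack([base @ rng.normal(size=(1, 4)),  # low-rank (smooth) part
               rng.normal(size=(16, 4))])       # detail-rich part

L, E = cluster_low_rank_decompose(X, rank=1)
scores = focus_measure(E)  # one focus score per super-pixel
```

In the actual method the decomposition is solved jointly over all clusters with low-rank and Laplacian constraints, and the focus decision combines the representation coefficients with the sparse errors rather than using the residual alone.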

Updated: 2020-11-01