An infrared and visible image fusion method based on multi-scale transformation and norm optimization
Information Fusion (IF 18.6) Pub Date: 2021-02-09, DOI: 10.1016/j.inffus.2021.02.008
Guofa Li , Yongjie Lin , Xingda Qu

In this paper, we propose a new infrared and visible image fusion method based on multi-scale transformation and norm optimization. In this method, a new loss function is designed with a contrast fidelity term (L2 norm) and a sparse constraint (L1 norm), and the split Bregman method is used to optimize this loss function to obtain pre-fusion images. The final fused base layer is obtained by decomposing the pre-fusion images with a multi-level decomposition latent low-rank representation (MDLatLRR) method. Then, using the pre-fusion image as the reference, the structural similarity (SSIM) index is introduced to evaluate the validity of the detail information from the visible image, and the SSIM is transformed into a weight map that is applied in an L2-norm-based optimization to generate the final fused detail layer. Our proposed method is evaluated qualitatively and quantitatively on four public datasets (i.e., the CVC14 driving dataset, the TNO dataset with natural scenarios, the RoadScene dataset, and the whole brain atlas dataset) and compared with 18 state-of-the-art image fusion methods. The results show that our proposed method is generally better than the compared methods at highlighting targets and retaining effective detail information.
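The abstract does not give the exact form of the loss, but the pre-fusion step belongs to the standard family of L2-fidelity plus L1-sparsity objectives solved with split Bregman iterations. The sketch below (not the authors' code) illustrates that generic scheme for min_x 0.5*||Ax - b||² + λ||x||₁; the operator A, the weights lam and mu, and the fixed iteration count are illustrative assumptions.

```python
# Minimal split Bregman sketch for a generic L2-fidelity + L1-sparsity objective,
# i.e. the same problem family as the pre-fusion optimization described above.
# A, b, lam, mu and the iteration count are illustrative assumptions, not the paper's values.
import numpy as np

def soft_threshold(v, t):
    """Elementwise shrinkage operator used for the L1 subproblem."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def split_bregman_l1(A, b, lam=0.1, mu=1.0, n_iter=100):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 with split Bregman updates."""
    n = A.shape[1]
    x = np.zeros(n)
    d = np.zeros(n)          # auxiliary variable enforcing d = x
    bb = np.zeros(n)         # Bregman (dual) variable
    Atb = A.T @ b
    lhs = A.T @ A + mu * np.eye(n)   # constant system matrix for the x-update
    for _ in range(n_iter):
        x = np.linalg.solve(lhs, Atb + mu * (d - bb))   # quadratic (L2) subproblem
        d = soft_threshold(x + bb, lam / mu)            # sparse (L1) subproblem
        bb = bb + x - d                                 # Bregman variable update
    return x
```

In the paper's setting the data-fidelity term plays the role of contrast preservation and the L1 term enforces sparsity; the same alternating structure (a linear solve, a shrinkage step, and a Bregman update) applies.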




Updated: 2021-02-09