Adaptive near-infrared and visible fusion for fast image enhancement
IEEE Transactions on Computational Imaging (IF 4.2), Pub Date: 2020-01-01, DOI: 10.1109/tci.2019.2956873
Mohamed Awad, Ahmed Elliethy, Hussein A. Aly

Near-infrared (NIR) band sensors capture digital images of scenes under special conditions such as haze, fog, mist, or overwhelming light, where visible (VS) band sensors are occluded. However, in contrast to VS images, NIR images poorly capture the textures and colors of the different objects in the scene. In this article, we propose a simple yet effective fusion approach that combines the VS and NIR images to produce an enhanced fused image with better scene details and colors similar to those of the VS image. The proposed approach first estimates a fusion map from the relative difference of the local contrasts of the VS and NIR images. Then, it extracts non-spectral spatial details from the NIR image. Finally, the extracted details are weighted according to the fusion map and injected into the VS image to produce the enhanced fused image. The proposed approach adaptively transfers from the NIR image only those details that contribute to the enhancement of the fused image. It produces realistic fused images by preserving the colors of the VS image, and it involves only simple, non-iterative calculations with $\mathcal{O}(n)$ complexity. The effectiveness of the proposed approach is experimentally verified by comparison with four state-of-the-art VS-NIR fusion approaches in terms of computational complexity and the quality of the enhanced fused images. Quality is evaluated using two color-distortion measures and a novel aggregation of several blind image quality assessment measures. The proposed approach shows superior performance: it produces enhanced fused images and preserves their quality even when the NIR images suffer from loss of texture or blurring degradations, with acceptably fast execution time. Source code of the proposed approach is available online.
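The three-step pipeline described in the abstract (contrast-based fusion map, NIR detail extraction, weighted injection) can be sketched as follows. This is a minimal illustration, not the authors' published implementation: the choice of local standard deviation as the contrast measure, a box-filter high-pass as the "non-spectral spatial details," and the window size `win` are all assumptions made for the sketch.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_nir_vs(vs_luma, nir, win=7, eps=1e-6):
    """Hypothetical sketch of contrast-adaptive VS-NIR detail injection.

    vs_luma, nir: float arrays in [0, 1] of the same shape
    (VS luminance channel and registered NIR image).
    """
    def local_contrast(img):
        # Local standard deviation over a win x win window
        # (assumed contrast measure, not necessarily the paper's).
        mean = uniform_filter(img, win)
        sq_mean = uniform_filter(img * img, win)
        return np.sqrt(np.maximum(sq_mean - mean * mean, 0.0))

    c_vs = local_contrast(vs_luma)
    c_nir = local_contrast(nir)

    # Fusion map from the relative difference of local contrasts,
    # clipped to [0, 1]: transfer details only where NIR is richer.
    w = np.clip((c_nir - c_vs) / (c_nir + c_vs + eps), 0.0, 1.0)

    # "Non-spectral spatial details": high-pass residual of the NIR image.
    details = nir - uniform_filter(nir, win)

    # Weighted injection into the VS luminance; colors of the VS image
    # are preserved because only luminance is modified.
    return np.clip(vs_luma + w * details, 0.0, 1.0)
```

Every step is a fixed number of per-pixel filtering passes, which is consistent with the non-iterative $\mathcal{O}(n)$ cost claimed in the abstract.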

Updated: 2020-01-01