Colorization of fusion image of infrared and visible images based on parallel generative adversarial network approach
Journal of Intelligent & Fuzzy Systems (IF 1.7), Pub Date: 2021-06-16, DOI: 10.3233/jifs-210987
Lei Chen 1, Jun Han 1, Feng Tian 2

Fusing infrared (IR) and visible images has many advantages and can be applied to tasks such as target detection and recognition. Color can provide more accurate and distinct features, but the low resolution and low contrast of fused images make colorization a challenging task. In this paper, we propose a method based on parallel generative adversarial networks (GANs) to address this challenge. We use the IR image, the visible image and the fused image as the ground truth for the ‘L’, ‘a’ and ‘b’ channels of the Lab color model, respectively. Through the parallel GANs, we obtain Lab data that can be converted to an RGB image. We verify our method on the TNO and RoadScene datasets and compare it against three other deep learning (DL) based methods using five objective evaluation metrics. The results demonstrate that the proposed approach outperforms state-of-the-art methods.
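The colorization pipeline described in the abstract hinges on treating the three network outputs as the L, a and b channels of a Lab image and then converting the result to RGB. The minimal Python sketch below is not the authors' code; the channel scaling, the placeholder generator outputs and the use of scikit-image's lab2rgb are assumptions made only to illustrate that assembly and conversion step.

```python
# Sketch of the Lab-channel assembly and Lab -> RGB conversion step.
# The three single-channel maps stand in for the outputs of the parallel
# GAN generators and are assumed to be normalized to [0, 1].
import numpy as np
from skimage.color import lab2rgb


def assemble_rgb(l_chan, a_chan, b_chan):
    """Stack predicted L, a, b channels into a Lab image and convert to RGB.

    l_chan, a_chan, b_chan: float arrays of shape (H, W) with values in [0, 1].
    """
    # Rescale to conventional Lab ranges: L in [0, 100], a and b in [-128, 127].
    L = l_chan * 100.0
    a = a_chan * 255.0 - 128.0
    b = b_chan * 255.0 - 128.0
    lab = np.stack([L, a, b], axis=-1).astype(np.float64)
    return lab2rgb(lab)  # RGB float image in [0, 1]


if __name__ == "__main__":
    h, w = 256, 256
    rng = np.random.default_rng(0)
    # Placeholder predictions; in the paper these would come from the parallel
    # GANs trained against the IR, visible and fused images respectively.
    rgb = assemble_rgb(rng.random((h, w)), rng.random((h, w)), rng.random((h, w)))
    print(rgb.shape, rgb.min(), rgb.max())
```

Keeping the conversion outside the networks, as sketched here, lets each generator be trained on a single channel of the Lab model before the channels are merged for display.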

Updated: 2021-06-18