Hyperspectral and Multispectral Image Fusion Via Self-Supervised Loss and Separable Loss
IEEE Transactions on Geoscience and Remote Sensing ( IF 8.2 ) Pub Date : 2022-09-15 , DOI: 10.1109/tgrs.2022.3204769
Huiling Gao 1 , Shutao Li 1 , Renwei Dian 2

Fusion of hyperspectral images (HSIs), which have low spatial but high spectral resolution, with multispectral images (MSIs), which have high spatial but low spectral resolution, is an important way to improve spatial resolution. Existing deep-learning-based image fusion methods usually neglect the ability of neural networks to understand differential features. In addition, their loss constraints do not stem from the physical characteristics of hyperspectral (HS) imaging sensors. We propose a self-supervised loss and a spatially and spectrally separable loss: 1) the self-supervised loss: unlike previous approaches that directly stack the upsampled HSIs and MSIs as input, we expect the preprocessed HSIs to preserve not only the integrity of the HSI information but also the most reasonable balance between overall spatial and spectral features. First, the preinterpolated HSIs are decomposed into subspaces that serve as self-supervised labels; then, a network is designed to learn the subspace information and extract the most discriminative features. 2) the separable loss: according to the physical characteristics of HSIs, the pixel-based mean squared error loss is first divided into a spatial-domain loss and a spectral-domain loss; then, similarity scores of the images are computed and used to construct weighting coefficients for the two domain losses; finally, the separable loss is expressed jointly through these weights. Experiments on public benchmark datasets indicate that the self-supervised loss and the separable loss improve fusion performance.

Updated: 2022-09-15