Variable Subpixel Convolution Based Arbitrary-Resolution Hyperspectral Pansharpening
IEEE Transactions on Geoscience and Remote Sensing (IF 8.2). Pub Date: 2022-07-11. DOI: 10.1109/tgrs.2022.3189624
Lin He, Jinhua Xie, Jun Li, Antonio Plaza, Jocelyn Chanussot, Jiawei Zhu

Standard hyperspectral (HS) pansharpening relies on fusion to enhance low-resolution HS (LRHS) images to the resolution of their matching panchromatic (PAN) images; in practice, this normally stipulates that the model's scale remain invariant between the training phase and the pansharpening phase. By contrast, arbitrary-resolution HS (ARHS) pansharpening seeks to pansharpen LRHS images to any user-customized resolution. For such a new HS pansharpening task, it is not feasible to train and store convolutional neural network (CNN) models for all possible candidate scales, which implies that the single model obtained in the training phase must generalize to yield HS images at any resolution in the pansharpening phase. To address this challenge, a novel variable subpixel convolution (VSPC)-based CNN (VSPC-CNN) method, following our arbitrary upsampling CNN (AU-CNN) framework, is developed for ARHS pansharpening. The VSPC-CNN method comprises a two-stage upscaling pipeline: the first stage raises the spatial resolution of the input HS image to that of the PAN image through a pre-pansharpening module, and then a VSPC-encapsulated arbitrary-scale attention upsampling (ASAU) module is cascaded for arbitrary resolution adjustment. After training with given scales, the model can be generalized to pansharpen HS images to arbitrary scales, provided that the spatial patterns remain invariant across the training and pansharpening phases. Experimental results from several specific VSPC-CNNs on both simulated and real HS datasets show the superiority of the proposed method.
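To make the upsampling idea concrete, below is a minimal, hypothetical PyTorch sketch of arbitrary-scale subpixel upsampling. It is not the authors' VSPC or ASAU implementation, and the module, class, and argument names are invented for illustration; it only shows one common way to generalize a subpixel (pixel-shuffle) convolution to a user-chosen, possibly non-integer scale by expanding channels for the next integer factor and then resampling to the exact target size.

```python
# Illustrative sketch only: combines a subpixel (pixel-shuffle) convolution with
# bilinear resampling to reach an arbitrary scale r > 1. Not the paper's VSPC.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F


class ArbitraryScaleSubpixelUpsample(nn.Module):
    """Upsample a feature map to any target scale r > 1 (hypothetical example).

    Strategy: expand channels for the next integer factor ceil(r) with a 3x3
    convolution, rearrange them spatially with pixel shuffle, then bilinearly
    resample to the exact (possibly non-integer) target size.
    """

    def __init__(self, channels: int, max_int_scale: int = 4):
        super().__init__()
        self.max_int_scale = max_int_scale
        # One channel-expansion convolution per candidate integer factor 2..max_int_scale.
        self.expand = nn.ModuleDict({
            str(s): nn.Conv2d(channels, channels * s * s, kernel_size=3, padding=1)
            for s in range(2, max_int_scale + 1)
        })

    def forward(self, x: torch.Tensor, scale: float) -> torch.Tensor:
        b, c, h, w = x.shape
        # Pick the smallest integer factor that covers the requested scale.
        s = min(max(2, math.ceil(scale)), self.max_int_scale)
        y = self.expand[str(s)](x)        # (b, c*s*s, h, w)
        y = F.pixel_shuffle(y, s)         # (b, c, h*s, w*s)
        out_h, out_w = round(h * scale), round(w * scale)
        # Resample down/up to the exact user-requested resolution.
        return F.interpolate(y, size=(out_h, out_w),
                             mode="bilinear", align_corners=False)


if __name__ == "__main__":
    up = ArbitraryScaleSubpixelUpsample(channels=31)   # e.g. 31 HS bands
    lrhs_features = torch.randn(1, 31, 32, 32)
    for r in (2.0, 2.5, 3.7):
        print(r, tuple(up(lrhs_features, r).shape))
```

A single module of this kind, trained on a few fixed scales, can be queried at any scale at inference time, which is the property the ARHS setting requires; the paper's VSPC additionally learns the subpixel rearrangement itself rather than relying on fixed bilinear resampling.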

Updated: 2022-07-11