Spatial–Spectral Transformer With Cross-Attention for Hyperspectral Image Classification
IEEE Transactions on Geoscience and Remote Sensing (IF 7.5) Pub Date: 9-1-2022, DOI: 10.1109/tgrs.2022.3203476
Yishu Peng, Yuwen Zhang, Bing Tu, Qianming Li, Wujing Li

Convolutional neural networks (CNNs) have been widely used in hyperspectral image (HSI) classification tasks because of their excellent local spatial feature extraction capabilities. However, because CNNs struggle to establish dependencies across long data sequences, they are limited when processing hyperspectral spectral-sequence features. To overcome these limitations, inspired by the Transformer model, a spatial–spectral transformer with cross-attention (CASST) method is proposed. Overall, the method consists of a dual-branch structure, i.e., a spatial branch and a spectral-sequence branch. The former captures fine-grained spatial information of the HSI, and the latter extracts spectral features and establishes interdependencies between spectral sequences. Specifically, to enhance feature consistency and relieve the computational burden, we design a spatial–spectral cross-attention module with weighted sharing to extract interactive spatial–spectral fusion features within each Transformer block, and we further develop a spatial–spectral weighted sharing mechanism to capture robust semantic features across Transformer blocks. Performance evaluation experiments on three hyperspectral classification datasets demonstrate that the CASST method achieves better accuracy than state-of-the-art Transformer classification models and mainstream classification networks.
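As a rough illustration of the cross-attention idea described in the abstract, the following PyTorch sketch (not the authors' implementation) shows a dual-branch block in which spatial tokens query spectral tokens and vice versa. The embedding size, head count, token counts, and the single attention module reused for both directions (standing in for the paper's weighted-sharing scheme) are all illustrative assumptions.

# Minimal sketch, assuming 64-dim tokens and 4 heads; not the CASST code.
import torch
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        # One attention module reused in both directions loosely mimics the
        # idea of sharing projection weights between the two branches.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm_spa = nn.LayerNorm(dim)
        self.norm_spe = nn.LayerNorm(dim)

    def forward(self, spatial_tokens, spectral_tokens):
        # spatial_tokens: (B, N_spatial, dim); spectral_tokens: (B, N_spectral, dim)
        q_spa = self.norm_spa(spatial_tokens)
        q_spe = self.norm_spe(spectral_tokens)
        # Spatial branch attends to spectral tokens (cross-attention) ...
        spa_out, _ = self.attn(q_spa, q_spe, q_spe)
        # ... and the spectral branch attends to spatial tokens.
        spe_out, _ = self.attn(q_spe, q_spa, q_spa)
        # Residual connections keep each branch's own representation.
        return spatial_tokens + spa_out, spectral_tokens + spe_out

# Toy usage: an HSI patch tokenized into 49 spatial and 30 spectral-group tokens.
spa = torch.randn(2, 49, 64)
spe = torch.randn(2, 30, 64)
out_spa, out_spe = CrossAttentionBlock()(spa, spe)
print(out_spa.shape, out_spe.shape)  # torch.Size([2, 49, 64]) torch.Size([2, 30, 64])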

Updated: 2024-08-28