Hyper-ES2T: Efficient Spatial–Spectral Transformer for the classification of hyperspectral remote sensing images

https://doi.org/10.1016/j.jag.2022.103005
Open access under a Creative Commons license

Highlights

  • The first Transformer-based bilateral classification network for HSI processing.

  • A strong baseline for HSI classification with a hierarchical architecture.

  • Simultaneously exploits the merits of convolution, Transformer, and MLP.

  • AFEM and EMHSA are proposed for feature enhancement and improved efficiency.

  • State-of-the-art performance on four benchmark datasets for HSI classification.

Abstract

In recent years, convolutional neural networks have dominated downstream tasks on hyperspectral remote sensing images (HSIs) thanks to their strong local feature extraction capability. However, convolution operations cannot effectively capture long-range dependencies, and repeatedly stacking convolutional layers to build a hierarchical structure only alleviates this problem rather than solving it completely. Meanwhile, the Transformer architecture is well suited to this problem, as it provides an opportunity to capture long-distance dependencies between tokens. Although Transformers have recently been introduced into the HSI classification field, most related works exploit only a single kind of spatial or spectral information and neglect to explore the optimal fusion of these two different-level features. Therefore, to fully exploit the abundant spatial information and spectral correlations in HSIs in a highly effective and efficient way, we present the first attempt to explore the Transformer architecture in a dual-branch manner and propose a novel bilateral classification network named Hyper-ES2T. In addition, the Aggregated Feature Enhancement Module (AFEM) is proposed for effective feature aggregation and further spatial–spectral feature enhancement. Furthermore, to tackle the high computational cost of the vanilla self-attention block in the Transformer, we design the Efficient Multi-Head Self-Attention (EMHSA) block, pursuing a trade-off between model accuracy and efficiency. The proposed Hyper-ES2T reaches new state-of-the-art performance and outperforms previous methods by a significant margin on four benchmark datasets for HSI classification, which demonstrates its powerful generalization ability and superior feature representation capability.
We anticipate that this work provides novel insight into designing Transformer-based network architectures with superior performance and high model efficiency, which may inspire further research in this direction within the HSI processing field. The source code will be available at https://github.com/Wenxuan-1119/Hyper-ES2T.
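The abstract notes that vanilla self-attention is expensive: its attention map grows quadratically with the number of tokens. The details of the proposed EMHSA block are not given here, so the following is only an illustrative sketch of one common way efficient attention blocks cut this cost, by pooling the key/value tokens with a reduction ratio r so the attention map shrinks from n × n to n × (n/r). All function names and the pooling choice below are assumptions for illustration, not the paper's EMHSA.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def reduced_self_attention(x, r=4):
    """Single-head self-attention with spatially reduced keys/values.

    x: (n_tokens, d) token embeddings.
    r: reduction ratio -- keys/values are average-pooled over groups of
       r tokens, so the attention map is n x (n/r) instead of n x n.
    Illustrative sketch only (identity Q/K/V projections for brevity);
    not the paper's actual EMHSA block.
    """
    n, d = x.shape
    # Pool consecutive groups of r tokens into reduced keys/values.
    kv = x[: n - n % r].reshape(n // r, r, d).mean(axis=1)
    q, k, v = x, kv, kv
    attn = softmax(q @ k.T / np.sqrt(d))  # shape (n, n/r)
    return attn @ v                       # shape (n, d)

tokens = np.random.default_rng(0).normal(size=(16, 8))
out = reduced_self_attention(tokens, r=4)
print(out.shape)  # (16, 8)
```

With r = 4, the attention matrix has 16 × 4 entries rather than 16 × 16, which is the kind of accuracy/efficiency trade-off the abstract describes.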

Keywords

Deep learning
Hyperspectral image classification
Remote sensing
Spatial–spectral Transformer
Feature enhancement
Bilateral classification network

Data availability

We have shared the link to our code in the abstract of the revised manuscript.
