Unsupervised Self-Correlated Learning Smoothy Enhanced Locality Preserving Graph Convolution Embedding Clustering for Hyperspectral Images
IEEE Transactions on Geoscience and Remote Sensing (IF 7.5), Pub Date: 2022-08-29, DOI: 10.1109/tgrs.2022.3202865
Yao Ding¹, Zhili Zhang¹, Xiaofeng Zhao¹, Wei Cai¹, Nengjun Yang¹, Haojie Hu¹, Xianxiang Huang², Yuan Cao³, Weiwei Cai⁴

Hyperspectral image (HSI) clustering is an extremely fundamental but challenging task with no labeled samples. Deep clustering methods have attracted increasing attention and have achieved remarkable success in HSI classification. However, most existing clustering methods are ineffective for large-scale HSI due to their poor robustness, adaptability, and feature representation. In this article, to address these issues, we introduce unsupervised self-correlated learning smoothy enhanced locality preserving graph convolution embedding clustering ($\text{S}^{2}$LGCC) for large-scale HSI. Specifically, a spectral-spatial transformation is introduced to transform the original HSI into a graph while preserving the local spectral features and spatial structures. After that, a locality preserving graph convolutional embedding encoder is designed to learn the hidden representation from the graph, in which a deep layer-wise graph convolutional network (LGAT) is proposed to preserve adaptive layer-wise locality features. In addition, a self-correlated learning smoothy module is developed to learn the smoothy information and the nonlocal relationships in the hidden representation space for clustering. Finally, a self-training strategy is proposed to cluster the graph nodes, in which a self-training clustering objective employs soft labels to supervise the clustering process. The proposed $\text{S}^{2}$LGCC is jointly optimized by a fusion graph reconstruction loss and a self-training clustering loss, and the two benefit each other. On the Indian Pines (IP), Salinas, and UH2013 datasets, the overall accuracies (OAs) of our $\text{S}^{2}$LGCC are 71.76%, 82.61%, and 63.82%, respectively.
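As a rough illustration of the pipeline the abstract describes (HSI turned into a graph, a graph convolutional embedding encoder, graph reconstruction, and a self-training clustering objective driven by soft labels), the following PyTorch sketch wires these pieces together. It is a minimal sketch under stated assumptions, not the authors' implementation: the two-layer GCN encoder, the inner-product adjacency decoder, the Student's-t soft assignment, and the loss weight gamma are standard choices borrowed from graph autoencoders and DEC-style clustering, and the paper's locality preserving LGAT layers and self-correlated learning smoothy module are omitted.

```python
# Minimal sketch of graph embedding clustering with a self-training (soft-label)
# objective. Assumes node features X (e.g., per-superpixel spectra), a
# normalized adjacency a_hat, and a binary adjacency a as reconstruction target.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One graph convolution: a_hat @ X @ W followed by a nonlinearity."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x, a_hat):
        return F.relu(a_hat @ self.weight(x))


class GraphEmbeddingClusterer(nn.Module):
    """Graph encoder + inner-product decoder + learnable cluster centroids."""

    def __init__(self, in_dim, hid_dim, emb_dim, n_clusters):
        super().__init__()
        self.enc1 = GCNLayer(in_dim, hid_dim)
        self.enc2 = GCNLayer(hid_dim, emb_dim)
        self.centroids = nn.Parameter(torch.randn(n_clusters, emb_dim))

    def forward(self, x, a_hat):
        z = self.enc2(self.enc1(x, a_hat), a_hat)   # node embeddings
        a_rec = torch.sigmoid(z @ z.t())            # reconstructed adjacency
        # Student's-t soft assignment of each node to each cluster (soft labels)
        d2 = torch.cdist(z, self.centroids) ** 2
        q = (1.0 + d2).pow(-1.0)
        q = q / q.sum(dim=1, keepdim=True)
        return z, a_rec, q


def target_distribution(q):
    """Sharpen the soft assignments to form the self-training target P."""
    w = q ** 2 / q.sum(dim=0)
    return w / w.sum(dim=1, keepdim=True)


def joint_loss(a, a_rec, q, gamma=0.1):
    """Graph reconstruction loss + self-training clustering (KL) loss."""
    rec = F.binary_cross_entropy(a_rec, a)
    p = target_distribution(q).detach()
    kl = F.kl_div(q.log(), p, reduction="batchmean")
    return rec + gamma * kl
```

With a k-nearest-neighbor adjacency built from superpixel spectra (an assumed preprocessing step), training would alternate forward passes with minimizing joint_loss, the centroids typically initialized by k-means on pretrained embeddings; the sharpened target P is what lets the soft labels supervise the clustering, as in the self-training strategy the abstract describes.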

Updated: 2022-08-29