Topological Structure and Semantic Information Transfer Network for Cross-Scene Hyperspectral Image Classification
IEEE Transactions on Neural Networks and Learning Systems (IF 10.4) Pub Date: 2021-09-16, DOI: 10.1109/tnnls.2021.3109872
Yuxiang Zhang, Wei Li, Mengmeng Zhang, Ying Qu, Ran Tao, Hairong Qi

Domain adaptation techniques have been widely applied to cross-scene hyperspectral image (HSI) classification. Most existing methods use convolutional neural networks (CNNs) to extract statistical features from the data and often neglect the potential topological structure information between different land-cover classes. CNN-based approaches generally model only the local spatial relationships of samples, which largely limits their ability to capture the nonlocal topological relationships that better represent the underlying data structure of HSI. To address these shortcomings, a Topological structure and Semantic information Transfer network (TSTnet) is developed. The method employs a graph structure to characterize topological relationships and a graph convolutional network (GCN), which is well suited to processing such relationships, for cross-scene HSI classification. In the proposed TSTnet, graph optimal transmission (GOT) is used to align topological relationships and thereby assist the distribution alignment between the source and target domains based on the maximum mean discrepancy (MMD). Furthermore, subgraphs from the source and target domains are dynamically constructed from CNN features to exploit the discriminative capacity of CNN models, which in turn improves the robustness of classification. In addition, to better characterize the correlation between distribution alignment and topological-relationship alignment, a consistency constraint is enforced to integrate the outputs of the CNN and the GCN. Experimental results on three cross-scene HSI datasets demonstrate that the proposed TSTnet performs significantly better than several state-of-the-art domain adaptation approaches. The code is available at https://github.com/YuxiangZhang-BIT/IEEE_TNNLS_TSTnet.
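To give a concrete picture of two of the loss terms named in the abstract, below is a minimal PyTorch-style sketch of a kernel-based maximum mean discrepancy (MMD) loss and a CNN/GCN prediction-consistency penalty. The function names (`mmd_loss`, `consistency_loss`), the Gaussian-kernel bandwidth `sigma`, and the MSE form of the agreement term are illustrative assumptions, not the authors' implementation; the actual code is in the linked repository.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(x, y, sigma=1.0):
    """Pairwise Gaussian (RBF) kernel matrix between rows of x and y."""
    sq_dist = torch.cdist(x, y, p=2) ** 2        # squared Euclidean distances
    return torch.exp(-sq_dist / (2 * sigma ** 2))

def mmd_loss(source_feat, target_feat, sigma=1.0):
    """Biased estimate of the squared MMD between source and target feature batches."""
    k_ss = gaussian_kernel(source_feat, source_feat, sigma).mean()
    k_tt = gaussian_kernel(target_feat, target_feat, sigma).mean()
    k_st = gaussian_kernel(source_feat, target_feat, sigma).mean()
    return k_ss + k_tt - 2.0 * k_st

def consistency_loss(cnn_logits, gcn_logits):
    """Penalize disagreement between CNN and GCN class predictions (MSE on softmax outputs)."""
    return F.mse_loss(F.softmax(cnn_logits, dim=1), F.softmax(gcn_logits, dim=1))

# Example usage with random stand-in features and logits:
src = torch.randn(64, 128)   # source-domain CNN features
tgt = torch.randn(64, 128)   # target-domain CNN features
print(mmd_loss(src, tgt).item())
print(consistency_loss(torch.randn(64, 7), torch.randn(64, 7)).item())
```

In practice, the kernel bandwidth and the weighting of these terms against the classification and GOT losses would need to be tuned per dataset.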

Updated: 2021-09-16