Multi-GPU Implementation of Nearest Regularized Subspace Classifier for Hyperspectral Imagery
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing ( IF 4.7 ) Pub Date : 2020-01-01 , DOI: 10.1109/jstars.2020.3004064
Zhixin Li , Jun Ni , Fan Zhang , Wei Li , Yongsheng Zhou

The classification of hyperspectral imagery (HSI) is an important part of HSI applications. The nearest regularized subspace (NRS) classifier, one of the sparse-representation-based methods, is effective for HSI classification. However, its high computational complexity limits its use in time-critical scenarios. To improve the computational efficiency of the NRS classifier, this article proposes a new parallel implementation on the graphics processing unit (GPU). First, an optimized single-GPU algorithm is designed for parallel computing; a multi-GPU version is then developed to further improve efficiency. In addition, optimal parameters for the data stream and the memory strategy are proposed to suit the parallel environment. To verify the algorithm's effectiveness, a serial algorithm running on the central processing unit (CPU) is used as the baseline in a comparative experiment. The performance of the multi-GPU approach is evaluated on two hyperspectral image datasets. Compared with the serial algorithm, the multi-GPU method with four GPUs achieves up to $360\times$ acceleration.
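To make the serial baseline concrete, the following is a minimal NumPy sketch of the per-pixel NRS computation that the paper parallelizes: for each class, a Tikhonov-regularized least-squares approximation of the test spectrum is solved in closed form, and the class with the smallest residual wins. Function and variable names, and the regularization default `lam`, are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def nrs_classify(y, class_dicts, lam=0.1):
    """Classify one test pixel y (shape (D,)) against per-class dictionaries.

    class_dicts: list of (D, N_c) arrays, training spectra of each class.
    lam: regularization strength (illustrative default, not from the paper).
    """
    residuals = []
    for Xc in class_dicts:
        # Distance-weighted Tikhonov matrix: biases the weights toward
        # training samples that are spectrally close to y.
        dists = np.linalg.norm(Xc - y[:, None], axis=0)
        Gamma = np.diag(dists)
        # Closed-form solution of the regularized least-squares problem:
        # alpha = (Xc^T Xc + lam * Gamma^T Gamma)^{-1} Xc^T y
        A = Xc.T @ Xc + lam * (Gamma.T @ Gamma)
        alpha = np.linalg.solve(A, Xc.T @ y)
        # Class-wise reconstruction residual.
        residuals.append(np.linalg.norm(y - Xc @ alpha))
    return int(np.argmin(residuals))
```

Since this closed-form solve is repeated independently for every test pixel and every class, the workload is embarrassingly parallel across pixels, which is what makes the GPU (and multi-GPU, by splitting the pixel set across devices) mapping described in the abstract attractive.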
