Deep Temporal-Spatial Feature Learning for Motor Imagery-Based Brain–Computer Interfaces
IEEE Transactions on Neural Systems and Rehabilitation Engineering (IF 4.9). Pub Date: 2020-09-21. DOI: 10.1109/tnsre.2020.3023417
Junjian Chen, Zhuliang Yu, Zhenghui Gu, Yuanqing Li

Motor imagery (MI) decoding is an important part of brain-computer interface (BCI) research: it translates the subject's intentions into commands that external devices can execute. Traditional methods for discriminative feature extraction, such as common spatial pattern (CSP) and filter bank common spatial pattern (FBCSP), focus only on the energy features of the electroencephalography (EEG) signal and thus leave its temporal information unexplored. However, the temporal information of spatially filtered EEG may be critical to improving MI decoding performance. In this paper, we propose a deep learning approach for MI decoding, termed filter-bank spatial filtering and temporal-spatial convolutional neural network (FBSF-TSCNN), in which the FBSF block transforms the raw EEG signals into an appropriate intermediate EEG representation and the TSCNN block then decodes these intermediate signals. Moreover, a novel stage-wise training strategy is proposed to mitigate the optimization difficulty of the TSCNN block when training samples are insufficient. First, the feature extraction layers are trained by optimizing the triplet loss. Then, the classification layers are trained by optimizing the cross-entropy loss. Finally, the entire network (TSCNN) is fine-tuned with the back-propagation (BP) algorithm. Experimental evaluations on the BCI IV 2a and SMR-BCI datasets reveal that the proposed stage-wise training strategy yields significant performance improvement over the conventional end-to-end training strategy, and that the proposed approach is comparable with the state-of-the-art method.
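The first training stage optimizes a triplet loss, which pulls embeddings of same-class MI trials together while pushing different-class embeddings apart by a margin. A minimal NumPy sketch of that loss term is below; the function name, margin value, and toy embeddings are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Squared-Euclidean triplet loss over a batch of embeddings.

    anchor/positive come from the same MI class, negative from a
    different class; the loss is zero once the negative is at least
    `margin` farther from the anchor than the positive is.
    """
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # anchor-positive distance
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # anchor-negative distance
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

# Toy 16-dim embeddings for a batch of 8 trials.
rng = np.random.default_rng(0)
a = rng.normal(size=(8, 16))
p = a + 0.1 * rng.normal(size=(8, 16))  # positives near their anchors
n = -a                                  # negatives far from their anchors
loss_easy = triplet_loss(a, p, n)       # near zero: triplets already satisfied
```

In the full scheme, this loss would be minimized over the feature extraction layers before the cross-entropy stage trains the classification layers and BP fine-tunes the whole TSCNN.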

Updated: 2020-11-12