Subject-based Dipole Selection for Decoding Motor Imagery Tasks
Neurocomputing (IF 5.5) Pub Date: 2020-08-01, DOI: 10.1016/j.neucom.2020.03.055
Ming-ai Li, Yu-xin Dong, Yan-jun Sun, Jin-fu Yang, Li-juan Duan

Abstract In BCI rehabilitation systems, decoding motor imagery tasks (MI-tasks) with dipoles in the brain-source domain has gradually become a new research focus. For complex multiclass MI-tasks, the number of activated dipoles is large, and the activation area, activation time and intensity also differ across subjects; identifying a smaller, subject-specific set of dipoles is therefore very important. Two main dipole-selection methods exist: one is based on physiological functional-partition theory, and the other on human experience. However, the dipoles selected by both methods are still numerous and contain redundant information, and they are identical in number and position across subjects, which is not necessarily ideal for distinguishing different MI-tasks. In this paper, a data-driven method is used to preliminarily select fully activated dipoles with large amplitudes; the obtained dipoles are then refined using the continuous wavelet transform (CWT) to best reflect the differences among the multiclass MI-tasks, yielding a subject-based dipole selection method named PRDS. PRDS is further used to decode multiclass MI-tasks: representative dipoles are found, their wavelet coefficient power is computed and fed into the one-vs.-one common spatial pattern (OVO-CSP) for feature extraction, and the resulting features are classified by a support vector machine. We denote this decoding method D-CWTCSP; it enhances spatial resolution and makes full use of time-frequency-spatial information. Experiments are carried out on a public dataset with nine subjects and four classes of MI-tasks, and the proposed D-CWTCSP is compared with related methods in sensor space and brain-source space in terms of decoding accuracy, standard deviation, recall rate and kappa value. The experimental results show that D-CWTCSP reaches an average decoding accuracy of 82.66% over the nine subjects, an 8–20% improvement over the other methods, reflecting its clear superiority in decoding accuracy.
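
The decoding pipeline described in the abstract (dipole time courses -> CWT power -> one-vs.-one CSP -> SVM) can be sketched roughly as follows. This is only an illustrative outline under stated assumptions, not the authors' implementation: the array dipole_tc, the Morlet wavelet, the 8-30 Hz band, the sampling rate and all hyperparameters are assumptions, and random data stands in for the subject-specific dipole time courses that PRDS and a source-localization step would actually provide.

import numpy as np
import pywt
from mne.decoding import CSP
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

sfreq = 250.0                                    # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
dipole_tc = rng.standard_normal((72, 10, 500))   # placeholder: trials x selected dipoles x samples
labels = rng.integers(0, 4, size=72)             # four MI classes

# Wavelet-coefficient power of each dipole time course in an assumed 8-30 Hz band.
freqs = np.arange(8.0, 31.0, 2.0)
scales = pywt.central_frequency('morl') * sfreq / freqs

def cwt_power(trials):
    """Average squared CWT coefficients over frequencies, per dipole and sample."""
    out = np.empty_like(trials)
    for i, trial in enumerate(trials):
        coefs, _ = pywt.cwt(trial, scales, 'morl', axis=-1)   # (n_freqs, n_dipoles, n_times)
        out[i] = (np.abs(coefs) ** 2).mean(axis=0)
    return out

def fit_ovo_csp(X, y, n_components=4):
    """Fit one CSP per class pair (one-vs.-one)."""
    csps, classes = [], np.unique(y)
    for i, a in enumerate(classes):
        for b in classes[i + 1:]:
            mask = np.isin(y, [a, b])
            csps.append(CSP(n_components=n_components, log=True).fit(X[mask], y[mask]))
    return csps

def ovo_csp_features(csps, X):
    """Concatenate the log-variance features from every pairwise CSP."""
    return np.concatenate([c.transform(X) for c in csps], axis=1)

P = cwt_power(dipole_tc)
X_tr, X_te, y_tr, y_te = train_test_split(P, labels, test_size=0.25,
                                          stratify=labels, random_state=0)
csps = fit_ovo_csp(X_tr, y_tr)                   # CSP filters learned on training trials only
svm = SVC(kernel='rbf', C=1.0).fit(ovo_csp_features(csps, X_tr), y_tr)
print('held-out accuracy:', svm.score(ovo_csp_features(csps, X_te), y_te))

In a real run, dipole_tc would hold the time courses of the dipoles retained by PRDS for each subject, and evaluation would follow the cross-validation scheme used in the paper rather than a single train/test split.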

Updated: 2020-08-01