A TrAdaBoost Method for Detecting Multiple Subjects’ N200 and P300 Potentials Based on Cross-Validation and an Adaptive Threshold
International Journal of Neural Systems (IF 6.6), Pub Date: 2019-12-23, DOI: 10.1142/s0129065720500094
Mengfan Li, Fang Lin, Guizhi Xu

Traditional training methods must collect a large amount of data from every subject to train a subject-specific classifier, which causes subject fatigue and imposes a training burden. This study proposes a novel training method, TrAdaBoost based on cross-validation and an adaptive threshold (CV-T-TAB), which reduces the amount of data required for training by selecting and combining classifiers from multiple subjects that perform well on a new subject. The method adopts cross-validation to extend the new subject's training data and sets an adaptive threshold to select the optimal combination of classifiers. Twenty-five subjects participated in the N200- and P300-based brain–computer interface experiment. The study compares CV-T-TAB to five traditional training methods by testing them on the training of a support vector machine. Accuracy, information transfer rate, area under the curve, recall and precision are used to evaluate performance under nine conditions with different amounts of data. CV-T-TAB outperforms the other methods and retains high accuracy even when the amount of data is reduced to one-third of the original amount. The results imply that CV-T-TAB effectively improves the performance of a subject-specific classifier trained on a small amount of data by adopting multiple subjects' classifiers, which reduces the training cost.
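The abstract does not spell out the CV-T-TAB algorithm itself, but the core idea — score classifiers trained on other subjects against a new subject's small dataset, keep those above an adaptive threshold, and combine them with the new subject's own classifier — can be sketched as follows. This is a minimal illustrative sketch using scikit-learn SVMs on synthetic data; the mean-score threshold rule and majority-vote combination are hypothetical stand-ins, not the paper's exact procedure.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_subject(n=120, noise=0.0):
    """Synthetic stand-in for one subject's feature/label data."""
    X = rng.normal(size=(n, 8))
    # Label depends on the first feature, with subject-specific noise.
    y = (X[:, 0] + noise * rng.normal(size=n) > 0).astype(int)
    return X, y

# Data from three "source" subjects and a small set from the new subject.
source_subjects = [make_subject(noise=s) for s in (0.2, 0.5, 2.0)]
X_new, y_new = make_subject(n=30, noise=0.3)

# Train one SVM per source subject (as if collected previously).
source_clfs = [SVC(kernel="linear").fit(X, y) for X, y in source_subjects]

# Score each pre-trained source classifier on the new subject's small set.
# (The paper additionally uses cross-validation to make better use of
# this small set; a single direct score keeps the sketch short.)
scores = [clf.score(X_new, y_new) for clf in source_clfs]

# Adaptive threshold: keep classifiers scoring at or above the mean
# (an illustrative rule; the paper's threshold is adaptive in its own way).
threshold = np.mean(scores)
selected = [c for c, s in zip(source_clfs, scores) if s >= threshold]

# Combine the selected source classifiers with the new subject's own SVM.
new_clf = SVC(kernel="linear").fit(X_new, y_new)
ensemble = selected + [new_clf]

def predict(X):
    """Majority vote over the combined classifier ensemble."""
    votes = np.stack([c.predict(X) for c in ensemble])
    return (votes.mean(axis=0) >= 0.5).astype(int)
```

Because at least one source score always reaches the mean, the ensemble is never empty, and the new subject contributes only the 30-sample set rather than a full training session — which is the training-cost reduction the abstract describes.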

Updated: 2019-12-23