Enhancing transfer performance across datasets for brain-computer interfaces using a combination of alignment strategies and adaptive batch normalization
Journal of Neural Engineering (IF 3.7) Pub Date: 2021-09-02, DOI: 10.1088/1741-2552/ac1ed2
Lichao Xu 1, Minpeng Xu 1,2, Zhen Ma 2, Kun Wang 1, Tzyy-Ping Jung 1,2,3, Dong Ming 1,2

Objective. Recently, transfer learning (TL) and deep learning (DL) have been introduced to solve intra- and inter-subject variability problems in brain-computer interfaces (BCIs). However, current TL and DL algorithms are usually validated within a single dataset, assuming that the data of the test subjects are acquired under the same conditions as those of the training (source) subjects. This assumption is generally violated in practice because acquisition systems and experimental settings differ across studies and datasets. Thus, the generalization ability of these algorithms needs further validation in a cross-dataset scenario, which is closer to the actual situation. This study compared the transfer performance of pre-trained deep-learning models with different preprocessing strategies in a cross-dataset scenario. Approach. This study used four publicly available motor imagery datasets; each was selected in turn as the source dataset, and the others were used as target datasets. EEGNet and ShallowConvNet were trained on the source dataset with four preprocessing strategies, namely channel normalization, trial normalization, Euclidean alignment, and Riemannian alignment. The transfer performance of the pre-trained models was validated on the target datasets. This study also used adaptive batch normalization (AdaBN) to reduce internal covariate shift across datasets. The transfer performance of the four preprocessing strategies was compared with that of a baseline approach based on manifold embedded knowledge transfer (MEKT), and the possibility and performance of fusing MEKT and EEGNet were also explored. Main results. The results show that DL models trained with alignment strategies had significantly better transfer performance than those trained with the other two preprocessing strategies. As an unsupervised domain adaptation method, AdaBN also significantly improved the transfer performance of DL models. DL models that combined AdaBN and alignment strategies significantly outperformed MEKT. Moreover, the generalizability of EEGNet models that combined AdaBN and alignment strategies could be further improved via the domain adaptation step of MEKT, achieving the best generalization ability across multiple datasets (BNCI2014001: 0.788, PhysionetMI: 0.679, Weibo2014: 0.753, Cho2017: 0.650). Significance. The combination of alignment strategies and AdaBN can easily improve the generalizability of DL models without fine-tuning. This study may provide new insights into the design of transfer neural networks for BCIs by separating source and target batch normalization layers in the domain adaptation process.
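As a concrete illustration of the alignment strategies named in the abstract, the following is a minimal NumPy/SciPy sketch of Euclidean alignment of EEG trials, assuming trials are shaped (n_trials, n_channels, n_samples); the function and variable names are illustrative and are not taken from the authors' code. Riemannian alignment follows the same recipe but uses the Riemannian (geometric) mean of the trial covariance matrices as the reference instead of the arithmetic mean.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def euclidean_alignment(trials: np.ndarray) -> np.ndarray:
    # trials: (n_trials, n_channels, n_samples) EEG of one subject/session.
    # Reference matrix: arithmetic mean of the per-trial spatial covariances.
    ref = np.mean([x @ x.T / x.shape[1] for x in trials], axis=0)
    # Whiten every trial with the inverse square root of the reference,
    # so the aligned trials have (approximately) identity mean covariance.
    ref_inv_sqrt = fractional_matrix_power(ref, -0.5)
    return np.stack([ref_inv_sqrt @ x for x in trials])
```

Applying this alignment independently to each subject (or dataset) maps all data toward a common reference, which is what makes the pre-trained source model reusable on target data.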

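AdaBN, as described above, re-estimates the running mean and variance of every batch normalization layer from unlabeled target-domain data while keeping all learned weights, including the BN affine parameters, fixed. Below is a minimal PyTorch sketch under that assumption; the model, data loader, and function names are hypothetical and do not come from the paper.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def adapt_batchnorm(model: nn.Module, target_loader, device: str = "cpu") -> nn.Module:
    # Re-estimate BatchNorm running statistics from unlabeled target trials;
    # all learned parameters (weights and BN affine terms) stay frozen.
    model.to(device)
    model.eval()
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d)):
            m.reset_running_stats()   # forget source-domain statistics
            m.momentum = None         # use a cumulative moving average
            m.train()                 # only BN layers update their stats
    for batch in target_loader:
        x = batch[0] if isinstance(batch, (tuple, list)) else batch
        model(x.to(device))           # forward passes only; labels not needed
    model.eval()
    return model
```

Because only buffer statistics change, this adaptation requires no fine-tuning and no target labels, matching the unsupervised setting described in the abstract.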



Updated: 2021-09-02