MI-EEGNET: A novel Convolutional Neural Network for motor imagery classification
Journal of Neuroscience Methods (IF 2.7), Pub Date: 2020-12-15, DOI: 10.1016/j.jneumeth.2020.109037
Mouad Riyad, Mohammed Khalil, Abdellah Adib

Background

Brain-Computer Interfaces (BCIs) allow humans to interact with machines by decoding brainwaves into commands for a variety of purposes. Convolutional Neural Networks (ConvNets) have advanced the state of the art in Motor Imagery decoding with an end-to-end approach. However, shallow ConvNets usually perform better than their deep counterparts. Thus, we aim to design a novel ConvNet that is deeper than existing models, improves performance, and keeps complexity optimal.

New Method

We develop a ConvNet based on the Inception and Xception architectures that uses convolutional layers to extract temporal and spatial features. We adopt separable and depthwise convolutions to make the ConvNet faster and more efficient. We then introduce a new block, inspired by Inception, that learns richer features and improves classification performance.
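
The abstract does not include an implementation, but the building blocks it names correspond to standard deep-learning layers. The sketch below is a minimal Keras illustration of that pipeline (temporal convolution, depthwise spatial convolution, separable convolution, and an Inception-style block); the input dimensions and every kernel size, filter count, and pooling length are assumptions for illustration, not the authors' actual MI-EEGNET configuration.

```python
# Hypothetical sketch of the layers named in the abstract; all
# hyperparameters and input dimensions are assumptions.
from tensorflow.keras import layers, models


def inception_block(x, filters):
    """Inception-style block: parallel branches with different temporal
    kernel lengths, concatenated along the feature axis (illustrative)."""
    b1 = layers.SeparableConv2D(filters, (1, 7), padding='same', activation='elu')(x)
    b2 = layers.SeparableConv2D(filters, (1, 9), padding='same', activation='elu')(x)
    b3 = layers.AveragePooling2D((1, 3), strides=(1, 1), padding='same')(x)
    b3 = layers.Conv2D(filters, (1, 1), padding='same', activation='elu')(b3)
    return layers.Concatenate()([b1, b2, b3])


n_channels, n_samples, n_classes = 22, 1125, 4   # assumed EEG trial dimensions

inputs = layers.Input(shape=(n_channels, n_samples, 1))

# Temporal convolution: kernels slide along the time axis only.
x = layers.Conv2D(16, (1, 64), padding='same', use_bias=False)(inputs)
x = layers.BatchNormalization()(x)

# Depthwise spatial convolution: one spatial filter per temporal feature map,
# spanning all EEG channels at once.
x = layers.DepthwiseConv2D((n_channels, 1), depth_multiplier=2, use_bias=False)(x)
x = layers.BatchNormalization()(x)
x = layers.Activation('elu')(x)
x = layers.AveragePooling2D((1, 4))(x)

# Separable convolution (Xception-style): cheaper than a full convolution.
x = layers.SeparableConv2D(32, (1, 16), padding='same', use_bias=False)(x)
x = layers.BatchNormalization()(x)
x = layers.Activation('elu')(x)
x = layers.AveragePooling2D((1, 8))(x)

# Inception-inspired block to widen the set of learned features.
x = inception_block(x, filters=16)

x = layers.Flatten()(x)
outputs = layers.Dense(n_classes, activation='softmax')(x)

model = models.Model(inputs, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```

Depthwise and separable convolutions factor a full convolution into channel-wise and pointwise steps, which is what keeps the parameter count low even as the network gets deeper.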

Results

The obtained results are comparable with other state-of-the-art techniques. In addition, the weights of the convolutional layers give insight into the learned features and reveal the most relevant ones.
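
One hedged way to act on that observation, assuming a Keras-style model such as the sketch above, is to pull the depthwise (spatial) kernel weights and rank the EEG channels by how strongly they are weighted. The helper name and the averaging scheme below are illustrative choices, not the paper's actual analysis.

```python
# Hypothetical inspection of learned spatial filters; assumes a Keras model
# containing a DepthwiseConv2D layer, e.g. the sketch in New Method.
import numpy as np
import tensorflow as tf


def spatial_filter_importance(model):
    """Average absolute depthwise-kernel weight per EEG channel (illustrative)."""
    layer = next(l for l in model.layers
                 if isinstance(l, tf.keras.layers.DepthwiseConv2D))
    w = layer.get_weights()[0]               # (n_channels, 1, n_maps, depth_mult)
    return np.abs(w).mean(axis=(1, 2, 3))    # one score per EEG channel

# Usage, with `model` as defined in the previous sketch:
# importance = spatial_filter_importance(model)
# print('Most heavily weighted channel:', int(importance.argmax()))
```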

Comparison with Existing Method(s)

We show that our model significantly outperforms Filter Bank Common Spatial Pattern (FBCSP), Riemannian Geometry (RG) approaches, and ShallowConvNet (p < 0.05).
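
The abstract reports p < 0.05 but does not state which statistical test was used. A common choice for comparing paired per-subject accuracies is the Wilcoxon signed-rank test; the sketch below assumes that setup, and the numbers are random placeholders included only to make the snippet runnable, not results from the paper.

```python
# Hypothetical paired significance test over per-subject accuracies.
# The choice of test and the data below are assumptions, not the paper's.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
acc_mieegnet = rng.uniform(0.70, 0.85, size=9)  # placeholder accuracies
acc_shallow = rng.uniform(0.60, 0.80, size=9)   # placeholder accuracies

stat, p_value = wilcoxon(acc_mieegnet, acc_shallow, alternative='greater')
print(f'Wilcoxon signed-rank: statistic={stat:.3f}, p={p_value:.4f}')
# p < 0.05 would indicate a statistically significant improvement.
```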

Conclusions

The obtained results prove that Motor Imagery decoding is possible without handcrafted features.



Updated: 2020-12-16