Deep learning multimodal fNIRS and EEG signals for bimanual grip force decoding
Journal of Neural Engineering ( IF 3.7 ) Pub Date : 2021-09-03 , DOI: 10.1088/1741-2552/ac1ab3
Pablo Ortega 1, 2 , A Aldo Faisal 1, 2, 3, 4
Objective. Non-invasive brain–machine interfaces (BMIs) offer an alternative, safe and accessible way to interact with the environment. To enable meaningful and stable physical interactions, BMIs need to decode forces. Although force decoding has previously been addressed in the unimanual case, controlling forces from both hands would enable BMI users to perform a greater range of interactions. Here we investigate the decoding of hand-specific forces. Approach. We maximise cortical information by combining electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), and develop a deep-learning architecture with attention and residual layers (cnnatt) to improve their fusion. Our task required participants to generate hand-specific force profiles, on which we trained and tested our deep-learning and linear decoders. Main results. The use of both EEG and fNIRS improved the decoding of bimanual force, and the deep-learning models outperformed the linear model. In both cases, the greatest gain in performance came from the detection of force generation. In particular, force detection was hand-specific and better for the dominant right hand, and cnnatt was better at fusing EEG and fNIRS. The study of cnnatt further revealed that forces from each hand were encoded differently at the cortical level. Cnnatt also revealed traces of cortical activity being modulated by the level of force, which had not previously been found using linear models. Significance. Our results can be applied to avoid hand cross-talk during hand force decoding and so improve the robustness of BMI robotic devices. In particular, we improve the fusion of EEG and fNIRS signals and offer hand-specific interpretability of the encoded forces, which is valuable during motor rehabilitation assessment.
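The abstract describes fusing EEG and fNIRS features with attention and residual layers. The paper does not publish the cnnatt internals here, so the following is only a minimal illustrative sketch of the general idea — attention weights decide how much each modality's feature vector contributes to the fused representation, with a residual connection added on top. All names (`attention_fusion`, `w_q`, `w_k`) and the feature dimension are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fusion(eeg_feat, fnirs_feat, w_q, w_k):
    """Toy attention-style fusion of two modality feature vectors.

    Scores each modality embedding against the others (query/key
    projections), turns the scores into convex weights, and adds a
    residual connection — a sketch, not the paper's cnnatt.
    """
    feats = np.stack([eeg_feat, fnirs_feat])       # (2, d)
    scores = (feats @ w_q) @ (feats @ w_k).T       # (2, 2) pairwise scores
    weights = softmax(scores.mean(axis=0))         # (2,) one weight per modality
    fused = weights @ feats                        # (d,) weighted combination
    return fused + feats.mean(axis=0)              # residual connection

# Hypothetical per-window feature vectors for each modality.
d = 8
eeg = rng.standard_normal(d)
fnirs = rng.standard_normal(d)
w_q = rng.standard_normal((d, d)) / np.sqrt(d)
w_k = rng.standard_normal((d, d)) / np.sqrt(d)

fused = attention_fusion(eeg, fnirs, w_q, w_k)
print(fused.shape)  # (8,)
```

In the actual model the inputs would be learned convolutional features from EEG and fNIRS windows rather than random vectors, and the fused representation would feed a classifier over hand-specific force levels; this sketch only shows why attention can let the network weight one modality over the other per sample.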




Updated: 2021-09-03