Multimodal signal dataset for 11 intuitive movement tasks from single upper extremity during multiple recording sessions
GigaScience (IF 11.8), DOI: 10.1093/gigascience/giaa098
Ji-Hoon Jeong¹, Jeong-Hyun Cho¹, Kyung-Hwan Shim¹, Byoung-Hee Kwon¹, Byeong-Hoo Lee¹, Do-Yeun Lee¹, Dae-Hyeok Lee¹, Seong-Whan Lee¹,²
Abstract
Background
Non-invasive brain–computer interfaces (BCIs) have been developed to realize natural bi-directional interaction between users and external robotic systems. However, communication between users and BCI systems via artificially matched commands remains a critical issue. Recently, BCIs have been developed to adopt intuitive decoding, which is key to solving several problems, such as the small number of available classes and the manual matching of BCI commands to device controls. Unfortunately, advances in this area have been slow owing to the lack of large, uniform datasets. This study provides a large intuitive dataset for 11 different upper extremity movement tasks obtained during multiple recording sessions. The dataset includes 60-channel electroencephalography, 7-channel electromyography, and 4-channel electro-oculography from 25 healthy participants, collected over three recording sessions on separate days, for a total of 82,500 trials across all participants.
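The reported totals imply a fixed per-condition trial count, which can be checked with simple arithmetic. A minimal sketch (the per-task, per-session count of 100 is inferred from the abstract's totals, not stated explicitly):

```python
# Sanity-check the reported trial total against the stated design:
# 25 participants x 3 sessions x 11 tasks, 82,500 trials in total.
participants, sessions, tasks = 25, 3, 11
total_trials = 82_500

# Implied trials per task per session (assumes a balanced design).
trials_per_task_per_session = total_trials // (participants * sessions * tasks)
print(trials_per_task_per_session)  # 100

# The division is exact, so the balanced-design assumption is consistent.
assert participants * sessions * tasks * trials_per_task_per_session == total_trials
```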
Findings
We validated the dataset via neurophysiological analysis, observing clear sensorimotor de-/activation and spatial distributions related to real movement and motor imagery, respectively. Furthermore, we demonstrated the consistency of the dataset by evaluating the classification performance of each session using a baseline machine learning method.
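The abstract does not name the baseline method, but the standard baseline for sensorimotor EEG classification is common spatial patterns (CSP) with a simple linear classifier. A hedged sketch on synthetic, EEG-shaped data (the channel counts, variance model, and nearest-mean classifier here are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(42)

def make_trials(n_trials, n_ch=10, n_samp=200, boost_ch=0):
    """Synthetic stand-in for band-passed EEG epochs: white noise with
    extra variance on one channel to mimic class-specific ERD/ERS."""
    X = rng.standard_normal((n_trials, n_ch, n_samp))
    X[:, boost_ch, :] *= 3.0
    return X

def mean_cov(X):
    # Trace-normalized average spatial covariance over trials.
    return np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)

def csp(X1, X2, n_pairs=2):
    """Common spatial patterns via whitening + eigendecomposition."""
    C1, C2 = mean_cov(X1), mean_cov(X2)
    d, U = np.linalg.eigh(C1 + C2)
    P = (U * d ** -0.5) @ U.T            # symmetric whitening of C1 + C2
    _, B = np.linalg.eigh(P @ C1 @ P)    # eigenvalues sorted ascending
    W = B.T @ P                          # each row is a spatial filter
    # Keep the filters most discriminative for either class.
    return np.vstack([W[:n_pairs], W[-n_pairs:]])

def features(W, X):
    # Normalized log-variance of spatially filtered trials.
    Z = np.einsum('fc,tcs->tfs', W, X)
    v = Z.var(axis=2)
    return np.log(v / v.sum(axis=1, keepdims=True))

# Two classes with variance boosted on different channels.
X1_tr, X1_te = make_trials(60, boost_ch=0), make_trials(40, boost_ch=0)
X2_tr, X2_te = make_trials(60, boost_ch=1), make_trials(40, boost_ch=1)

W = csp(X1_tr, X2_tr)
m1, m2 = features(W, X1_tr).mean(0), features(W, X2_tr).mean(0)

def predict(X):
    # Nearest class mean in CSP feature space (a stand-in for LDA).
    F = features(W, X)
    return np.where(np.linalg.norm(F - m1, axis=1) <
                    np.linalg.norm(F - m2, axis=1), 1, 2)

acc = (np.mean(predict(X1_te) == 1) + np.mean(predict(X2_te) == 2)) / 2
print(f"held-out accuracy: {acc:.2f}")
```

Evaluating such a pipeline per session, as the Findings describe, makes it easy to compare decoding performance across recording days.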
Conclusions
The dataset includes data from multiple recording sessions, various classes within a single upper extremity, and multimodal signals. This work can be used to (i) compare the brain activities associated with real movement and imagination, (ii) improve decoding performance, and (iii) analyze differences among recording sessions. Hence, this study, as a Data Note, focuses on collecting the data required for further advances in BCI technology.

Updated: 2020-10-08