Face-Computer Interface (FCI): Intent Recognition Based on Facial Electromyography (fEMG) and Online Human-Computer Interface with Audiovisual Feedback
Frontiers in Neurorobotics (IF 2.6), Pub Date: 2021-06-21, DOI: 10.3389/fnbot.2021.692562
Bo Zhu 1, 2, 3 , Daohui Zhang 1, 2 , Yaqi Chu 1, 2, 3 , Xingang Zhao 1, 2 , Lixin Zhang 4 , Lina Zhao 4

Patients who have lost limb control, such as those with upper-limb amputation or high paraplegia, are usually unable to care for themselves. Establishing a natural, stable, and comfortable human-computer interface (HCI) for controlling rehabilitation robots and other controllable equipment would address many of their difficulties. In this study, a complete limbs-free face-computer interface (FCI) framework based on facial electromyography (fEMG), covering both offline analysis and online control of mechanical equipment, was proposed. Six facial movements involving the eyebrows, eyes, and mouth were used in this FCI. In the offline stage, 12 models, 8 types of features, and 3 different feature-combination methods for model input were studied and compared in detail. In the online stage, four well-designed sessions were introduced in which subjects controlled a robotic arm to complete a drinking-water task in three ways (by touch screen, and by fEMG with and without audio feedback), to verify and compare the performance of the proposed FCI framework. Three features and one model, with an average offline recognition accuracy of 95.3%, a maximum of 98.8%, and a minimum of 91.4%, were selected for the online scenarios. The condition with audio feedback performed better than the one without. All subjects completed the drinking task within a few minutes using the FCI. The average and smallest time differences between the touch-screen and fEMG-with-audio-feedback conditions were only 1.24 min and 0.37 min, respectively.
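The abstract does not list the 8 feature types the authors compared, but fEMG intent-recognition pipelines of this kind typically window the raw signal and compute classic time-domain features before classification. The sketch below is a minimal, hypothetical illustration of that stage only, using four standard surface-EMG features (MAV, RMS, waveform length, zero crossings); the names, window length, and threshold are assumptions, not the paper's actual configuration.

```python
import numpy as np

def extract_features(window, zc_threshold=0.01):
    """Compute four classic time-domain EMG features for one channel window.

    MAV (mean absolute value), RMS, WL (waveform length), and ZC
    (thresholded zero-crossing count) are common choices in EMG-based
    intent recognition; the paper's exact feature set is not specified here.
    """
    mav = np.mean(np.abs(window))                 # average rectified amplitude
    rms = np.sqrt(np.mean(window ** 2))           # signal power estimate
    wl = np.sum(np.abs(np.diff(window)))          # cumulative waveform length
    # Count sign changes whose amplitude jump exceeds a small noise threshold.
    zc = np.sum((window[:-1] * window[1:] < 0) &
                (np.abs(window[:-1] - window[1:]) > zc_threshold))
    return np.array([mav, rms, wl, zc])

# Example: one synthetic 200-sample window standing in for a facial-EMG channel.
rng = np.random.default_rng(0)
window = rng.normal(0.0, 0.1, 200)
feats = extract_features(window)
print(feats.shape)  # one 4-dimensional feature vector per channel window
```

Per-channel vectors like this would then be concatenated across facial-EMG channels and fed to a classifier, which corresponds to the "feature combination for model input" step the offline stage compares.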

Updated: 2021-06-21