Transformation classification of human squat/sit-to-stand based on multichannel information fusion
International Journal of Advanced Robotic Systems ( IF 2.1 ) Pub Date : 2022-07-10 , DOI: 10.1177/17298806221103708
Yu Wang 1, 2 , Quanjun Song 1 , Tingting Ma 1 , Yong Chen 1 , Hao Li 1 , Rongkai Liu 1, 2
In existing rehabilitation training, studies on accurately recognizing completed actions have achieved good results; however, reducing the misjudgment rate during the transitions between actions requires further research. This article proposes a multichannel information fusion method for the squat/sit-to-stand transition process, which supports online classification of movement transitions during rehabilitation training. We collected a training dataset from eight subjects performing three different motions (half squat, full squat, and sitting) while equipped with plantar pressure sensors, an RGB camera, and five inertial measurement units. Our evaluation covers the misjudgment rate for each action and the time needed for classification. The experimental results show that, compared with recognition from a single sensor, the accuracy after fusion reaches 96.6% without occlusion and 86.7% with occlusion. Compared with the complete time window, the classification time window is shortened by approximately 25%.
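The abstract does not specify the fusion scheme, but one common way to combine heterogeneous channels (plantar pressure, RGB camera, IMUs) is decision-level fusion: each channel's classifier outputs class probabilities, and a weighted average produces the fused decision, with an unreliable channel (e.g. an occluded camera) down-weighted. The sketch below is a hypothetical illustration of that idea, not the authors' actual method; all names, weights, and probability values are assumptions.

```python
import numpy as np

# Three motion classes from the paper's experiment.
CLASSES = ["half_squat", "full_squat", "sitting"]

def fuse_channels(channel_probs, weights=None):
    """Fuse per-channel class probabilities by weighted averaging.

    channel_probs: (n_channels, n_classes) softmax outputs, one row per
                   sensor channel (hypothetical values for illustration).
    weights:       optional per-channel reliability weights, e.g. reduced
                   for an occluded camera; defaults to uniform.
    Returns the fused class label and the fused probability vector.
    """
    p = np.asarray(channel_probs, dtype=float)
    w = np.ones(len(p)) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()              # normalize reliability weights
    fused = w @ p                # weighted average across channels
    return CLASSES[int(np.argmax(fused))], fused

# Example: the RGB camera (second channel) is occluded, so its vote
# is down-weighted relative to the pressure and IMU channels.
probs = [
    [0.2, 0.7, 0.1],   # plantar pressure classifier
    [0.4, 0.3, 0.3],   # RGB camera classifier (occluded, unreliable)
    [0.1, 0.8, 0.1],   # IMU classifier
]
label, fused = fuse_channels(probs, weights=[1.0, 0.2, 1.0])
print(label)  # the fused decision favors the two reliable channels
```

Down-weighting a degraded channel rather than discarding it is one plausible reason fused accuracy degrades gracefully under occlusion (86.7%) instead of collapsing to single-sensor performance.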




Updated: 2022-07-11