Research on Upper Limb Action Intention Recognition Method Based on Fusion of Posture Information and Visual Information
Electronics (IF 2.9) Pub Date: 2022-09-27, DOI: 10.3390/electronics11193078
Jian-Wei Cui, Han Du, Bing-Yan Yan, Xuan-Jie Wang

A prosthetic hand is one of the main ways to help patients with upper limb disabilities regain their daily living abilities. Prosthetic hand manipulation must be coordinated with the user’s action intention, so the key to controlling a prosthetic hand is recognizing the action intention of the upper limb. At present, recognizing action intention from EMG and EEG signals still suffers from difficult signal decoding and low recognition rates. Inertial sensors, by contrast, are low-cost and accurate, and the posture information they provide characterizes the upper limb motion state; visual information is information-rich and can identify the type of target object, so the two modalities can be fused complementarily to better capture the user’s motion requirements. This paper therefore proposes an upper limb action intention recognition method based on the fusion of posture information and visual information. Inertial sensors collect attitude angle data during upper limb movement, and, exploiting the structural similarity between the human upper limb and a linkage mechanism, a model of the upper limb is established using the forward kinematics theory of a robotic arm to solve for the upper limb end position. The end positions are classified into three categories: in front of the torso, near the upper body, and the initial position, and a multilayer perceptron model is trained to learn this classification. In addition, a miniature camera mounted on the hand captures visual image information during upper limb movement. Target objects are detected with the YOLOv5 deep learning method and classified into two categories: wearable and non-wearable items. Finally, the upper limb intention is jointly decided by the upper limb motion state, the target object type, and the upper limb end position to control the prosthetic hand. We applied this intention recognition method to an experimental mechanical prosthetic hand system and invited several volunteers to test it. The intention recognition success rate reached 92.4%, which verifies the feasibility and practicality of the proposed upper limb action intention recognition method based on the fusion of posture information and visual information.
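To make the described pipeline concrete, below is a minimal, hypothetical Python sketch (not the authors' implementation): it treats the upper limb as a two-link chain driven by attitude angles from the inertial sensors, computes the end position by forward kinematics, stands in for the trained multilayer perceptron with a simple region rule, and fuses the position class, the detected object type (wearable vs. non-wearable), and the motion state into a hand command. Link lengths, class names, thresholds, and the decision table are all illustrative assumptions.

# Hypothetical sketch (not the authors' code): estimate the upper limb end
# position from attitude angles by forward kinematics, then fuse the position
# region, the detected object type and the motion state into a hand command.
# Link lengths, class names and the decision table are illustrative assumptions.
import numpy as np

UPPER_ARM_LEN = 0.30  # assumed upper-arm length (m)
FOREARM_LEN = 0.25    # assumed forearm length (m)

def rotation_from_euler(roll, pitch, yaw):
    """Z-Y-X Euler angles (rad) -> 3x3 rotation matrix."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return rz @ ry @ rx

def end_position(shoulder_rpy, elbow_rpy):
    """Treat the upper limb as a two-link chain (shoulder -> elbow -> wrist)
    and return the wrist position in the shoulder frame."""
    r_shoulder = rotation_from_euler(*shoulder_rpy)
    r_elbow = r_shoulder @ rotation_from_euler(*elbow_rpy)
    elbow_pos = r_shoulder @ np.array([UPPER_ARM_LEN, 0.0, 0.0])
    return elbow_pos + r_elbow @ np.array([FOREARM_LEN, 0.0, 0.0])

def classify_position(wrist_pos):
    """Stand-in for the trained multilayer perceptron: map the end position
    to one of the three regions used in the paper (thresholds are made up)."""
    if np.linalg.norm(wrist_pos) < 0.15:
        return "initial"
    return "torso_front" if wrist_pos[0] > 0.35 else "upper_body"

def decide_intention(position_class, object_class, is_moving):
    """Toy decision rule fusing motion state, object type and end position."""
    if not is_moving or position_class == "initial":
        return "hold"
    if object_class == "wearable" and position_class == "upper_body":
        return "open_for_donning"
    if object_class == "non_wearable" and position_class == "torso_front":
        return "grasp"
    return "hold"

if __name__ == "__main__":
    wrist = end_position(shoulder_rpy=(0.0, -0.4, 0.1), elbow_rpy=(0.0, -0.5, 0.0))
    print(decide_intention(classify_position(wrist), "non_wearable", is_moving=True))

In the paper, the object class would come from the YOLOv5 detector running on the hand-mounted camera and the position class from the trained multilayer perceptron; the rule-based stand-ins above only illustrate how the three cues combine into a single prosthetic-hand command.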

Updated: 2022-09-27