Adaptive sensor fusion labeling framework for hand pose recognition in robot teleoperation
Robotic Intelligence and Automation (IF 2.1), Pub Date: 2021-02-15, DOI: 10.1108/aa-11-2020-0178
Wen Qi, Xiaorui Liu, Longbin Zhang, Lunan Wu, Wenchuan Zang, Hang Su

Purpose

The purpose of this paper is to address touchless interaction between humans and robots in real-world settings. The main challenges are accurate hand pose identification and stable operation in a non-stationary environment, especially under multi-sensor conditions. To guarantee a high recognition rate and low computational time for the human-machine interaction system, an adaptive sensor fusion labeling framework should be considered for surgical robot teleoperation.

Design/methodology/approach

In this paper, a hand pose estimation model is proposed that combines automatic labeling with classification based on a deep convolutional neural network (DCNN) structure. Subsequently, an adaptive sensor fusion methodology is proposed for hand pose estimation with two Leap Motion sensors. The sensor fusion system processes depth data and electromyography (EMG) signals, captured from the Leap Motion sensors and the Myo Armband, respectively. The developed adaptive methodology performs stable and continuous hand position estimation even when a single sensor is unable to detect the hand.
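The paper itself specifies the adaptive fusion algorithm; as an illustrative sketch only, the core idea of continuing estimation when one sensor loses the hand can be expressed as a confidence-weighted fusion with single-sensor fallback. The function name, signature, and weighting scheme below are assumptions for illustration, not the authors' method:

```python
import numpy as np

def fuse_hand_position(pos_a, conf_a, pos_b, conf_b):
    """Fuse two 3-D hand-position estimates from separate sensors.

    pos_a, pos_b: np.ndarray of shape (3,), or None when that sensor
    has lost tracking of the hand.
    conf_a, conf_b: per-sensor confidence values in [0, 1].
    Returns the fused position, or None if neither sensor sees the hand.
    """
    if pos_a is None and pos_b is None:
        return None                       # no sensor detects the hand
    if pos_a is None:
        return pos_b                      # fall back to the other sensor
    if pos_b is None:
        return pos_a
    # Both sensors report: blend by normalized confidence.
    w = conf_a / (conf_a + conf_b)
    return w * pos_a + (1.0 - w) * pos_b
```

With equal confidences the result is the midpoint of the two estimates; when either sensor drops out, the output degrades gracefully to the remaining estimate, which is the stability property the abstract describes.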

Findings

The proposed adaptive sensor fusion method is verified through various experiments with six degrees of freedom in space. The results show that the clustering model achieves the highest clustering accuracy (96.31%) among the compared methods, so its clusters can be regarded as real gestures. Moreover, the DCNN classifier achieves the best performance among the compared methods, with 88.47% accuracy and the lowest computational time.

Originality/value

This study provides theoretical and engineering guidance for hand pose recognition in surgical robot teleoperation and presents a new deep learning model for accuracy enhancement.




Updated: 2021-02-11