Task-Learning Strategy for Robotic Assembly Tasks from Human Demonstrations
Sensors (IF 3.4), Pub Date: 2020-09-25, DOI: 10.3390/s20195505
Guanwen Ding, Yubin Liu, Xizhe Zang, Xuehe Zhang, Gangfeng Liu, Jie Zhao

In manufacturing, traditional task pre-programming methods limit the efficiency of human–robot skill transfer. This paper proposes a novel task-learning strategy, enabling robots to learn skills from human demonstrations flexibly and to generalize skills to new task situations. Specifically, we establish a markerless vision capture system to acquire continuous human hand movements and develop a threshold-based heuristic segmentation algorithm to segment the complete movements into different movement primitives (MPs), which encode human hand movements with task-oriented models. For movement primitive learning, we adopt a Gaussian mixture model and Gaussian mixture regression (GMM-GMR) to extract the optimal trajectory encapsulating sufficient human features, and we utilize dynamical movement primitives (DMPs) for trajectory learning and generalization. In addition, we propose an improved visuo-spatial skill learning (VSL) algorithm to learn goal configurations describing the spatial relationships between task-relevant objects. Only one multioperation demonstration is required for learning, and robots can generalize goal configurations to new task situations following the task execution order from the demonstration. A series of peg-in-hole experiments demonstrates that the proposed task-learning strategy can obtain exact pick-and-place points and generate smooth, human-like trajectories, verifying the effectiveness of the proposed strategy.
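To make the trajectory-generalization step concrete, the following is a minimal 1-D DMP sketch in Python following the standard Ijspeert-style formulation (a spring-damper transformation system, an exponentially decaying canonical system, and a weighted Gaussian-basis forcing term). It learns forcing-term weights from one demonstrated trajectory and rolls the motion out toward a new goal while preserving the demonstrated shape. All names and gain values here (DMP1D, alpha_z, beta_z, n_basis, and so on) are illustrative assumptions, not the authors' implementation.

import numpy as np

# Minimal 1-D dynamical movement primitive (DMP) sketch; illustrative only.
# Transformation system: tau*z' = alpha_z*(beta_z*(g - y) - z) + f(x), tau*y' = z
# Canonical system:      tau*x' = -alpha_x*x
class DMP1D:
    def __init__(self, n_basis=30, alpha_z=25.0, beta_z=6.25, alpha_x=1.0):
        self.n_basis = n_basis
        self.alpha_z = alpha_z   # spring gain of the transformation system
        self.beta_z = beta_z     # damping gain (critically damped at alpha_z / 4)
        self.alpha_x = alpha_x   # decay rate of the canonical system
        # Basis centers spaced evenly in canonical time; widths from spacing
        self.c = np.exp(-alpha_x * np.linspace(0, 1, n_basis))
        self.h = 1.0 / np.gradient(self.c) ** 2
        self.w = np.zeros(n_basis)

    def _psi(self, x):
        return np.exp(-self.h * (x - self.c) ** 2)

    def fit(self, y_demo, dt):
        """Learn forcing-term weights from one demonstrated trajectory."""
        T = len(y_demo)
        self.tau = T * dt
        self.y0, self.g = y_demo[0], y_demo[-1]
        yd = np.gradient(y_demo, dt)
        ydd = np.gradient(yd, dt)
        x = np.exp(-self.alpha_x * np.arange(T) * dt / self.tau)
        # Forcing term that would reproduce the demonstration exactly
        f_target = self.tau ** 2 * ydd - self.alpha_z * (
            self.beta_z * (self.g - y_demo) - self.tau * yd)
        # Per-basis weighted linear regression
        s = x * (self.g - self.y0)
        for i in range(self.n_basis):
            psi = np.exp(-self.h[i] * (x - self.c[i]) ** 2)
            self.w[i] = np.sum(s * psi * f_target) / (np.sum(s ** 2 * psi) + 1e-10)

    def rollout(self, g_new, dt, T=None):
        """Generate a trajectory toward a new goal, keeping the demo's shape."""
        T = T or int(self.tau / dt)
        y, z, x = self.y0, 0.0, 1.0
        traj = np.empty(T)
        for t in range(T):
            psi = self._psi(x)
            f = np.dot(psi, self.w) * x * (g_new - self.y0) / (psi.sum() + 1e-10)
            zd = (self.alpha_z * (self.beta_z * (g_new - y) - z) + f) / self.tau
            z += zd * dt
            y += (z / self.tau) * dt
            x += (-self.alpha_x * x / self.tau) * dt
            traj[t] = y
        return traj

# Example: learn from a demonstrated path, then generalize to a new goal
t = np.linspace(0, 1, 200)
demo = 0.5 * np.sin(np.pi * t) + t   # any smooth demonstrated 1-D path
dmp = DMP1D()
dmp.fit(demo, dt=1.0 / 200)
new_traj = dmp.rollout(g_new=2.0, dt=1.0 / 200)

Rolling out with a changed goal reproduces the demonstrated velocity profile rescaled toward g_new, which is the generalization property the abstract attributes to DMPs.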
