iTP-LfD: Improved task parametrised learning from demonstration for adaptive path generation of cobot
Robotics and Computer-Integrated Manufacturing ( IF 10.4 ) Pub Date : 2020-12-17 , DOI: 10.1016/j.rcim.2020.102109
Shirine El Zaatari , Yuqi Wang , Weidong Li , Yiqun Peng

The Task-Parameterised Learning from Demonstration (TP-LfD) approach aims to automatically adapt the movements of collaborative robots (cobots) to new settings using knowledge learnt from demonstrated paths. The approach is suitable for encoding complex relations between a cobot and its surroundings, i.e., task-relevant objects. However, further effort is still required to enhance the intelligence and adaptability of TP-LfD for dynamic tasks. With this aim, this paper presents an improved TP-LfD (iTP-LfD) approach to program cobots adaptively for a variety of industrial tasks. iTP-LfD comprises three main improvements over previously developed TP-LfD approaches: 1) detecting generic visual features to serve as frames of reference (frames) in demonstrations, enabling path reproduction in new settings without complex computer vision algorithms; 2) minimising redundant frames that belong to the same object in demonstrations using a statistical algorithm; and 3) designing a reinforcement learning algorithm to eliminate irrelevant frames. The distinguishing characteristic of the iTP-LfD approach is that optimal frames are identified from demonstrations, simplifying computational complexity, overcoming occlusions in new settings, and boosting overall performance. Case studies covering a variety of industrial tasks, objects and scenarios highlight the adaptability and robustness of the iTP-LfD approach.
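To give a flavour of what "task-parameterised" means here, the following is a minimal illustrative sketch (not the paper's algorithm, and deliberately omitting the Gaussian mixture machinery of full TP-LfD): a demonstrated path is expressed relative to several frames of reference, and when those frames move in a new setting, each frame's local copy of the path is mapped back to world coordinates and the per-frame reproductions are averaged. All function names below are hypothetical.

```python
import numpy as np

def to_frame(path, A, b):
    """Express world-frame points (rows of `path`) in a local frame.

    A: orthonormal rotation whose columns are the frame axes; b: frame origin.
    Row-vector form of A.T @ (p - b).
    """
    return (path - b) @ A

def reproduce(local_paths, new_frames):
    """Map each frame-local path back to world coordinates using the
    frame poses observed in the new setting, then average the candidates.
    (Full TP-LfD instead fuses per-frame Gaussians; a plain mean is the
    simplest stand-in for that fusion step.)"""
    worlds = [loc @ A.T + b for loc, (A, b) in zip(local_paths, new_frames)]
    return np.mean(worlds, axis=0)

# Demonstration: a short 2-D path recorded relative to one frame.
demo = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0]])
frame_demo = (np.eye(2), np.zeros(2))          # frame pose during the demo
local = to_frame(demo, *frame_demo)

# New setting: the same frame has rotated 90 degrees and translated.
R = np.array([[0.0, -1.0], [1.0, 0.0]])
frame_new = (R, np.array([2.0, 0.0]))
adapted = reproduce([local], [frame_new])      # path follows the moved frame
```

With several frames attached to different task-relevant objects, the averaging step is where the choice of frames matters, which is why iTP-LfD's pruning of redundant and irrelevant frames pays off.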




Updated: 2020-12-17