Hierarchical Task-Parameterized Learning from Demonstration for Collaborative Object Movement
Applied Bionics and Biomechanics (IF 2.2), Pub Date: 2019-12-02, DOI: 10.1155/2019/9765383
Siyao Hu, Katherine J. Kuchenbecker

Learning from demonstration (LfD) enables a robot to emulate natural human movement instead of merely executing preprogrammed behaviors. This article presents a hierarchical LfD structure of task-parameterized models for object movement tasks, which are ubiquitous in everyday life and could benefit from robotic support. Our approach uses the task-parameterized Gaussian mixture model (TP-GMM) algorithm to encode sets of demonstrations in separate models that each correspond to a different task situation. The robot then maximizes its expected performance in a new situation by either selecting a good existing model or requesting new demonstrations. Compared to a standard implementation that encodes all demonstrations together for all test situations, the proposed approach offers four advantages. First, a simply defined distance function can be used to estimate test performance by calculating the similarity between a test situation and the existing models. Second, the proposed approach can improve generalization, e.g., better satisfying the demonstrated task constraints and speeding up task execution. Third, because the hierarchical structure encodes each demonstrated situation individually, a wider range of task situations can be modeled in the same framework without deteriorating performance. Last, adding or removing demonstrations incurs low computational load, and thus, the robot’s skill library can be built incrementally. We first instantiate the proposed approach in a simulated task to validate these advantages. We then show that the advantages transfer to real hardware for a task where naive participants collaborated with a Willow Garage PR2 robot to move a handheld object. For most tested scenarios, our hierarchical method achieved significantly better task performance and subjective ratings than both a passive model with only gravity compensation and a single TP-GMM encoding all demonstrations.
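The select-or-request step described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions: the frame-origin distance, the threshold value, and the names SituationModel, situation_distance, and select_or_request are all hypothetical stand-ins, and the fitted TP-GMM itself is omitted; the paper's actual distance function and acceptance criterion may differ.

from dataclasses import dataclass
import numpy as np

@dataclass
class SituationModel:
    """One TP-GMM trained on demonstrations from a single task situation.
    Only the task parameters (frame origins) needed for similarity scoring
    are kept here; the fitted mixture itself is omitted in this sketch."""
    frame_origins: np.ndarray  # shape (n_frames, dim), e.g., start/goal poses

def situation_distance(test_frames: np.ndarray, model: SituationModel) -> float:
    """Illustrative distance between a test situation and a stored model:
    mean Euclidean distance between corresponding frame origins."""
    return float(np.mean(np.linalg.norm(test_frames - model.frame_origins, axis=1)))

def select_or_request(test_frames, library, threshold=0.15):
    """Return the most similar existing model if it is close enough to the
    test situation; otherwise return None to signal that new demonstrations
    should be requested."""
    if not library:
        return None
    best = min(library, key=lambda m: situation_distance(test_frames, m))
    if situation_distance(test_frames, best) <= threshold:
        return best
    return None  # caller requests new demonstrations and appends a new model

# Because each situation is encoded in its own model, adding or removing a
# demonstration set is just an append or delete on the library, so the
# robot's skill library can grow incrementally at low computational cost.
library = [SituationModel(np.array([[0.0, 0.0], [1.0, 0.5]]))]
test = np.array([[0.05, 0.0], [0.95, 0.55]])
chosen = select_or_request(test, library)
print("reuse existing model" if chosen else "request new demonstrations")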
