Efficient Learning of Goal-Oriented Push-Grasping Synergy in Clutter
IEEE Robotics and Automation Letters ( IF 5.2 ) Pub Date : 2021-06-25 , DOI: 10.1109/lra.2021.3092640
Kechun Xu , Hongxiang Yu , Qianen Lai , Yue Wang , Rong Xiong

We focus on the task of goal-oriented grasping, in which a robot must grasp a pre-assigned goal object in clutter and may need pre-grasp actions, such as pushes, to enable a stable grasp. In this task, however, the robot receives a positive reward from the environment only when it successfully grasps the goal object. Moreover, combining pushing and grasping lengthens the action sequence, compounding the problem of delayed rewards. Sample inefficiency therefore remains a main challenge in this task. In this letter, a goal-conditioned hierarchical reinforcement learning formulation with high sample efficiency is proposed to learn a push-grasping policy for grasping a specific object in clutter. In our work, sample efficiency is improved in two ways. First, we use a goal-conditioned mechanism with goal relabeling to enrich the replay buffer. Second, the pushing and grasping policies are regarded as a generator and a discriminator, respectively, and the pushing policy is trained under the supervision of the grasping discriminator, thus densifying the pushing rewards. To deal with the distribution mismatch caused by the different training settings of the two policies, an alternating training stage is added to learn pushing and grasping in turn. A series of experiments carried out in simulation and the real world indicates that our method quickly learns effective pushing and grasping policies and outperforms existing methods in task completion rate and goal grasp success rate while using fewer motions. Furthermore, we validate that our system also adapts to goal-agnostic conditions with better performance. Note that our system can be transferred to the real world without any fine-tuning. Our code is available at https://github.com/xukechun/Efficient_goal-oriented_push-grasping_synergy
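The first sample-efficiency mechanism the abstract describes, enriching the replay buffer via goal relabeling, can be illustrated with a minimal hindsight-style sketch. This is not the authors' implementation; the `Transition` fields and `RelabelingBuffer` class are hypothetical simplifications, assuming that a transition records both the pre-assigned goal and the object actually affected, and that a failed grasp of a non-goal object can be stored a second time as a success with respect to the goal it did achieve:

```python
import random
from dataclasses import dataclass, replace
from typing import List, Tuple

@dataclass(frozen=True)
class Transition:
    state: Tuple[float, float]          # simplified state (e.g. object position)
    action: int                         # discrete push/grasp action index
    reward: float                       # reward w.r.t. the desired goal
    achieved_goal: Tuple[float, float]  # object the action actually affected
    desired_goal: Tuple[float, float]   # pre-assigned goal object

class RelabelingBuffer:
    """Replay buffer that additionally stores each failed transition
    relabeled with the goal it actually achieved, so that attempts on
    non-goal objects still contribute positive learning signal."""

    def __init__(self) -> None:
        self.storage: List[Transition] = []

    def add(self, t: Transition) -> None:
        self.storage.append(t)
        # Hindsight relabeling: pretend the achieved goal was the desired one,
        # turning a failure (w.r.t. the original goal) into a success.
        if t.achieved_goal != t.desired_goal:
            self.storage.append(
                replace(t, reward=1.0, desired_goal=t.achieved_goal)
            )

    def sample(self, k: int) -> List[Transition]:
        return random.sample(self.storage, min(k, len(self.storage)))
```

Under this assumption, one failed grasp yields two training samples, one negative for the original goal and one positive for the relabeled goal, which is one plausible way the replay buffer is "enriched" as the abstract states.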

Updated: 2021-07-23