Towards Generalization and Data Efficient Learning of Deep Robotic Grasping
arXiv - CS - Robotics, Pub Date: 2020-07-02, DOI: arxiv-2007.00982
Zhixin Chen, Mengxiang Lin, Zhixin Jia and Shibo Jian

Deep reinforcement learning (DRL) has proven to be a powerful paradigm for learning complex control policies autonomously. Numerous recent applications of DRL to robotic grasping have successfully trained robotic agents end-to-end, mapping visual inputs directly to control instructions, but the amount of training data required may hinder these applications in practice. In this paper, we propose a DRL-based robotic visual grasping framework in which visual perception and control policy are trained separately rather than end-to-end. The visual perception module produces physical descriptions of the grasped objects, and the policy makes use of them to decide optimal actions via DRL. Benefiting from this explicit representation of objects, the policy is expected to generalize better to new objects and environments. In addition, the policy can be trained in simulation and transferred to a real robotic system without any further training. We evaluate our framework on a real-world robotic system across a number of grasping tasks, such as semantic grasping, clustered-object grasping, and moving-object grasping. The results show impressive robustness and generalization of our system.
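The key idea above is to decouple perception from control: the policy never sees pixels, only a compact physical description of the target object, which is what allows sim-to-real transfer without retraining. A minimal sketch of this interface, with hypothetical field names and dimensions chosen for illustration (the paper does not specify them):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class ObjectDescription:
    """Hypothetical physical description emitted by the perception stage."""
    label: str                       # semantic class, e.g. "mug"
    position: Tuple[float, float, float]  # (x, y, z) in the robot base frame, metres
    size: Tuple[float, float, float]      # bounding-box extents (w, h, d), metres

def perceive(image) -> List[ObjectDescription]:
    """Stand-in for the learned perception module: a real system would run
    an object detector / pose estimator here, not the DRL policy."""
    raise NotImplementedError

def policy_state(target: ObjectDescription) -> List[float]:
    """Flatten one object's description into the low-dimensional state
    vector the DRL policy consumes, instead of raw pixels. Because this
    state is identical in simulation and on the real robot, a policy
    trained on it can transfer without further training."""
    return [*target.position, *target.size]

# Example: the policy input for a mug on the table.
desc = ObjectDescription("mug", (0.42, -0.10, 0.03), (0.08, 0.10, 0.08))
state = policy_state(desc)   # 6-dimensional state, not an image
```

The design choice is that any perception backend can be swapped in as long as it emits the same description, and the DRL policy itself stays small because its input space is low-dimensional.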

Updated: 2020-07-03