Robotic Grasp Manipulation Using Evolutionary Computing and Deep Reinforcement Learning
arXiv - CS - Robotics Pub Date : 2020-01-15 , DOI: arxiv-2001.05443
Priya Shukla, Hitesh Kumar and G. C. Nandi

Intelligent object manipulation for grasping is a challenging problem for robots. Unlike robots, humans know almost immediately how to manipulate objects for grasping, thanks to years of learning. A grown woman can grasp objects more skilfully than a child because of skills developed over the years; the absence of such learning in present-day robotic grasping keeps its performance well below human grasping benchmarks. In this paper we take up the challenge of developing learning-based grasp pose estimation by decomposing the problem into position and orientation learning. More specifically, for grasp position estimation we explore three different methods: a Genetic Algorithm (GA) based optimization method that minimizes the error between calculated image points and the predicted end-effector (EE) position; a regression-based method (RM) in which collected robot EE positions and image points are regressed with a linear model; and a PseudoInverse (PI) model formulated as a mapping matrix between robot EE positions and image points over several observations. For grasp orientation learning, we develop a deep reinforcement learning (DRL) model, which we name Grasp Deep Q-Network (GDQN), and benchmark our results against a Modified VGG16 (MVGG16). Rigorous experiments show that, owing to its inherent capability of producing very high-quality solutions to optimization and search problems, the GA-based predictor performs much better than the other two models for position estimation. For orientation learning, the results indicate that off-policy learning through GDQN outperforms MVGG16, since the GDQN architecture is specifically designed for reinforcement learning. Based on our proposed architectures and algorithms, the robot is capable of grasping all regularly shaped rigid objects.
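The regression (RM) and pseudoinverse (PI) formulations described above both amount to fitting a linear mapping from image points to robot EE positions from paired observations. The sketch below illustrates that general idea only; the homogeneous pixel-to-EE mapping, the synthetic calibration data, and all variable names are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical calibration data: N paired observations of image points (u, v)
# and the corresponding robot end-effector (EE) positions (x, y, z).
rng = np.random.default_rng(0)
N = 50
image_points = rng.uniform(0, 640, size=(N, 2))            # pixel coordinates
true_map = np.array([[0.001, 0.0,   0.2],                   # unknown ground-truth mapping,
                     [0.0,   0.001, -0.1],                  # used here only to generate data
                     [0.0005, 0.0005, 0.5]])
img_h = np.hstack([image_points, np.ones((N, 1))])          # homogeneous image points, (N, 3)
ee_positions = img_h @ true_map.T + rng.normal(0, 1e-3, size=(N, 3))

# Pseudoinverse estimate of a mapping matrix M such that EE ≈ M @ [u, v, 1]^T.
# np.linalg.pinv returns the least-squares solution, which is also what a
# linear regression of EE positions on image points would produce.
M = ee_positions.T @ np.linalg.pinv(img_h.T)                 # shape (3, 3)

# Predict the EE grasp position for a new image point.
u, v = 320.0, 240.0
predicted_ee = M @ np.array([u, v, 1.0])
print("Predicted EE position:", predicted_ee)
```

A GA-based predictor, as favoured in the abstract, would instead search over candidate mapping parameters with a fitness function built from the same image-point/EE-position error.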

Updated: 2020-01-16