Grasp Pose Detection with Affordance-based Task Constraint Learning in Single-view Point Clouds
Journal of Intelligent & Robotic Systems (IF 3.3) Pub Date: 2020-05-23, DOI: 10.1007/s10846-020-01202-3
Kun Qian, Xingshuo Jing, Yanhui Duan, Bo Zhou, Fang Fang, Jing Xia, Xudong Ma

Learning to grasp novel objects is a challenging problem for service robots, especially when the robot performs goal-oriented manipulation or interaction tasks with only single-view RGB-D sensor data available. While some visual approaches focus only on grasps that satisfy force-closure criteria, we further link affordance-based task constraints to grasp poses on object parts, so that both the force-closure criterion and the task constraints are ensured. In this paper, a new single-view approach is proposed for task-constrained grasp pose detection. We propose to learn a pixel-level affordance detector based on a convolutional neural network. The affordance detector provides a fine-grained understanding of the task constraints on objects, which is formulated as a pre-segmentation stage in the grasp pose detection framework. The accuracy and robustness of grasp pose detection are improved by a novel method for calculating the local reference frame as well as a position-sensitive fully convolutional neural network for grasp stability classification. Experiments on benchmark datasets show that our method outperforms state-of-the-art methods. We have also validated our method in real-world, task-specific grasping scenes, in which a higher success rate for task-oriented grasping is achieved.
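The abstract does not detail the paper's local reference frame (LRF) computation, but a common baseline for estimating an LRF at a point in a single-view point cloud is a weighted eigen-decomposition of the neighborhood covariance (as in SHOT-style descriptors). The sketch below illustrates that baseline only; the function name, radius, and sign-disambiguation rules are illustrative assumptions, not the authors' method.

```python
import numpy as np

def local_reference_frame(points, center, radius=0.05):
    """Estimate a local reference frame at `center` from a point cloud
    `points` (N x 3). Baseline sketch: weighted covariance + eigh;
    the paper proposes its own, more robust LRF computation."""
    d = np.linalg.norm(points - center, axis=1)
    mask = d < radius
    nbrs, w = points[mask], radius - d[mask]   # closer points weigh more
    diff = nbrs - center
    cov = (w[:, None] * diff).T @ diff / w.sum()
    _, eigvec = np.linalg.eigh(cov)            # eigenvalues ascending
    z = eigvec[:, 0]                           # smallest: surface normal
    if np.dot(diff.sum(axis=0), z) > 0:        # point normal toward sensor side
        z = -z
    x = eigvec[:, 2]                           # largest: dominant tangent
    proj = diff @ x                            # majority vote fixes the sign
    if np.sum(w[proj > 0]) < np.sum(w[proj < 0]):
        x = -x
    y = np.cross(z, x)                         # right-handed frame
    return np.stack([x, y, z], axis=1)         # columns are the LRF axes
```

For a planar patch the smallest-eigenvalue axis recovers the plane normal, and the returned matrix is orthonormal by construction.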




Updated: 2020-05-23