Graph-Based Visual Manipulation Relationship Reasoning Network for Robotic Grasping.
Frontiers in Neurorobotics (IF 3.1), Pub Date: 2021-08-13, DOI: 10.3389/fnbot.2021.719731
Guoyu Zuo 1,2, Jiayuan Tong 1,2, Hongxing Liu 1,2, Wenbai Chen 3, Jianfeng Li 4

To grasp a target object stably and in the correct order in object-stacking scenes, it is important for the robot to reason about the relationships between objects and to obtain an intelligent manipulation order, enabling more advanced interaction between the robot and its environment. This paper proposes a novel graph-based visual manipulation relationship reasoning network (GVMRN) that directly outputs object relationships and the manipulation order. The GVMRN model first extracts features and detects objects from RGB images, and then adopts a graph convolutional network (GCN) to aggregate contextual information between objects. To improve the efficiency of relationship reasoning, a relationship filtering network is built to reduce the number of object pairs before reasoning. Experiments on the Visual Manipulation Relationship Dataset (VMRD) show that our model significantly outperforms previous methods at reasoning about object relationships in object-stacking scenes. The GVMRN model is also tested on images we collected and is applied on a robotic grasping platform. The results demonstrate the generalization and applicability of our method in real environments.
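The abstract describes a three-stage pipeline: per-object features from a detector, a filtering network that prunes object pairs, and GCN-based reasoning over the surviving pairs. The PyTorch sketch below illustrates the general shape of such a pipeline; the layer sizes, the keep-threshold, the mean-aggregation GCN variant, and the three relation classes are all assumptions for illustration, not the authors' implementation.

```python
# Minimal GVMRN-style sketch (illustrative assumptions throughout; the paper's
# actual detector, filter criterion, GCN design, and label set may differ).
import torch
import torch.nn as nn


class RelationFilter(nn.Module):
    """Scores object pairs; low-scoring pairs are dropped before GCN reasoning."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, node_feats: torch.Tensor, pairs: torch.Tensor) -> torch.Tensor:
        # node_feats: (N, D) per-object features; pairs: (P, 2) index pairs.
        paired = torch.cat([node_feats[pairs[:, 0]], node_feats[pairs[:, 1]]], dim=-1)
        return torch.sigmoid(self.scorer(paired)).squeeze(-1)  # (P,) keep-scores


class SimpleGCNLayer(nn.Module):
    """One round of mean-aggregation message passing over the object graph."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.linear = nn.Linear(feat_dim, feat_dim)

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: (N, N) adjacency with self-loops; row-normalize, then transform.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.linear((adj / deg) @ node_feats))


class GVMRNSketch(nn.Module):
    """Detector features in, per-pair manipulation-relation logits out."""

    REL_CLASSES = 3  # e.g. parent-of, child-of, no-relation (assumed label set)

    def __init__(self, feat_dim: int = 256, keep_thresh: float = 0.5):
        super().__init__()
        self.filter = RelationFilter(feat_dim)
        self.gcn = SimpleGCNLayer(feat_dim)
        self.rel_head = nn.Linear(2 * feat_dim, self.REL_CLASSES)
        self.keep_thresh = keep_thresh

    def forward(self, node_feats: torch.Tensor):
        n = node_feats.size(0)
        # All ordered pairs of distinct detected objects.
        idx = torch.arange(n)
        pairs = torch.cartesian_prod(idx, idx)
        pairs = pairs[pairs[:, 0] != pairs[:, 1]]
        # Filtering stage: keep only plausible pairs before reasoning.
        keep = self.filter(node_feats, pairs) > self.keep_thresh
        pairs = pairs[keep]
        # Build adjacency from surviving pairs and run one GCN round
        # to enrich each object with context from its neighbors.
        adj = torch.eye(n)
        adj[pairs[:, 0], pairs[:, 1]] = 1.0
        ctx = self.gcn(node_feats, adj)
        # Classify each surviving pair from context-enriched features.
        paired = torch.cat([ctx[pairs[:, 0]], ctx[pairs[:, 1]]], dim=-1)
        return pairs, self.rel_head(paired)


feats = torch.randn(4, 256)  # stand-in for detector features of 4 objects
pairs, logits = GVMRNSketch()(feats)
print(pairs.shape, logits.shape)
```

The filtering stage is what makes reasoning cheaper: the GCN and relation head only touch pairs the filter keeps, so cost shrinks from all N·(N-1) ordered pairs to the plausible subset. A manipulation order can then be read off the predicted relations, e.g. by topologically sorting the parent-of edges.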
