Deep instance segmentation and 6D object pose estimation in cluttered scenes for robotic autonomous grasping
Industrial Robot (IF 1.8) Pub Date: 2020-04-20, DOI: 10.1108/ir-12-2019-0259
Yongxiang Wu , Yili Fu , Shuguo Wang

Purpose

This paper aims to design a deep neural network for object instance segmentation and six-dimensional (6D) pose estimation in cluttered scenes and apply the proposed method in real-world robotic autonomous grasping of household objects.

Design/methodology/approach

A novel deep learning method is proposed for instance segmentation and 6D pose estimation in cluttered scenes. An iterative pose refinement network is integrated with the main network to obtain more robust final pose estimates for robotic applications. To train the network, a technique is presented that rapidly generates abundant annotated synthetic data, consisting of RGB-D images and object masks, without any hand-labeling. For robotic grasping, offline grasp planning based on an eigengrasp planner is performed and combined with online object pose estimation.
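The combination of offline grasp planning with online pose estimation amounts to a change of reference frame: a grasp pose planned in the object's coordinate frame is mapped into the camera frame by composing it with the estimated 6D object pose. The sketch below illustrates this composition with homogeneous transforms; the specific rotation and translation values are hypothetical placeholders, not from the paper.

```python
import numpy as np

def pose_to_matrix(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Estimated object pose in the camera frame (hypothetical values):
# a 90-degree rotation about z, half a meter in front of the camera.
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
T_cam_obj = pose_to_matrix(Rz, np.array([0.1, 0.0, 0.5]))

# Offline-planned grasp pose expressed in the object frame (hypothetical):
# an approach 5 cm along the object's z-axis.
T_obj_grasp = pose_to_matrix(np.eye(3), np.array([0.0, 0.0, 0.05]))

# Online step: chain the transforms to obtain the grasp pose in the camera frame,
# which can then be converted to the robot base frame via hand-eye calibration.
T_cam_grasp = T_cam_obj @ T_obj_grasp
```

Because the grasps are planned once per object model, only this cheap matrix composition runs online, which is what makes the offline/online split attractive for real-time grasping.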

Findings

Experiments on standard pose benchmark data sets showed that the method achieves better pose estimation accuracy and time efficiency than state-of-the-art methods with depth-based ICP refinement. The proposed method was also evaluated on a seven-DOF Kinova Jaco robot with an Intel RealSense RGB-D camera; the grasping results show that the method is accurate and robust enough for real-world robotic applications.

Originality/value

A novel 6D pose estimation network based on an instance segmentation framework is proposed, and a neural network-based iterative pose refinement module is integrated into the method. The proposed method exhibits satisfactory pose estimation accuracy and time efficiency for robotic grasping.




Updated: 2020-04-20