Generative Robotic Grasping Using Depthwise Separable Convolution
Computers & Electrical Engineering ( IF 4.3 ) Pub Date : 2021-07-14 , DOI: 10.1016/j.compeleceng.2021.107318
Yadong Teng, Pengxiang Gao

In this paper, we present an end-to-end deep-learning approach to grasp detection. Our method processes sampled discrete depth images in real time and addresses the long computation times and registration difficulties caused by object modelling and global search in traditional methods. It uses depthwise convolution and pointwise convolution to model the relations among channels and directly parameterizes a grasp quality value for every pixel. From an input image, the method computes a rectangular grasping box to generate a grasping pose. In an experimental evaluation on the Jacquard dataset, we compared the proposed method with baseline methods; its accuracy improved by 5% to 7%, showing that it can effectively predict grasp points on objects from novel classes.
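
The abstract describes the architecture only at a high level, so the following is a minimal illustrative sketch, not the authors' network: a depthwise separable convolution block (per-channel depthwise filtering followed by a pointwise 1x1 convolution over the channels) feeding a head that predicts a grasp quality value for every pixel. The extra angle and width outputs, the channel sizes, and the names DepthwiseSeparableConv and GraspHead are assumptions added to show how a rectangular grasp box could be read off the dense maps; the example assumes PyTorch.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    # Depthwise convolution filters each channel spatially; the pointwise
    # 1x1 convolution then models the relations among the channels.
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2,
                                   groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

class GraspHead(nn.Module):
    # Hypothetical head: dense per-pixel grasp quality plus angle (cos/sin)
    # and gripper width maps; a rectangular grasp box can be taken at the
    # highest-quality pixel.
    def __init__(self, in_ch=1, feat_ch=32):
        super().__init__()
        self.backbone = nn.Sequential(
            DepthwiseSeparableConv(in_ch, feat_ch),
            DepthwiseSeparableConv(feat_ch, feat_ch),
        )
        self.quality = nn.Conv2d(feat_ch, 1, kernel_size=1)  # grasp quality per pixel
        self.angle = nn.Conv2d(feat_ch, 2, kernel_size=1)    # cos(2θ), sin(2θ)
        self.width = nn.Conv2d(feat_ch, 1, kernel_size=1)    # gripper opening width

    def forward(self, depth):
        f = self.backbone(depth)
        return torch.sigmoid(self.quality(f)), self.angle(f), self.width(f)

# Example: a single-channel depth image in, dense per-pixel maps out.
quality, angle, width = GraspHead()(torch.randn(1, 1, 300, 300))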




Updated: 2021-07-14