An object perception and positioning method via deep perception learning object detection
Concurrency and Computation: Practice and Experience (IF 1.5), Pub Date: 2021-01-24, DOI: 10.1002/cpe.6203
Limei Xiao, Yachao Zhang, Weizhe Gao, Dayou Xu, Ce Li

One of the fundamental problems in building a perception system for a robot is providing semantic information together with positioning in three-dimensional (3D) space. However, two-dimensional (2D) object detectors can only provide semantic information and pixel coordinates in the 2D image plane, while a depth image reflects relative distance but offers a poor semantic description of the object. In this article, a novel object perception and positioning method via deep perception learning object detection is proposed. First, an RGB image and a depth image are collected with a Kinect, and the depth image is processed to ensure the robustness of the model. Then, the object's semantics and pixel location are obtained from the RGB image through a deep-learning-based object detector. Finally, object size measurement and 3D positioning are realized by combining the pixel location with the depth information. As a result, the advantages of a highly accurate 2D detector and accurate depth information are effectively combined in our model. Experimental results demonstrate that our method achieves high accuracy in size measurement and spatial positioning.
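The final step the abstract describes — combining a detection's pixel location with depth to obtain 3D position and metric size — is standard pinhole-camera back-projection. The following is a minimal sketch of that step, not the authors' implementation; the intrinsic parameters (`fx`, `fy`, `cx`, `cy`) are placeholder values that would come from the Kinect's calibration in practice.

```python
def pixel_to_point(u, v, depth, fx, fy, cx, cy):
    """Back-project a pixel (u, v) with metric depth (meters) into
    camera-frame 3D coordinates using the pinhole camera model."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)


def box_metric_size(x1, y1, x2, y2, depth, fx, fy):
    """Estimate the metric width and height of a detection box,
    assuming the whole object lies at roughly the same depth."""
    width = (x2 - x1) * depth / fx
    height = (y2 - y1) * depth / fy
    return (width, height)


# Example with assumed intrinsics (fx = fy = 500 px, principal point
# at the image center of a 640x480 frame):
center = pixel_to_point(320, 240, 2.0, 500.0, 500.0, 320.0, 240.0)
size = box_metric_size(270, 190, 370, 290, 2.0, 500.0, 500.0)
```

A pixel at the principal point maps to a point straight ahead of the camera at the measured depth, and a 100-pixel-wide box seen at 2 m with a 500-pixel focal length corresponds to an object about 0.4 m wide.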

Updated: 2021-01-24